Databricks' serverless database slashes app development from months to days as companies prep for agentic AI

Five years ago, Databricks coined the term 'data lakehouse' to describe a new type of data architecture that combines a data lake with a data warehouse. Both the term and the architecture are now commonplace across the data industry for analytics workloads.

Now, Databricks is once again looking to create a new category with its Lakebase service, which becomes generally available today. While the data lakehouse construct deals with OLAP (online analytical processing) databases, Lakebase is all about OLTP (online transaction processing) and operational databases. Lakebase has been in public preview since June 2025 and is based on technology Databricks gained via its acquisition of PostgreSQL database provider Neon. It was further enhanced in October 2025 with the acquisition of Mooncake, which brought capabilities that help bridge PostgreSQL with lakehouse data formats.

Lakebase is a serverless operational database that represents a fundamental rethinking of how databases work in the age of autonomous AI agents. Early adopters, including easyJet, Hafnia and Warner Music Group, are cutting application delivery times by more than half, and in some cases by over 90%, but the deeper architectural innovation positions databases as ephemeral, self-service infrastructure that AI agents can provision and manage without human intervention.

This isn't just another managed Postgres service. Lakebase treats operational databases as lightweight, disposable compute running on data lake storage rather than monolithic systems requiring careful capacity planning and database administrator (DBA) oversight.

 "Really, for the vibe coding trend to take off, you need developers to believe they can actually create new apps very quickly, but you also need the central IT team, or DBAs, to be comfortable with the tsunami of apps and databases," Databricks co-founder Reynold Xin told VentureBeat. "Classic databases simply won't scale to that because they can't afford to put a DBA per database and per app."

92% faster delivery: From two months to five days

The production numbers demonstrate immediate impact beyond the agent provisioning vision. Hafnia reduced delivery time for production-ready applications from two months to five days — a 92% reduction — using Lakebase as the transactional engine for its internal operations portal. The shipping company moved beyond static BI reports to real-time business applications for fleet, commercial and finance workflows.

EasyJet consolidated more than 100 Git repositories into just two and cut development cycles from nine months to four months — a 56% reduction — while building a web-based revenue management hub on Lakebase to replace a decade-old desktop app and one of Europe's largest legacy SQL Server environments.

Warner Music Group is moving insights directly into production systems using the unified foundation, while Quantum Capital Group uses it to maintain consistent, governed data for identifying and evaluating oil and gas investments — eliminating the data duplication that previously forced teams to maintain multiple copies in different formats.

The acceleration stems from the elimination of two major bottlenecks: database cloning for test environments and ETL pipeline maintenance for syncing operational and analytical data.

Technical architecture: Why this isn't just managed Postgres

Traditional databases couple storage and compute — organizations provision a database instance with attached storage and scale by adding more instances or storage. AWS Aurora innovated by separating these layers using proprietary storage, but the storage remained locked inside AWS's ecosystem and wasn't independently accessible for analytics.

Lakebase takes the separation of storage and compute to its logical conclusion by putting storage directly in the data lakehouse. The compute layer runs essentially vanilla PostgreSQL — maintaining full compatibility with the Postgres ecosystem — but every write goes to lakehouse storage in formats that Spark, Databricks SQL and other analytics engines can immediately query without ETL.
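Because the compute layer is standard Postgres, existing drivers and ORMs work unchanged; the lakehouse integration happens beneath the storage layer rather than in the client. Here is a minimal sketch of that write path in Python, using the standard psycopg2 driver with placeholder connection details and a hypothetical orders table:

```python
import psycopg2  # standard PostgreSQL driver; nothing Lakebase-specific needed

# Placeholder connection details for illustration only.
conn = psycopg2.connect(
    host="my-lakebase-instance.example.com",
    dbname="appdb",
    user="app_user",
    password="...",
    sslmode="require",
)

# An ordinary transactional write against a hypothetical orders table;
# the "with conn" block commits on success.
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
        (42, 19.99),
    )

# Per the architecture described above, the committed row lands in
# lakehouse storage, where Spark or Databricks SQL can query it without
# a separate ETL pipeline.
conn.close()
```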

"The unique technical insight was that data lakes decouple storage from compute, which was great, but we need to introduce data management capabilities like governance and transaction management into the data lake," Xin explained. "We're actually not that different from the lakehouse concept, but we're building lightweight, ephemeral compute for OLTP databases on top."

Databricks built Lakebase with the technology it gained from the acquisition of Neon. But Xin emphasized that Databricks significantly expanded Neon's original capabilities to create something fundamentally different.

"They didn’t have the enterprise experience, and they didn’t have the cloud scale," Xin said. "We brought the Neon team's novel architectural idea together with the robustness of the Databricks infrastructure and combined them. So now we've created a super scalable platform."

From hundreds of databases to millions: Built for agentic AI

Xin outlined a vision, tied directly to the economics of AI coding tools, that explains why the Lakebase construct matters beyond current use cases. As development costs plummet, enterprises will shift from buying hundreds of SaaS applications to building millions of bespoke internal applications.

"As the cost of software development goes down, which we're seeing today because of AI coding tools, it will shift from the proliferation of SaaS in the last 10 to 15 years to the proliferation of in-house application development," Xin said. "Instead of building maybe hundreds of applications, they'll be building millions of bespoke apps over time."

This creates an impossible fleet management problem with traditional approaches. You cannot hire enough DBAs to manually provision, monitor and troubleshoot thousands of databases. Xin's solution: Treat database management itself as a data problem rather than an operations problem.

Lakebase stores all telemetry and metadata — query performance, resource utilization, connection patterns, error rates — directly in the lakehouse, where it can be analyzed using standard data engineering and data science tools. Instead of configuring dashboards in database-specific monitoring tools, data teams query telemetry data with SQL or analyze it with machine learning models to identify outliers and predict issues.

"Instead of creating a dashboard for every 50 or 100 databases, you can actually look at the chart to understand if something has misbehaved," Xin explained. "Database management will look very similar to an analytics problem. You look at outliers, you look at trends, you try to understand why things happen. This is how you manage at scale when agents are creating and destroying databases programmatically."

The implications extend to autonomous agents themselves. An AI agent experiencing performance issues could query the telemetry data to diagnose problems — treating database operations as just another analytics task rather than requiring specialized DBA knowledge. Database management becomes something agents can do for themselves using the same data analysis capabilities they already have.
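In practice, that turns fleet health into an ordinary query. The sketch below uses a small synthetic DataFrame as a stand-in for the telemetry Lakebase writes to the lakehouse (the column names are assumptions, not an actual Lakebase schema) and applies the kind of outlier check a data team, or an agent, would run over any other metric:

```python
import pandas as pd

# Synthetic stand-in for the per-database telemetry the article says
# Lakebase stores in the lakehouse. Column names are assumptions.
telemetry = pd.DataFrame({
    "database_id": ["db-01", "db-01", "db-02", "db-02", "db-03", "db-03"],
    "p99_latency_ms": [12.0, 13.5, 11.8, 12.4, 12.1, 240.0],
})

# A simple fleet-wide z-score: flag readings far outside the norm,
# the same analytics move used for any other outlier-detection task.
latency = telemetry["p99_latency_ms"]
telemetry["z_score"] = (latency - latency.mean()) / latency.std()
outliers = telemetry[telemetry["z_score"].abs() > 2]
print(outliers)  # db-03's 240 ms reading is flagged
```

The same pattern scales from six rows to millions of databases, which is the shift Xin describes: a misbehaving database surfaces as an outlier in a query rather than as a page to an on-call DBA.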

What this means for enterprise data teams

The Lakebase construct signals a fundamental shift in how enterprises should think about operational databases — not as precious, carefully managed infrastructure requiring specialized DBAs, but as ephemeral, self-service resources that scale programmatically like cloud compute. 

This matters whether or not autonomous agents materialize as quickly as Databricks envisions, because the underlying architectural principle — treating database management as an analytics problem rather than an operations problem — changes the skill sets and team structures enterprises need.

Data leaders should pay attention to the convergence of operational and analytical data happening across the industry. When writes to an operational database are immediately queryable by analytics engines without ETL, the traditional boundaries between transactional systems and data warehouses blur. This unified architecture reduces the operational overhead of maintaining separate systems, but it also requires rethinking data team structures built around those boundaries.

When the lakehouse concept launched, competitors rejected it before eventually adopting it themselves. Xin expects the same trajectory for Lakebase.

"It just makes sense to separate storage and compute and put all the storage in the lake — it enables so many capabilities and possibilities," he said.


