The Data Infrastructure Lag
With 88% of enterprises now using AI, the biggest hurdle to scaling these initiatives is data infrastructure and readiness. This episode, featuring insights from Karthik Ranganathan, CEO and co-founder of Yugabyte, and Kirill Gavriluk, leader of Azure Cosmos DB at Microsoft, explores a major market trend: the consolidation around Postgres as the single standard for both relational and non-relational data. This standardization, accelerated by the demand for ultra-resilience and auto-remediation against unexpected AI-driven workload shifts, is becoming critical for survival in an exponentially accelerating development landscape.
Ultra-Resilience: Beyond Simple Uptime
The traditional definition of database resilience—simply replacing a failed component—is no longer sufficient. In a world of always-on applications and exponentially increasing speed, resilience must become ultra-resilience.
“We’re going from the days of resilience to the days of ultra resilience, where it’s not just outages, it’s about how do you ensure uptime risk mitigation, et cetera, in this fast paced era.”
According to Karthik, this new level of resilience is driven by:
- Unexpected Workload Shifts: Massive, bursty, and unpredictable loads from AI queries (the ebbs and flows of real-time applications).
- Faster Cadence of Releases: Compressed release cycles mean humans make more mistakes, so the system must automatically recover from those errors as well as from frequent patches and upgrades.
- Commodity Infrastructure: Running on cost-efficient, commodity hardware means dealing with performance degradation, not just outright outages.
Because speed and scale increase the probability of failure, businesses can no longer afford human intervention to fix every problem. The system must be architected for auto-remediation to mitigate risk and protect business reputation.
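As a deliberately simplified sketch of what auto-remediation means in practice, the control loop below treats a missed latency SLO as a trigger for recovery and acts without a human operator. The thresholds and the probe/remediate helpers are invented for illustration; they are not a real Yugabyte or Azure API.

```python
import random
import time

LATENCY_SLO_MS = 50      # hypothetical p99 latency budget per node
CHECK_INTERVAL_S = 10    # how often the control loop runs

def probe_p99_latency_ms(node: str) -> float:
    # Stand-in for querying an observability backend for the node's p99 latency.
    return random.uniform(5.0, 80.0)

def remediate(node: str) -> None:
    # Stand-in for an orchestration action: drain, re-provision, and rebalance the node.
    print(f"auto-remediating degraded node {node}")

def control_loop(nodes: list[str]) -> None:
    # Key idea: degradation (a missed SLO), not just an outright outage,
    # triggers recovery, and no human intervention is required.
    while True:
        for node in nodes:
            if probe_p99_latency_ms(node) > LATENCY_SLO_MS:
                remediate(node)
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    control_loop(["node-1", "node-2", "node-3"])
```

In a real system the probe would read from monitoring pipelines and the remediation step would be an orchestrated drain-and-replace, but the shape is the same: observe, decide, act, without waiting for a person.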
The Consolidation of Standards: Why Postgres is Winning
Recent industry moves, such as Databricks acquiring Neon and Snowflake acquiring Crunchy Data, signal a massive consolidation toward Postgres as the single standard API for data access.
Kirill explains that this trend is fueled by the success of open source and the need for standardization across all data types:
- Non-Relational Standardization: Unlike relational data, which has long been standardized by SQL/Postgres, NoSQL/non-relational data lacked a universal standard. Microsoft’s decision to open source DocumentDB (a MongoDB API-compatible database built on Postgres) is a key move to bring non-relational data into the standard stack.
- Leveraging Ecosystem Investment: Every major player—Yugabyte, Microsoft, AWS, and Google—is heavily investing in Postgres. By building on this open, extensible foundation, users gain access to the collective innovation (including new features like vectors for AI) without vendor lock-in.
This standardization means data, whether transactional, analytical, or semi-structured, can live in different locations yet be accessed through a single standard interface. That simplifies operations and security, and ultimately increases velocity.
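As a rough illustration of that single interface, the sketch below uses the standard Postgres protocol (via psycopg2) to touch relational, JSONB, and vector data in one session. The connection string and the `orders`, `events`, and `docs` tables are hypothetical, and the vector query assumes the pgvector extension is installed.

```python
import psycopg2  # a widely used Postgres driver

# Placeholder connection details -- the point is that one wire protocol and one
# query interface cover relational, semi-structured, and vector data.
conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
cur = conn.cursor()

# Relational (transactional) data: plain SQL.
cur.execute("SELECT id, total FROM orders WHERE status = %s", ("open",))
open_orders = cur.fetchall()

# Semi-structured data: the same interface, using Postgres JSONB operators.
cur.execute(
    "SELECT payload->>'user_id' FROM events WHERE payload @> %s::jsonb",
    ('{"type": "login"}',),
)
login_users = cur.fetchall()

# AI workloads: the same interface again, assuming pgvector is installed
# ("CREATE EXTENSION vector") and docs has an "embedding vector(3)" column.
cur.execute(
    "SELECT id FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.1, 0.2, 0.3]",),
)
nearest_docs = cur.fetchall()

cur.close()
conn.close()
```

Because the interface is the open Postgres API, a snippet like this would work against any Postgres-compatible backend; only the connection string changes.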
Observability + Open Source = Auto-Remediation
Organizations currently suffer from immense tool sprawl: 75% of respondents use 6 to 15 different tools, and 54% favor a unified approach, so the need for simplicity is clear. The convergence of open source, observability, and auto-remediation directly addresses this complexity.
The dual forces at play are:
| Standardization (APIs) | Observability (Auto-Remediation) |
| --- | --- |
| **Goal:** Increase velocity for faster development and easier uptake. | **Goal:** Handle the exposure caused by speed and scale. |
| **Mechanism:** Use a standard API (Postgres) so developers don’t have to learn new technologies. | **Mechanism:** Systems automatically remediate to prevent revenue loss and reputation damage. |
Kirill emphasizes that the primitives and declarative capabilities an application depends on must be open source to guarantee portability. Competition then shifts to how well each platform (Microsoft, Yugabyte, etc.) implements those primitives, such as the speed of remediation or the cost of vectorization, rather than to the primitives themselves.
Karthik reinforces that consolidation reduces the many parallel investment streams and the operational pain of managing seven or eight siloed tools.
Preparing for Exponential AI Growth
To prepare for future AI workloads, enterprises must modernize their stack now under two guiding principles:
- Always Bet on Open: Because open source receives the collective innovation of the entire industry, it inevitably becomes the center of innovation; closed commercial software will always be playing catch-up.
- Build for Change, Not to Last: Given the exponential speed of AI, the old adage of “build to last” no longer applies. Infrastructure must be modular, pluggable, and interchangeable (a minimal sketch follows this list).
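One way to read “build for change” in code: have the application depend on a small interface so an implementation can be swapped without touching application logic. Everything below (the `VectorStore` protocol, its methods, and the in-memory backend) is a hypothetical example, not an existing library.

```python
from typing import Protocol

class VectorStore(Protocol):
    # Minimal, swappable contract the application depends on.
    def upsert(self, doc_id: str, embedding: list[float]) -> None: ...
    def search(self, embedding: list[float], k: int) -> list[str]: ...

class InMemoryVectorStore:
    # One interchangeable implementation; a Postgres/pgvector-backed class could
    # satisfy the same Protocol with no changes to application code.
    def __init__(self) -> None:
        self._docs: dict[str, list[float]] = {}

    def upsert(self, doc_id: str, embedding: list[float]) -> None:
        self._docs[doc_id] = embedding

    def search(self, embedding: list[float], k: int) -> list[str]:
        # Naive nearest-neighbour search by squared Euclidean distance.
        def dist(v: list[float]) -> float:
            return sum((a - b) ** 2 for a, b in zip(embedding, v))
        return sorted(self._docs, key=lambda d: dist(self._docs[d]))[:k]

def index_and_query(store: VectorStore) -> list[str]:
    # Application code sees only the interface, so the backend stays pluggable.
    store.upsert("doc-1", [0.1, 0.2])
    store.upsert("doc-2", [0.9, 0.8])
    return store.search([0.1, 0.25], k=1)
```

Keeping the contract small and open is what makes components interchangeable as the stack evolves.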
The next three years are critical. As AI enters its exponential growth phase, it is difficult to visualize what the stack will look like. The core advice: standardize on open source, get the foundational system working, and be ready to leverage innovation as soon as it unlocks business value.

