298 | Breaking Analysis | Resetting GPU Depreciation — Why AI Factories Bend, But Don’t Break, Useful Life Assumptions

In January 2020, Amazon changed the depreciation schedule for its server assets from three years to four. The accounting move reflected Amazon's finding that it could extend the useful life of its servers beyond three years: Moore's Law was waning, and at Amazon's scale a diverse set of use cases allowed it to squeeze more value out of its EC2 assets for a longer period of time. Other hyperscalers followed suit, and today the big three all assume six-year depreciation schedules for server assets.
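To make the accounting mechanics concrete, here is a minimal sketch of straight-line depreciation showing why a longer assumed useful life lowers the annual expense that hits the income statement. The $10B fleet cost and zero salvage value are illustrative assumptions, not figures from any hyperscaler's filings.

```python
# Minimal sketch: straight-line depreciation of a server fleet under different
# useful-life assumptions. Fleet cost and salvage value are hypothetical.

def annual_depreciation(cost: float, useful_life_years: int, salvage: float = 0.0) -> float:
    """Straight-line method: expense the same amount in each year of useful life."""
    return (cost - salvage) / useful_life_years

fleet_cost = 10_000_000_000  # hypothetical $10B server fleet

for life in (3, 4, 6):
    expense = annual_depreciation(fleet_cost, life)
    print(f"{life}-year schedule: ${expense / 1e9:.2f}B depreciation expense per year")

# Output: 3-year schedule ~$3.33B/year vs. 6-year schedule ~$1.67B/year,
# i.e., roughly half the annual expense for the same assets, lifting reported margins.
```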
Microsoft Ignite 2025 Wrap-Up: From Copilots to Agents, Microsoft Races to the Frontier Firm

Microsoft Ignite 2025 reveals Microsoft’s Agentic AI vision, unifying data, security, and cloud operations for the Frontier Firm era.
Palo Alto Networks’ $3.35B Chronosphere Bet Signals the Next Era of AI Observability

Palo Alto’s $3.35B Chronosphere deal signals a unified era of AI observability, security, and agent-driven operations for modern enterprises.
Fueling Real-Time AI with Federated Queries

How federated queries, Trino, and Data-as-a-Product architectures solve the stale data crisis and unlock real-time AI.
Unified Gateways, Hybrid Cloud Optionality, and the Kubernetes Ingress Shift: What IT Leaders Need to Know

Explore how unified gateways, hybrid cloud optionality, and the Ingress NGINX shift are reshaping Kubernetes and enterprise strategy.
Defining Sovereign AI for the Enterprise Era

Explore how sovereign AI enables control, compliance, and offline safety across hybrid and air-gapped environments.
297 | Breaking Analysis | AI Factories Face a Long Payback Period but Trillions in Upside

Our latest forecast indicates that it will take a decade or more for AI factory operators and model builders to reach breakeven on their massive capital outlays. Our projections call for nearly $4T in cumulative CAPEX by 2030, with just under $2T in cumulative AI revenue generated in that timeframe. We see the crossover point occurring early next decade (2032 on a run-rate basis), with gains far surpassing initial investments by the middle of the 2030s. While such projections are subject to constant revision, we believe the size and speed of the initial investments, combined with the challenge of profitably monetizing AI at scale, will require patient capital and long-term thinking to realize durable business results.
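The distinction between a run-rate crossover (annual AI revenue exceeding annual CAPEX) and cumulative payback (total revenue recovering total outlays) drives the timeline above. The toy sketch below illustrates that mechanic; the per-year figures are hypothetical placeholders chosen only so the 2030 cumulative totals land near the ~$4T CAPEX and ~$2T revenue cited here, and they are not the forecast itself.

```python
# Toy model: find the run-rate crossover year and the cumulative payback year
# from hypothetical annual CAPEX and AI revenue series (in $T per year).

capex = {2025: 0.5, 2026: 0.6, 2027: 0.7, 2028: 0.7, 2029: 0.7, 2030: 0.8,
         2031: 0.8, 2032: 0.8, 2033: 0.8, 2034: 0.8, 2035: 0.8}
revenue = {2025: 0.1, 2026: 0.2, 2027: 0.3, 2028: 0.4, 2029: 0.4, 2030: 0.5,
           2031: 0.7, 2032: 1.0, 2033: 1.5, 2034: 2.0, 2035: 2.6}

cum_capex = cum_rev = 0.0
run_rate_crossover = cumulative_payback = None
for year in sorted(capex):
    cum_capex += capex[year]
    cum_rev += revenue[year]
    if run_rate_crossover is None and revenue[year] >= capex[year]:
        run_rate_crossover = year       # annual revenue first exceeds annual spend
    if cumulative_payback is None and cum_rev >= cum_capex:
        cumulative_payback = year       # total revenue first recovers total outlays

print(f"Run-rate crossover year (toy inputs): {run_rate_crossover}")
print(f"Cumulative payback year (toy inputs): {cumulative_payback}")
# With these placeholder inputs, the run-rate crossover lands in 2032 while
# cumulative payback does not arrive until the mid-2030s.
```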
The Ultra-Resilience Mandate of Standardizing Data for the AI Era

Why Postgres is becoming the standard for ultra-resilient, AI-ready data infrastructure built on open source and auto-remediation.
Why AI Chooses Your Brand: Demystifying How AI Discovery and Digital Buyer Journeys Work

AI discovery and AEO are reshaping how B2B buyers find and trust brands — here’s how to ensure yours shows up in AI search. As generative AI assistants like ChatGPT, Claude, Gemini, Grok, and Perplexity replace traditional search, brand visibility now depends on how large language models (LLMs) learn, rank, and recommend. The discussion unpacks a 19-attribute framework across four categories that explains how LLMs discover, learn, and select brands to include in AI-generated answers.