AWS CEO Matt Garman delivered a technically rich keynote at re:Invent 2025. The announcements were numerous and impressive, but the unifying vision for the AI-native cloud was left to the audience to infer.
Then came the AMA.
In our view, the unscripted session revealed the greater depth of Matt Garman – confident, funny, fluent, deeply informed, and capable of speaking to AWS’s business, customers, economics, and technology with a clarity the keynote did not convey. If the keynote was a product launch, the AMA was a more strategically oriented, unstructured briefing where Garman went toe-to-toe with industry analysts.
What remains unclear is the direct connection to the future we believe will define the next decade – i.e., the transition from Cloud Services to Service-as-Software, where data becomes workflow raw material, agents complement human interfaces, and a System of Intelligence (SoI) becomes the new operating layer of enterprise computing.
Garman’s answers repeatedly circled this concept – often implicitly, occasionally explicitly – even though AWS continues to express its strategy through a bottom-up builder’s view rather than the top-down systems vision customers sometimes need. We see drawbacks to purely bottom-up or top-down views, and no clear path to the SoI has been laid out, with the exception of clues from a few vendors.
Below we break down the key signals Garman delivered, what they mean for AWS’s competitive posture, and where we believe AWS still needs to evolve.
The AMA: A Much Clearer Lens Into Garman’s Strategy
Across dozens of questions spanning hardware, silicon, multicloud, inference, developer experience, sovereignty, AI factories, robotics, quantum, and open-weight models, several themes emerged.
1. AWS sees AI infrastructure as a long-duration industrial investment.
Garman was clear that CAPEX intensity is not slowing. AWS is “pushing capital as far out as possible,” securing land and power for 2028–2030 and beyond. Their confidence is grounded in enterprise ROI – every CIO Garman spoke with sees a clear payback path for AI investments. This reinforces our belief that AI factories will take 8–10 years to fully return capital but will ultimately transform industry economics.
2. Training and inference are converging – elongating the useful life of AI hardware.
This author asked a question about depreciation cycles for AI infrastructure, which elicited a series of detailed insights.
Garman acknowledged:
- AI hardware cycles are more volatile than CPUs.
- Training clusters increasingly support inference.
- Smaller models run beautifully on prior-generation silicon.
- AWS intentionally amortizes AI hardware conservatively because the future performance curve is uncertain.
In our view, this supports a key thesis we’ve been advancing:
today’s training systems become tomorrow’s inference engines, increasing the revenue per token generated over a chip’s useful life – a foundational principle of the AI Factory economic model.
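A simple model makes the thesis concrete. The sketch below uses purely illustrative numbers (the capex and revenue figures are our assumptions, not AWS or Nvidia data) to show how redeploying a training chip for inference changes its lifetime return on capital:

```python
# Hypothetical model of AI accelerator lifecycle economics.
# All figures are illustrative assumptions, not vendor data.

def lifetime_revenue(capex, training_years, inference_years,
                     training_rev_per_year, inference_rev_per_year):
    """Total revenue a chip generates if it is redeployed for
    inference after its frontier-training years end."""
    return (training_years * training_rev_per_year
            + inference_years * inference_rev_per_year)

CAPEX = 30_000  # assumed cost per accelerator, USD

# Scenario A: chip retired after a 3-year, training-only life.
rev_a = lifetime_revenue(CAPEX, 3, 0, 20_000, 0)

# Scenario B: the same chip then serves smaller-model inference for
# 4 more years at reduced (but nonzero) revenue per year.
rev_b = lifetime_revenue(CAPEX, 3, 4, 20_000, 8_000)

print(f"Training-only return:      {rev_a / CAPEX:.1f}x capex")
print(f"Training+inference return: {rev_b / CAPEX:.1f}x capex")
```

Under these assumptions the inference tail adds roughly a full turn of capital on top of the training-only case, which is why conservative amortization plus chip reuse can still pencil out over an 8–10 year AI factory horizon.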
3. AWS’s chip strategy is pragmatic, not ideological.
He reaffirmed AWS will:
- Buy “a crap ton of Nvidia”
- Deepen investment in Trainium
- Use whatever networking tech (NVLink, Ultra Ethernet) delivers best performance
- Pursue multiple fab partners, including Intel, TSMC, and eventually Japanese fabs
AWS wants control, margin, and performance – but not at the expense of customer choice. This is aligned with the company’s historic pattern: own the parts of the stack where customer experience depends on undifferentiated heavy lifting, and partner everywhere else.
4. Hybrid and multicloud are now first-class citizens.
Garman acknowledged that AWS’s stance has evolved:
- A decade ago AWS resisted multicloud.
- Today AWS accepts the reality that customers will run workloads in multiple places.
- AWS intends to provide the networking, security, and identity layers across those estates.
This is a notable shift and it positions AWS not just as a cloud, but increasingly as a control fabric.
5. Developer experience is becoming strategic.
AWS sometimes ceded developer tooling to the ecosystem. Garman admitted as much. But AI agents and code-generation capabilities now create a wedge for AWS to re-enter the tooling game with differentiated value.
We believe this is one of the most important strategic initiatives for AWS in the past decade. The closer AWS sits to the build process – not just deployment – the more it becomes the operating environment of enterprise AI workflows.
6. AWS is betting that open-weight models without training data are not sustainable long-term.
Garman’s critique was candid and he emphasized the following:
- There is a limited business model for open-weight LLMs as currently structured.
- No model provider, except AWS, exposes training data; therefore these are not truly open-source systems.
- Someone eventually must charge.
- AWS will not subsidize foundation models indefinitely without the ability to create differentiated customer value.
We believe this signals a future where foundation models become platform features, not monetization engines – shifting value to systems, workflows, and data gravity.
7. Sovereignty is now architecture, not policy.
Garman explained AWS’s European Sovereign Cloud with conviction:
- Fully isolated infrastructure
- EU-only operators
- A unique governance model with its own board
- No scenario where extraterritorial demands could access customer data – “If President Trump says give me that data, I can’t…even if I wanted to…”
This structural separation is more credible than the original Outposts infrastructure AWS has offered – Outposts on steroids. What remains to be seen is the degree to which sovereign infrastructure keeps pace with best-of-breed services.
8. Physical AI and robotics will be huge — but are early.
AWS is optimistic but also realistic. Robots lack the data corpus that supercharged LLMs. We agree that robotics is at least 2–3 years behind LLMs, but the convergence is inevitable in our view. Amazon’s fulfillment centers give it a proprietary dataset no other hyperscaler can match, and Amazon is, if not the largest, one of the most substantial consumers of robots in its warehouses.
9. Quantum remains research-grade for now.
He was blunt: there’s no reason for enterprises to engage yet. Researchers only.
The Missing Narrative: AWS Frames the Future From the Bottom Up
Despite the richness of the AMA, we believe AWS still struggles to articulate the future state of cloud computing that customers are organically moving toward.
In our view, enterprise architecture is shifting into a new phase:
**From: Cloud as a Collection of Services (Build)
To: Cloud as a System of Intelligence (Operate)**
This distinction is relevant because:
- Services require expertise; Systems of Intelligence require context.
- Services expose primitives; Systems expose workflows.
- Services scale through APIs; Systems scale through data + policy + intent.
AWS continues to tell a tooling and scaffolding story. But in our view, enterprises increasingly need a workflow story – one that abstracts AWS’s complexity into an intelligence fabric orchestrating:
- data movement
- identity
- agents
- governance
- lineage
- cost
- optimization
- autonomy
The brain of Service-as-Software is the system of intelligence (SoI) – the framework we’ve been developing:
Software stops being the thing you install; it becomes the output of a higher-order system that generates, adapts, and executes workflows based on data and business intent.
Garman touched on this future repeatedly – inadvertently at times – when discussing:
- agent-based software development
- training/inference unification
- multicloud fabric
- data harmonization challenges
- foundational metadata
- the need for “top-down push” inside Amazon itself
But AWS has not yet distilled this into a compelling customer narrative.
That remains the opportunity.
Where AWS Must Evolve
1. A robust System-of-Intelligence layer
AWS needs to present a unifying architecture that ties Bedrock, SageMaker, agents, metadata, governance, and orchestration into a coherent operating system for enterprise AI. Neptune may be a key ingredient of our vision, enabling a 4D map of the enterprise that injects process knowledge into the system.
2. A data harmonization strategy
Customers need a clear answer to the question:
How will AWS unify analytical, operational, and semantic metadata into a single intelligence layer?
This is the linchpin of Service-as-Software.
3. A workflow abstraction
AWS must provide a higher-level fabric that allows lines of business – not just developers – to express intent and let the system compile workflows from data + models + agents.
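To make the abstraction concrete, here is a minimal conceptual sketch of an intent-to-workflow "compiler." Every name in it is hypothetical – nothing here corresponds to a real AWS API or product; it only illustrates how a line-of-business intent could be expanded into governed workflow steps:

```python
# Hypothetical sketch of an intent-to-workflow compiler.
# All names are invented for illustration; this is not an AWS API.

from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str                       # business outcome, in plain language
    data_sources: list = field(default_factory=list)  # governed datasets
    constraints: list = field(default_factory=list)   # policy/cost guardrails

def compile_workflow(intent: Intent) -> list:
    """Expand a business intent into ordered workflow steps by
    combining data, models, and agents under the stated constraints."""
    steps = [f"harmonize:{src}" for src in intent.data_sources]
    steps.append(f"plan_agents:{intent.goal}")
    steps += [f"enforce:{c}" for c in intent.constraints]
    steps.append("execute_and_monitor")
    return steps

wf = compile_workflow(Intent(
    goal="reduce churn in EU accounts",
    data_sources=["crm", "billing"],
    constraints=["eu_data_residency", "monthly_cost_cap"],
))
print(wf)
```

The point of the sketch is the inversion: the user states an outcome and guardrails, and the system – not a developer – sequences data harmonization, agent planning, and policy enforcement.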
4. A vision beyond “builder culture.”
Builders matter. But the future belongs to operators of intelligent systems, not just writers of code.
Conclusion: The AMA Filled Gaps the Keynote Didn’t Articulate
In our view:
- Garman’s keynote showed the AWS machine.
- His AMA explained it.
- But the future requires continuing to re:Invent it.
AWS is executing exceptionally well on infrastructure, chips, models, and developer tooling. The AMA demonstrated a clear command of the business and a CEO leaning confidently into the AI-native era.
But the greatest opportunity – and the gap Garman has not yet explicitly addressed – is the shift from cloud as infrastructure to cloud as intelligence.
When AWS connects its unparalleled bottom-up strengths to that top-down vision, the company has an opportunity to redefine the cloud again. But it’s not alone: Microsoft uses abstraction to simplify, Google leverages world-class technology and AI chops to compete, and an emerging set of neoclouds positions for viability.
One thing was clear in the Garman keynote and his AMA – AWS is a serious player in AI and has the customer connections, listening engine, technology, and business acumen to remain a major player indefinitely.

