Formerly known as Wikibon

From Answers to Actions: Why Google’s Full-Stack AI Bet Is Gaining Momentum – And What Still Needs to Happen – Reflections from an AMA with Thomas Kurian

The Shift From Answers to Actions

Enterprises are rapidly moving from an AI that answers questions and generates content to one that performs tasks and takes actions. According to Thomas Kurian, CEO of Google Cloud, this shift requires a fundamentally different approach to infrastructure and software. Google’s view is that only a tightly integrated portfolio – spanning silicon to applications and everything in between – can effectively support this transition.

In a private “ask me anything” session with industry analysts, Kurian made the case that Google is uniquely positioned to deliver on this vision. He pointed to Google’s strength across every layer of the stack, including its leadership in frontier models – an area where not all hyperscale competitors have parity. At the same time, Kurian explained how Google has significantly strengthened its security capabilities. Its acquisitions of Mandiant for threat intelligence and Wiz for cloud security posture management are two notable examples. These moves reflect an understanding that AI at scale cannot succeed without deeply embedded, end-to-end security and governance.

Kurian’s Premise and the Importance of Choice

In our view, Kurian’s premise has strong merit. However, two critical mandates stand out as essential for Google to convert its technical leadership into sustainable enterprise advantage: 1) despite its excellence in custom silicon with TPUs and CPUs, Google must continue to ensure access to Nvidia’s solutions and ecosystem, as Nvidia sets the industry’s cadence and remains the de facto standard for AI compute in enterprise environments; and 2) Google must accelerate ecosystem expansion and maintain an open posture, offering customers flexibility across chips, models, and data platforms rather than forcing vertically integrated choices.

On both points, our analysis shows that today, so-called accelerated computing is roughly 2X the size of traditional x86-based general-purpose computing. By the end of this decade, our models project that accelerated computing will be 20X the size of general-purpose computing, with Nvidia maintaining its current 80% share. As such, developers will increasingly invest around CUDA to maximize their total available market. In our view, Google recognizes this imperative and is comfortable with its position in the market. Abstraction layers will emerge on top of accelerated computing infrastructure, leaving ample opportunity for players like Google to continue to attract developers and build a robust community. In addition, because of its AI excellence, we believe Google Cloud’s optionality and relative openness make it an attractive destination for developers.

We further believe that Google’s work with TPUs, combined with its foundation model leadership, clearly differentiates it from AWS and Microsoft. While its overall share of the accelerated silicon market will be in the mid-single digits, in many ways that metric doesn’t matter. TPUs fill the demand gaps that Nvidia and others can’t meet and give Google integration advantages, allowing it to optimize up and down the stack, including tight integration with its in-house models. Ironically, while AWS leveraged its custom silicon last decade to reduce costs and differentiate itself with Nitro and Graviton, we believe it is not keeping pace with AI development to the same degree as Google.

For example, Google’s silicon is pacing on a 12-month innovation cycle while AWS is typically on an 18-month cycle. Google’s announcement of TPU 8t for training and 8i for inference recognizes the different business requirements evolving around each of these platforms. AWS, on the other hand, is effectively killing its inference chip (Inferentia) and consolidating development around Trainium. While things can change rapidly in the current environment, we feel Google’s progress is more notable than that of AWS. Microsoft is even further behind when it comes to custom silicon.

Or perhaps AWS knows something we don’t yet understand…

From Analytics to Systems of Action (aka Agency)

What became clear in the discussion is that the industry is moving beyond an obsession with models toward what Google calls “Systems of Action.” The last two years have been defined by model performance and generative capability. Enterprises are now asking how to translate those capabilities into real business outcomes. This shift requires more than better models. It demands real-time data integration, context, policy-aware execution, secure orchestration, and agent-driven workflows that can operate at machine speed.

This is where Google’s full-stack approach becomes compelling in our view. The company is attempting to position itself not simply as a model provider or cloud vendor, but as the platform that orchestrates the entire lifecycle of AI-driven systems. Its portfolio spans custom silicon, global infrastructure, frontier models, data platforms, application layers, and increasingly, integrated security and governance. The value of this approach is not just in the individual components, but in the co-design across layers. Google’s ability to optimize silicon for model workloads, tune networking for AI traffic, and integrate data systems with intelligent orchestration creates a level of vertical integration that is unique in the market and difficult to replicate.

The Data Platform is Google’s Underrated Advantage

That said, we continue to believe an under-appreciated part of Google’s strategy is its data platform. While much of the industry remains rightly focused on models, the real control and differentiation point for enterprise AI is data. Google brings together one of the strongest portfolios in the market, combining BigQuery for analytics, Spanner/Spanner Omni for globally distributed transactions, and AlloyDB for modern relational workloads. This convergence of analytical and transactional capabilities positions Google to play a defining role in what we see as a key part of the new data stack.

Below is a picture of how we see the future software stack evolving:

A Nuance Worth Calling Out – System of Intelligence vs Systems of Action

Where we see an important nuance – largely a matter of terminology rather than substance – is in how Google frames the concept of its System of Action – the layer we call the Cognitive Surface shown above, or what we’ve referred to as the System of Intelligence in previous Breaking Analysis posts. Google tends to position the System of Intelligence as legacy infrastructure associated with the modern data stack (Snowflake, BigQuery, Databricks), emphasizing instead a direct progression to systems of action. However, in our view, the capabilities we associate with the System of Intelligence – namely the harmonization of data, metadata, relationships, and context – remain essential as the foundation for agentic systems.

In practice, these capabilities are very much present within Google’s architecture, particularly through its Knowledge Catalog – which will undoubtedly mature. We often describe this capability in terms of a knowledge graph – or a four-dimensional map of the enterprise that integrates data, relationships, time, and context. This is effectively embedded within Google’s system of action framework. In other words, while the terminology may differ, there is alignment in underlying philosophy. The System of Intelligence has not disappeared; it has been subsumed into the operational fabric of what Google calls the systems of action.

This distinction is important because it highlights a broader industry dynamic. As enterprises move toward agentic architectures, the need for contextualized, governed, and continuously updated data does not diminish – it becomes more critical. Systems of action cannot function without a robust intelligence layer (cognitive surface or SoI) that provides the necessary context for decision-making. Google’s Knowledge Catalog represents a meaningful step in this direction, and in our view, it is one of the more strategically important components of its long-term AI platform.

V1 of Knowledge Catalog Will Evolve

The choice of the term Knowledge Catalog indicates to us that it’s early days for this critical element of the data stack. The north star for firms is a real-time digital representation of enterprise state – essentially a “digital twin” of the organization. As such, Google’s Knowledge Catalog must be a container that harmonizes not only the operational and technical metadata, but also the glossary, the business intelligence metrics, and the underlying application logic, including all the application semantics. Getting to our vision of an enterprise digital twin suggests that the catalog must evolve into an application platform. In this scenario, it becomes the source of truth itself, comprising not just the application logic, but the state of the enterprise in one data store.
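To make the “four-dimensional map” idea concrete, here is a minimal, purely illustrative Python sketch of a catalog entry that keeps data, relationships, time, and business context together in one record. All type and field names here are our own invention for illustration; they do not come from Google’s Knowledge Catalog or any real API.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch: one record harmonizing the four dimensions we
# describe -- data, relationships, time, and context.

@dataclass
class CatalogEntry:
    name: str                        # the data asset, e.g. a table or metric
    kind: str                        # "table", "metric", "glossary_term", ...
    relationships: dict = field(default_factory=dict)   # edges to other entries
    as_of: datetime = field(default_factory=datetime.now)  # time dimension
    context: dict = field(default_factory=dict)         # business semantics

# A BI metric whose glossary definition and application logic live
# alongside its lineage, rather than in separate silos.
revenue = CatalogEntry(
    name="net_revenue",
    kind="metric",
    relationships={"derived_from": "orders", "reported_in": "quarterly_close"},
    context={
        "glossary": "Recognized revenue net of returns and discounts",
        "logic": "SUM(order_total) - SUM(refunds)",
    },
)

def lineage(entry: CatalogEntry) -> list[str]:
    """Walk one hop of the relationship edges (a toy lineage query)."""
    return sorted(entry.relationships.values())

print(lineage(revenue))  # prints: ['orders', 'quarterly_close']
```

The point of the sketch is the shape, not the code: when metadata, glossary, metrics, and application logic share one container, lineage and context queries become trivial, which is the property a digital twin of the enterprise requires.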

Security as a First-Class Layer

Security also emerges as a key differentiator in Google’s evolving strategy. The integration of Mandiant and Wiz signals a move toward embedding security directly into the platform rather than treating it as an afterthought. This is essential in an AI-driven environment where data exposure is amplified and agentic systems introduce new and less predictable attack surfaces. Google appears to understand that trust, governance, and control must scale alongside intelligence.

Execution Risk, Ecosystem, and Messaging

Even with these strengths, the mandates we outlined earlier remain fundamental. Nvidia’s dominance in AI compute is not just about hardware performance – it is about ecosystem gravity. Enterprises have standardized on Nvidia’s software stack, and Google must continue to interoperate seamlessly with that environment. Similarly, ecosystem expansion is not optional. Enterprise buyers demand choice, and Google must continue to support heterogeneous architectures and open standards if it is to maximize adoption. This was a strong message at Google Next 2026, and Google’s commitment to open choice is clear in our view.

Another area where we believe Google still has work to do is in communicating its vision. The company consistently demonstrates technical depth and architectural clarity. However, it often approaches the market from a technology-first perspective, which can make it harder to connect with business executives focused on outcomes. The companies that ultimately win in enterprise AI will be those that can clearly articulate not just how their systems work, but how they drive measurable business value.

Momentum and Market Position

Despite these challenges, we remain highly optimistic about Google Cloud’s trajectory. As we noted in our Google Next preview, the company is gaining momentum, supported by strong technology, AI leadership, and long-term investment discipline. While it remains in third place among public cloud players, its cloud business is massive – roughly $60B+ all in, with a $40B+ IaaS/PaaS unit growing faster than that of any other hyperscaler. Google’s differentiated capabilities – particularly in AI and data – are increasingly resonating with customers, especially technically savvy firms. AWS won this cohort with its cloud; Google is winning it with AI.

Bottom Line

Google is making a credible and strategic bet on full-stack AI. Its combination of silicon, infrastructure, models, data platforms, and security gives it a strong foundation to compete in the next phase of enterprise computing. If it can execute against the mandates of ecosystem expansion and interoperability, and improve its ability to communicate business value, it is well positioned to play a leading role in shaping the future of enterprise AI.

In our view, the most compelling aspect of Google’s strategy is its alignment with a broader architectural shift. The industry is moving from systems of record to systems of analytics and now to systems of agency. Google may frame the middle layer differently, but the underlying capabilities remain critical. If successful, Google will not just participate in the next era of AI – it will help define the control plane that underpins it.

Action Item

Google’s leading technology and true full-stack integration make it one of the most compelling platforms for practitioners pursuing AI. Business technology executives should put Google on their short lists, if only to tap what we see as a leading-edge platform for enterprise AI. Its open posture allows organizations to maintain relationships with both cloud and on-prem providers while accessing Google’s best-in-class capabilities.
