Humor, tech stars, reality checks and candid conversations highlight this year’s AI Summit
The Cisco AI Summit 2026 was a gift to the industry. There was no registration page, no product announcements, only very subtle Cisco marketing and some genuinely excellent, unscripted conversations. All open. All free. A huge shoutout to Jeetu Patel, Cisco’s President & Chief Product Officer, CEO Chuck Robbins and the Cisco team behind them. Jeetu in particular did an outstanding job moderating the event and grinding through the day. He and Robbins elevated the entire summit with their preparation, sharp insights and ability to draw out candid perspectives from an all-star lineup.
The event brought together leaders shaping the AI supply chain end to end – Jensen Huang, Sam Altman, Dr. Fei-Fei Li, Lip-Bu Tan, Marc Andreessen, Aaron Levie, Matt Garman and more. The most useful parts of the day were the places where the conversation focused on enterprise constraints around infrastructure, security, governance, cost, software pricing and the operating model changes required to scale agentic systems.
Cisco deserves major credit for compiling such a substantive agenda – one that Jeetu joked would be hard to top. He’s right. Importantly, the summit highlighted how early enterprise AI still is and how many hard problems remain unsolved, while offering practical advice on how to move forward. As we said in our 2026 predictions post, we believe 2026 will be the year of AI ROI.
What follows is a summary of our curated thoughts on the key takeaways from the day.
A new software model – Agentic workflows, new economics and risks
Jeetu’s discussion with Sam Altman pushed the conversation toward agents that actually do work inside real systems. The most interesting question Jeetu asked Sam was: “What are the non-obvious constraints” (to AI). Sam answered with all the obvious constraints, including security, but it led to a discussion of how software architectures are changing.
Altman laid out the promise of agents that book, execute, automate, and coordinate across tools. The catch, of course, is that enterprise AI is action-oriented, and actions bring risk. Once agents can take steps in finance, HR, supply chain, and customer workflows, governance becomes a non-negotiable systems requirement.
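To make that concrete, here is a minimal sketch of what governance as a systems requirement can look like: a policy check that gates every agent action before it executes. Everything here – the names, the policy shape, the limits – is our own hypothetical illustration, not anything Altman described.

```python
# Minimal sketch of a policy-gated agent action. Every name here is a
# hypothetical illustration, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    system: str       # e.g., "finance", "hr", "supply_chain"
    operation: str    # e.g., "issue_refund", "update_record"
    amount: float = 0.0

# Example policy: which systems an agent may touch, and its spend limit.
POLICY = {
    "expense-agent": {"systems": {"finance"}, "max_amount": 500.0},
}

def authorize(action: AgentAction) -> bool:
    """Gate every agent action through policy before execution."""
    policy = POLICY.get(action.agent_id)
    if policy is None:
        return False                           # unknown agents are denied
    if action.system not in policy["systems"]:
        return False                           # out-of-scope system
    return action.amount <= policy["max_amount"]

print(authorize(AgentAction("expense-agent", "finance", "issue_refund", 250.0)))  # True
print(authorize(AgentAction("expense-agent", "hr", "update_record")))             # False
```

The point of the pattern is that the deny path is the default: an agent the policy doesn’t know about can’t act at all.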
But it leads to a new software architecture that generally aligns with a framework theCUBE Research team has thought about deeply. Our analyst and colleague David Floyer depicts this in the diagram below.
[Diagram: David Floyer’s four-layer intelligence-centric topology – Frontier Model, Cognitive Surface, Transactional Substrate and Edge]
The premise is that enterprise IT is shifting from an application-centric model to an intelligence-centric topology, where intelligence separates from software and consolidates into a small number of rapidly improving Frontier Models. This creates a four-layer operating framework as shown above – Frontier Model, Cognitive Surface, Transactional Substrate, and Edge – that reshapes cost, latency, governance, and infrastructure design. Enterprises that adopt this topology can lower total cost and speed decision cycles, while those that force AI into legacy architectures risk higher costs, brittle systems, and weaker returns.
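As a rough illustration of this topology, the sketch below encodes the four layers and a toy placement heuristic. The layer names come from Floyer’s framework; the routing rules, thresholds and function names are our own simplifying assumptions, not part of the model.

```python
# Illustrative encoding of the four-layer framework. Layer names are from
# Floyer's model; the placement heuristic is our own simplification.
from enum import Enum

class Layer(Enum):
    FRONTIER_MODEL = "frontier model"                     # centralized intelligence
    COGNITIVE_SURFACE = "cognitive surface"               # agents and orchestration
    TRANSACTIONAL_SUBSTRATE = "transactional substrate"   # systems of record
    EDGE = "edge"                                         # low-latency local inference

def place_workload(needs_deep_reasoning: bool,
                   latency_budget_ms: int,
                   touches_system_of_record: bool) -> Layer:
    """Route work to the layer that satisfies its latency and
    governance requirements at the lowest cost."""
    if touches_system_of_record:
        return Layer.TRANSACTIONAL_SUBSTRATE   # transactions stay governed
    if latency_budget_ms < 50:
        return Layer.EDGE                      # too tight for a round trip
    if needs_deep_reasoning:
        return Layer.FRONTIER_MODEL            # rent the best model via API
    return Layer.COGNITIVE_SURFACE             # default agentic middle layer

print(place_workload(True, 2000, False))   # Layer.FRONTIER_MODEL
```

The design point is that intelligence is rented from the frontier layer via APIs, while governed transactions and latency-critical work stay close to the data.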
In our view, the discussion between Patel and Altman hinted that the likes of OpenAI, Anthropic and Google have a massive opportunity to execute on this architecture and build ecosystems around it. It’s one of the reasons we’re more sanguine on OpenAI than many of the naysayers.
Today, technology spend at the macro level averages around 4% of revenue. We see this increasing to 10% or more over the next decade as investments shift from legacy IT management to token generation via APIs. According to Altman, it’s critical that the US leads in frontier model development and deployment. We agree, and Floyer’s model above underscores the role of frontier models in the new stack.
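The arithmetic behind that shift is simple enough to sketch. All figures below are assumptions for a hypothetical enterprise:

```python
# Back-of-envelope on the spend shift, with illustrative numbers only.
revenue = 10_000_000_000            # a hypothetical $10B enterprise
spend_today = 0.04 * revenue        # ~4% of revenue on technology today
spend_next_decade = 0.10 * revenue  # ~10% of revenue in our scenario

print(f"Today:       ${spend_today / 1e6:,.0f}M")        # $400M
print(f"In a decade: ${spend_next_decade / 1e6:,.0f}M")  # $1,000M
# In this scenario the incremental ~$600M flows largely to token
# generation via APIs rather than legacy IT management.
```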
AI adoption: Not as fast as you might think
Another notable aside with Altman was his surprise at how slowly AI adoption has occurred, despite the narrative that things are moving faster than ever before. One way we think about this is that we are entering the late early innings of AI, where the first inning was around 2017 with academic papers on diffusion and transformers. The second inning was the “wow factor” of the ChatGPT moment. Now we’re entering the phase of mass adoption in both consumer and enterprise, which we believe will be the real productivity driver.
Our friend and colleague David Moschella has pointed out in previous Breaking Analysis episodes that despite all the hype, technology adoption is perhaps not compressing as much as the headlines would suggest. The chart below underscores this point:
[Chart: technology adoption timelines, per David Moschella, showing adoption cycles are not compressing as much as headlines suggest]
We would estimate that 2025 was the year US household adoption of AI crossed 50% – so 8 years on, similar to mobile phones.
Intel: The great American hope of advanced semiconductor manufacturing
Balancing design and foundry while managing a complex supply chain
Lip-Bu Tan brought a hardware-and-manufacturing perspective to the Summit. His comments on Intel’s investment in glass substrates and IP licensing to Samsung underscored not only Intel’s contribution to the industry but also the constraints in packaging, memory, and materials – and the challenges of managing an extremely complex supply chain.
The real test for Intel in our view is the viability of an integrated business model – i.e. design plus foundry – while serving external customers that are also competitors. Jeetu probed this with LBT, who said 14A would be in volume by ’29, which is sooner than we’ve projected. But there was no news on external customers for the node. 14A combines backside power delivery, gate-all-around transistors and high-NA EUV all in one package. 18A, which is ramping, combines the first two of those innovations, and the hope is that 14A will allow Intel to make major strides relative to TSMC.
As such, we’ve always felt that 14A is the node that really matters. But Intel needs a wafer volume customer to prove it can truly build an external foundry business. This is a huge challenge – i.e. leading in both design and external foundry.
LBT added that “the biggest constraint for customers is memory…” and said he sees no relief until 2028. This aligns with another of our predictions: that memory constraints will persist well past 2026.
Dr. Fei-Fei Li: Spatial intelligence and trust architecture become gating factors
Dr. Fei-Fei Li’s “3D & AI” session took the summit in a different direction toward what makes AI usable in the real world. Her focus is spatial intelligence and what it implies about the next frontier – i.e. systems that can understand and operate in three dimensions, reason about environments, and interact with the physical world through what she called richer “world models.”
In her view, progress won’t be gated by model size. Rather, it will be gated by the ability to build systems that map to real-world physics and interaction, and by the ability to deploy those systems in ways that preserve human agency.
Li also addressed the trust gap directly. Her argument centered on governance, ethical design, privacy, and permissioned data as table stakes. The message for enterprises is that trust in AI must be engineered. Compute and models are critical, but a trust architecture is compulsory, and it must integrate policy controls and safeguards that scale.
In our view, the issues she raised will determine the pace of adoption. It’s not just the model’s capabilities but whether enterprises can deploy them with confidence.
Andreessen’s PoV: AI drives a reallocation of capital, power and hopefully, productivity
Marc Andreessen approached the “Venture & AI” discussion with his staccato speaking style, stripping out the marketing and going right after value realization. The takeaway from his discussion with Jeetu Patel was that AI is a capital allocation chess match and highly competitive wave that pulls the industry toward infrastructure-scale commitments. He emphasized the scale of the buildout required, parroting the multi-trillion-dollar estimates driving the industry narrative today; and tied that investment directly to geopolitical stakes – particularly the U.S. versus China dynamic. In his view, policy is now one of the most important competitive variables, and overly restrictive regulation risks shifting advantage to other regions. While this is a self-serving position for the tech industry to take, it’s also more true than not in our view.
Andreessen also addressed productivity growth. He said that since he was born in the early 1970s, productivity growth has been tepid compared to previous eras of technological innovation. He’s right. We haven’t had impressive productivity growth despite Moore’s Law, the cloud, mobile, social media and so-called Big Data. Two periods stand out in the graphic below – one driven by a manufacturing boom, the other by the PC revolution. Andreessen’s observation highlights that, outside those periods, performance has been unremarkable.
[Chart: U.S. productivity growth by era, with peaks during the manufacturing boom and the PC revolution]
The promise of AI is to drive double-digit productivity growth, which would be a breakthrough in economic terms.
Andreessen also pushed on where enduring value sits in the application layer. His argument was that the best AI application companies increasingly look like deep tech – owning differentiated technology and, in many cases, building models rather than relying exclusively on third-party APIs. Patel and Andreessen discussed at length the open-versus-closed model debate, along with pricing models that are still in flux. He pointed to the tension between usage-based pricing and value-based pricing, and the strategic question of who has the leverage to price to outcomes as AI becomes embedded in workflows. The broader implication is that incumbents and startups will not win the same way in this cycle. Winners will be the companies that understand infrastructure economics, build durable differentiation, and treat AI as a power shift rather than a product line extension.
We found Andreessen’s comments regarding Linux interesting but not necessarily applicable. He noted that Linux killed all the proprietary Unix versions from the likes of IBM, Sun, Digital, HP, etc. But per the conversation and Floyer’s mental model above, we see frontier model companies adding value well above core infrastructure. While this is highly speculative, it makes sense from an economic perspective: if LLMs were merely infrastructure, as Unix was, they would be disintermediated by open source models – which is exactly why frontier model companies are building value above that layer.
AWS CEO Matt Garman: AI reality check from the creator of the modern cloud
Can AWS keep up with the pace of Nvidia?
Matt Garman’s session with Jeetu Patel was grounded in the operator’s perspective. AWS sits on the largest installed base of cloud infrastructure and has a front row seat to AI adoption. The tone was typical Garman: pragmatic, enthusiastic about the pace of AI adoption, but with sober concerns around cost, energy, and what is actually deployable at scale.
A few points stood out.
First, Garman pushed back on the “space-based data center” narrative. He acknowledged the theoretical appeal – solar power, space cooling – but argued the economics and logistics are nowhere close. Servers are heavy, payload costs remain prohibitive, and launch capacity is not even remotely sufficient for anything resembling planetary-scale compute. His line about not having enough rockets to launch “a million satellites yet” captured the attention of the audience. The AI buildout is a problem that will be solved on earth for the foreseeable future. AWS is scaling where it knows it can scale – on the ground – and the constraints are power, water/cooling, land, and supply chain, not rocket scientists.
Second, he offered a perspective on internal productivity. Garman described teams inside AWS being directed to “write no lines of code,” relying entirely on prompts. He claimed results in the 10X range and, in some cases, up to 100X. Those numbers are impressive and similar to other comments at the summit, where AI is moving from pilots to production. The cynic in us would say that these gains will not apply evenly across companies, or even across domains outside of coding. Moreover, not every team can adopt a zero-code mandate. But the message is important. In our view, developer productivity is where AI’s ROI shows up fastest. AWS and other leaders are leaning into that, but the question remains how this translates to other disciplines.
Once again Patel went to core issues that we care about – AWS’ internal silicon efforts such as Trainium as part of the path to improving economics for customers. We remain skeptical that AWS can keep up. Garman talked about an 18-24 month innovation cycle. Patel pushed on this point, arguing that time to tape out is compressing, but Garman remained conservative on the cycle. The issue to us is the “tokens per watt per dollar” narrative playing out across the industry. In our view, Nvidia’s annual cadence is uncatchable. Just last month it claimed 5X performance, 10X throughput and 1/10th the cost for Vera Rubin relative to Blackwell. By our modeling, this will create a 15X increase in demand. The supply/demand imbalance is, in our opinion, the only hope Nvidia silicon competitors have of capturing market share. In other words, what Nvidia can’t deliver is the market for competitors.
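A back-of-envelope sketch shows why we think the imbalance persists. The throughput and cost figures are the public claims cited above; the demand multiplier is our modeling, and the rest is illustrative:

```python
# Back-of-envelope on the supply/demand imbalance. The 10X throughput and
# 1/10th cost-per-token figures are the public claims cited above; the
# demand multiplier is our modeling, and the rest is illustration.
throughput_gain = 10        # Vera Rubin vs. Blackwell: "10X throughput"
cost_per_token_drop = 10    # "1/10th the cost" per token
demand_multiplier = 15      # demand response to cheaper tokens, our model

# Jevons-style effect: a 10X drop in cost per token induces ~15X more
# token demand in our scenario, so demand outruns per-system supply.
supply_gap = demand_multiplier / throughput_gain
print(f"Effective cost per token falls {cost_per_token_drop}X")
print(f"Demand outruns per-system supply gains by {supply_gap:.1f}X")  # 1.5X
# That unserved demand is the market opening for Nvidia's competitors.
```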
We had the same questions for Kevin Scott of Microsoft. He discussed Maia and Microsoft’s internal semiconductor initiative with Patel. We think all the hyperscalers will have difficulty competing with Nvidia on cost.
This pushed the opportunity to edge-based inference use cases – but even there, we see Nvidia being highly competitive, especially given the deal it just did with Groq, which addresses a potential gap in Nvidia’s portfolio.
We’ve had a lot of feedback from our community on this topic, suggesting that the competition will be able to compete. While we believe that’s true given the size of the market, we remain steadfast in our advice to operators: Nvidia will be the low-cost supplier, and you’d better secure allocation or you’ll be less competitive.
Jensen Huang: Globetrotting, sleep deprivation and a few glasses of wine expose the playbook for AI success
Chuck Robbins moderated a late-night sitdown with Jensen that was humorous, insightful and a bit punchy.
Jensen Huang’s best guidance at the summit was “let a thousand AI flowers bloom, then curate the garden.” We believe that approach is right for enterprises that are still searching for the handful of use cases that will matter. Broad experimentation will surface winners faster than handwringing. The trap Jensen articulated is stopping there. The implication is that discovery is cheap, but you have to take the next step. Focus deeply on what matters to your company. Don’t fret about ROI – it’s there – go get it.
Huang reinforced the “AI factory” construct as the unit of scale – purpose-built systems that manufacture intelligence. That naturally pulls networking, power, and cooling into the conversation. As always, Jensen is gracious to his host – whether Michael Dell, Antonio Neri, Frank Slootman or Chuck Robbins. He frames the conversation in a way that creates excitement and tailwinds for the partner du jour. It’s quite brilliant and endearing.
Jensen laid out the idea that the constraint is not just compute availability; it is the infrastructure required to feed it and run it reliably: network throughput, east-west traffic patterns that explode as agents call tools and coordinate work, and the operational tooling that keeps these environments stable under heavy load. This is where Cisco has an advantage. If agents increase workflow traffic and tool calls, the network becomes a multiplier, not just background plumbing, and security has to be designed into that fabric rather than bolted on later.
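A toy fan-out calculation illustrates why. Every figure below is an assumption for illustration only:

```python
# Toy east-west traffic fan-out for agentic workloads. Every figure is
# an illustrative assumption.
user_requests_per_sec = 100
steps_per_task = 8             # plan, retrieve, call tools, review, ...
tool_calls_per_step = 3        # APIs, systems of record, other agents
messages_per_call = 2          # request + response

east_west_msgs = (user_requests_per_sec * steps_per_task
                  * tool_calls_per_step * messages_per_call)
print(f"{east_west_msgs:,} east-west messages/sec")   # 4,800
# A ~48X multiplier on user-facing traffic is why the network becomes
# a multiplier rather than background plumbing.
```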
The “AI will kill software” narrative is lazy
Huang pushed back on the market sentiment that AI will “kill software,” calling it illogical. We agree. He asked, in effect: if you have an AGI robot and it needs a screwdriver, would you use a screwdriver or reinvent the tool? His point is that tool use does not disappear when intelligence improves; it increases. Agents thrive by calling tools – APIs, workflows, systems of record – not by rebuilding everything from scratch. The implication is not that software goes away. Rather, software becomes more automated and more outcome-oriented, which forces a monetization rethink.
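The screwdriver point maps directly to the tool-calling pattern at the heart of agentic software. A minimal sketch, with a hypothetical registry rather than any specific framework’s API:

```python
# Minimal sketch of the "agents call tools" pattern. The registry and
# tool names are hypothetical, not a specific product's API.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register an existing function as a callable tool (the screwdriver)."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("crm_lookup")
def crm_lookup(customer_id: str) -> str:
    return f"record for {customer_id}"   # stands in for a system of record

def run_agent_step(tool_name: str, **kwargs) -> str:
    """The agent reuses the existing tool rather than rebuilding it."""
    return TOOLS[tool_name](**kwargs)

print(run_agent_step("crm_lookup", customer_id="C-42"))  # record for C-42
```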
We’ve said that seat-based pricing is misaligned as agents act on behalf of users. Consumption and outcome-based pricing models will evolve, and vendors will fight to preserve revenue while customers push for value alignment. This, by the way, is a source of constant debate on theCUBE Research team. Floyer’s model above suggests that major disruption is coming in the software industry. Ultimately, we think both camps can be right – new software vendors will emerge, and existing ones that shift to an AI-native mindset will also thrive. But not all incumbents will make it, and M&A will accelerate.
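A simple comparison with assumed numbers shows the misalignment:

```python
# Illustrative pricing comparison; all numbers are assumed.
seats, price_per_seat = 100, 50.0             # humans licensed per month
agent_actions, price_per_action = 250_000, 0.002
outcomes, price_per_outcome = 1_000, 1.25     # e.g., resolved tickets

seat_revenue = seats * price_per_seat              # $5,000 -- fixed
usage_revenue = agent_actions * price_per_action   # $500 -- scales with work
outcome_revenue = outcomes * price_per_outcome     # $1,250 -- scales with value

print(seat_revenue, usage_revenue, outcome_revenue)
# As agents do more of the work, seat counts stay flat while actions and
# outcomes grow -- the misalignment driving the monetization rethink.
```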
What would a sitdown with Jensen be without talk of physical AI?
Finally, Huang leaned hard into physical AI – robotics, autonomy, industrial automation – as the next expansion wave. We agree this is where the addressable market expands well beyond the data center, but it also raises the stakes. Physical systems require tighter reliability, safety, and cost discipline than enterprise chat and search. The same infrastructure story perhaps applies, but the tolerance for error is much lower and the consequences of failure higher.
The bottom line is that Huang’s message lines up with what operators like Matt Garman emphasized from the hyperscaler perspective. Specifically, that AI scales on earth, under real power and cost constraints, and the winners are the ones who can create solutions from the piece parts of the stack – compute, networking, security, and operations – while turning experimentation into repeatable production outcomes.
Cisco’s edge is real – but execution is the test
Chuck Robbins and Cisco’s leadership are in a credible position. Cisco sits at the intersection of networking and security, which are both becoming more important as agents proliferate. If the future is connected intelligence, Cisco has the right model.
The risk is cohesion and speed. The enterprise wants integrated outcomes – not a portfolio of parts. Jeetu Patel, a builder, was elevated specifically to get Cisco back to its roots and address this opportunity. Cisco is showing signs that it can deliver a unified control plane for agentic systems across networking, security, observability, and operations, combined in a way that reduces friction for customers. The fact that it could put together such an impressive content program is testament to its rejuvenation as an industry leader. Three years ago, Cisco’s AI strategy was bespoke, disjointed and frankly difficult to articulate. Today it’s coming together as a value and solutions-based story with unique IP, partnerships and a massive channel.
Key takeaways from Cisco AI Summit 2026
- Agentic workflows are forcing a new software model. Sam Altman and Jeetu Patel kept coming back to the non-obvious constraints – governance, security, and operating model friction. These present themselves the moment agents move from providing answers to taking actions inside finance, HR, supply chain, and customer systems.
- Enterprise adoption is slower than the headlines suggest. Altman’s surprise on adoption speed matched the broader reality that we are still in the late early innings, with consumer penetration rising faster than enterprise adoption. But we see that changing, with 2026 as a turning point.
- AI is pulling enterprise IT toward a new topology. The summit discussions align with the four-layer model developed by David Floyer (Frontier Model, Cognitive Surface, Transactional Substrate, Edge), and reinforce why OpenAI, Anthropic, and Google have an opportunity to build durable ecosystems if they can abstract complexity, operationalize governance and distribution via token generation through APIs.
- Hardware and supply chain constraints are real and persistent. Lip-Bu Tan emphasized packaging and materials, including Intel’s focus on glass substrates and licensing, and argued memory remains the biggest customer constraint with limited relief until 2028. The big question for Intel is how it will get the foundry volume necessary to close the gap with TSMC while simultaneously funding its design business.
- Trust architecture is becoming a gating factor for deployment. Dr. Fei-Fei Li pushed the idea that progress won’t be gated by model size alone – spatial intelligence and systems that protect privacy, permissioning, and human agency will determine the pace of real-world adoption.
- AI is a capital allocation chess match with geopolitical stakes. Marc Andreessen argued this wave rewards infrastructure-scale commitments, and that policy decisions materially influence competitive positions in the U.S. versus China race.
- AWS delivered the reality check on what is deployable at scale. Matt Garman dismissed space-based data center narratives as uneconomic in the near term and reinforced that the AI buildout is a terrestrial problem constrained by power, cooling, land, and supply chain.
- Developer productivity is the first place AI ROI shows up. Garman cited internal teams operating with “no lines of code” mandates and reported 10X to 100X speedups, raising the question of how quickly similar gains move beyond software engineering.
- Custom silicon is improving economics, but Nvidia’s cadence sets the bar. Garman highlighted Trainium and an 18–24 month innovation cycle; Jeetu pressed on whether that cadence compresses. The broader issue remains tokens per watt per dollar, and Nvidia’s annual innovation cycle is reshaping the cost curve. We remain skeptical that hyperscalers can successfully compete with Nvidia the way they complemented and essentially replaced x86.
- Jensen’s playbook: focus on high-impact projects, then industrialize. “Let a thousand AI flowers bloom, then curate the garden” is the model. The scaling approach is based on AI factories, with networking, power, cooling, and operational tooling moving into the mainstream. We have entered a new era of computing, created by Nvidia. Jump on the curve. You don’t have to be first but don’t be last.
- The “AI kills software” narrative misses the point. Jensen argued tool use increases as intelligence improves, and that pushes software toward automation and outcome-based models. We think seat-based pricing gets stressed as agents act on behalf of users.
- Physical AI expands the market and raises reliability requirements. Jensen’s focus on robotics and autonomy broadens inference demand beyond the data center, but also increases risks around safety, cost, and error rates.
- Cisco’s positioning is credible, and execution is the test. The summit reinforced Cisco’s advantage at the intersection of networking and security, but the market will judge whether Cisco can deliver a cogent control plane for agentic systems rather than a portfolio of parts.
Bottom line
We believe the Cisco AI Summit 2026 captured the true and current state of AI. The models are improving, but enterprise adoption is constrained by infrastructure, governance, and the cost of operating agentic systems at scale. Jensen Huang’s “thousand flowers” mindset is right for discovery, but focus on value to your specific enterprise. Sam Altman’s push toward action is where value ultimately lives. Lip-Bu Tan’s supply chain perspective is a reminder that hardware economics will shape the pace of progress. Cisco has a credible role because networks and security sit in the critical path – but the company will be judged on whether it can turn that position into a unified, operable stack that enterprises can deploy without breaking accountability.

