We believe IBM is attempting one of the more underappreciated pivots in enterprise AI. Specifically, we’re talking about a shift from traditional labor-arbitrage services to technology arbitrage, where value is created by packaging agents, durable platforms, and domain IP into repeatable outcomes. The story isn’t that “IBM has AI.” Everyone has AI. IBM’s promise is to deliver outcomes at the workflow level across regulated, complex environments, particularly where time-to-value, policy, and sovereignty constraints dominate buying decisions. Our assessment, based on recent conversations with the company and several of its customers, is that this messaging aligns with enterprise priorities in demanding industries. IBM is focusing on mission-critical workflows, blending deterministic systems of record with probabilistic systems of intelligence.
In our view, gaps still exist. Specifically, we’d like to see a more powerful platform story across data, systems and software. IBM created one of the most durable platforms in tech history with the mainframe and looks to repeat that strategy with Quantum, which remains at least 3-5 years out. But the core of the company’s offerings lacks a clear unifying middleware layer reminiscent of WebSphere, but for AI.
In this special Breaking Analysis we review our takeaways from a recent day-long meeting with top IBM executives, including a substantive AMA with CEO Arvind Krishna.
From labor arbitrage to technology arbitrage
IBM’s operating mix still includes a healthy dose of consulting, but software growth is outpacing the company average, with a double-digit growth target. As shown below, IBM’s financial framework calls for software to contribute approximately 50% of the revenue mix (bright royal blue below).

This will continue to favor steadily improving margins. In addition, the company’s strategy is to take labor out of consulting delivery, reinvest in R&D, productization, and agentic capabilities, and return free cash flow to investors. Internally, IBM is cutting costs. It has mapped 490 workflows, instrumented the top 70 with four digital workers each, and claims $3.5B in productivity gains as “Client Zero.” In our view, leading tech companies must provide compelling internal proof points to have any credibility selling AI to customers. Firms must show AI working at scale in their own houses across forecasting, close, procurement, supply chain, go-to-market, quote-to-cash, call center, etc. The larger point is enterprises don’t buy GPUs, models, or frameworks; they buy business value measured in compressed cycle times, lower cost, and reduced risk. IBM is now selling those value points and is on the path to deliver steadily more visible outcomes – both internally and externally.
The Hybrid AI Story: Agentic AI that meets the enterprise where it lives
The center of marketing gravity in the industry today is agentic AI. IBM claims it has codified patterns in three high-yield areas – customer care, back-office productivity, and IT/SDLC – and wrapped these disciplines with lifecycle governance (“AgentOps”) so leaders can evaluate, approve, observe, and optimize agents as they move into production. “Project Bob” extends this into the SDLC via a developer co-pilot built on proprietary Java/COBOL assets that, by IBM’s accounting, is building parts of itself and boosting developer throughput by 20-30% or more. Importantly, IBM posits that not everything should be agentic. In other words, its premise is that high-stakes workflows require a blend of deterministic control and probabilistic reasoning.
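To make the lifecycle-governance idea concrete, below is a minimal, hypothetical sketch of the evaluate, approve, observe, and optimize gates described above. The stage names mirror IBM’s framing, but the data structures, gate thresholds, and function names are our own illustration, not IBM’s AgentOps API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    """Lifecycle stages paraphrased from the evaluate -> approve -> observe -> optimize framing."""
    EVALUATE = auto()   # benchmark the agent against golden tasks
    APPROVE = auto()    # human/policy sign-off before production
    OBSERVE = auto()    # trace and score live behavior
    OPTIMIZE = auto()   # feed observations back into prompts, tools, or routing


@dataclass
class AgentRecord:
    name: str
    owner: str
    stage: Stage = Stage.EVALUATE
    eval_score: float | None = None
    approved: bool = False
    incidents: list[str] = field(default_factory=list)

    def promote(self, min_score: float = 0.9) -> None:
        """Advance the agent one stage, enforcing simple governance gates."""
        if self.stage is Stage.EVALUATE:
            if self.eval_score is None or self.eval_score < min_score:
                raise ValueError(f"{self.name}: evaluation gate not met")
            self.stage = Stage.APPROVE
        elif self.stage is Stage.APPROVE:
            if not self.approved:
                raise ValueError(f"{self.name}: approval gate not met")
            self.stage = Stage.OBSERVE
        elif self.stage is Stage.OBSERVE:
            self.stage = Stage.OPTIMIZE


# Example: a hypothetical customer-care agent moving through the gates.
agent = AgentRecord(name="billing-dispute-triage", owner="customer-care")
agent.eval_score = 0.94
agent.promote()          # EVALUATE -> APPROVE
agent.approved = True
agent.promote()          # APPROVE -> OBSERVE
```

The point of the sketch is simply that promotion to production is gated by explicit, auditable checks rather than left to the agent builder’s discretion.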
Underpinning that deterministic-plus-probabilistic thesis is IBM’s hybrid focus, anchored by Red Hat and a “Switzerland” strategy in its data stack. Specifically, Red Hat forms the basis of IBM’s platform approach at the infrastructure software level and provides the substrate for IBM to operationalize its many IT operations tools.
A workload-first hybrid stack
We’ve long argued that workloads determine stack requirements, and in the agentic era this will extend to more tightly incorporate data and process logic. Once again, Red Hat is an enabler with a workload-first model: run anywhere, govern centrally, and abstract the infrastructure. Red Hat OpenShift AI plus vLLM, running open-weight models on any silicon (NVIDIA, AMD, Groq, etc.), is designed to keep inference close to the data, the policy, and the digital enterprise. IBM shared early results in operations (e.g., Deutsche Telekom patching times down sharply), development velocity (ANZ), and contact centers (with Groq acceleration). These customer examples reinforce that inference in the enterprise is crucial to mainstream economics – not just training at scale. This is where enterprises will realize value in our view. IBM’s forthcoming cluster-scale inference (LLMB) is a logical next step that can keep the “run anywhere” promise in place while upping capabilities without forcing customers onto a single public cloud.
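For readers unfamiliar with the vLLM piece of that stack, the snippet below is a minimal sketch of serving an open-weight model with vLLM’s offline Python API. The Granite checkpoint is just an example model id; in an OpenShift AI deployment the same engine would typically be exposed as an OpenAI-compatible endpoint rather than called in-process, and nothing here reflects IBM-specific configuration.

```python
# Minimal vLLM sketch: load an open-weight model and generate text locally.
# pip install vllm  (requires a supported GPU runtime)
from vllm import LLM, SamplingParams

# Example open-weight checkpoint; swap in whatever model your hardware supports.
llm = LLM(model="ibm-granite/granite-3.0-8b-instruct")

params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize the last three patching incidents for the network ops review."]
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.outputs[0].text)
```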
Optionality is the key watchword in IBM’s strategy. Below is a look at how IBM envisions its emerging stack. The key issue to us is the degree to which these piece parts are integrated. Red Hat is the platform glue, but it’s unclear at this point how IBM ingests process logic, harmonizes data silos via a semantic layer within its data services, and serves this data up to agents in a governed fashion. We believe the pieces are there but more clarity is needed with respect to integration strategies.
IBM Consulting – Deep industry expertise combined with improved operating leverage
In our view, IBM Consulting’s pivot from people-led projects to “human + digital labor” is the right strategy and key to the future of its business. The firm’s Consulting Advantage assets – which it claims have been sold into more than 500 deals over the past year – are the mechanism IBM is leveraging to turn expert know-how into reusable software, moving clients from technology pilots to outcomes. Under the leadership of Mohamad Ali, IBM’s own “Client Zero” story adds to its credibility: management decomposed 490 workflows, instrumented ~70 with digital workers, unified ~300TB of finance/ops data, and claims $3.5B in productivity gains alongside a revenue inflection (from –3% to +5%) and 200 bps of margin expansion in 1H.
IBM Consulting strategy
- Our research indicates many enterprises are stuck in PoC purgatory because they stay in labor or tool arbitrage. IBM is pushing to technology arbitrage via packaged agents, data integration, and operating-model change.
- IBM is implementing a new AI delivery platform, tied into Red Hat/IBM technology, that lets teams ship agent classes (not just playbooks). As an example, IBM cites a 15-agent SOC investigation that cut cycle times from ~45 to ~15 minutes and is live at multiple clients, deeply integrated with Palo Alto Networks’ SIEM.
- Pricing is shifting to “licensed skills” and outcome commitments, not hours, which is consistent with asset-based consulting.

IBM Proof points and client outcomes
- 150+ clients running agentic patterns in production;
- Large healthcare client – persistent recruiting uplift via digital labor;
- L’Oréal: domain LLM trained on decades of R&D data aims to halve formulation cycles; the company’s CEO expects competitive outperformance;
- Global chemical company is committing to 50% cost take-out; 90% PO processing automated; bespoke LLM from scratch;
- Finance ops at IBM: FP&A productivity up ~40%; recurring journal entries compressed from 30 days to ~1 hour; pricing automation credited with material revenue uplift.
The data suggests IBM is converting scale (75 countries, 115 centers, deep certifications) into improved operating leverage. We believe the playbook – assetize expertise, wire to operational data, price the “skill,” and measure outcomes – will separate winners from also-rans in the consulting business.
Durable platforms have always mattered: Z, Power, Storage as AI infrastructure
The mainframe remains IBM’s most durable platform and is being modernized with AI MIPS – attaching accelerators and subscription models to strategically aligned use cases like fraud detection and actuarial analysis. IBM claims that its Power platform has rebounded on the strength of Oracle/SAP estates where performance, licensing, and placement are critical. We don’t have validated data on this claim, so for now we’ll take IBM at its word and do more research.
Storage has long been a “cobbler’s child” inside IBM. IBM management has gone with the fashion and de-emphasized storage in its corporate marketing over the past decade, choosing instead to message high-level themes and allow infrastructure to fend for itself in the field. New management is trying to change this and position storage as no longer a bystander. IBM claims GPFS, which powers Fusion, is seeing high double-digit growth because virtualized data paths are the fuel that supercharges agents. Our take is that IBM’s playbook acknowledges customers won’t abandon proven systems. Rather, they want to instrument those systems with agents, observability, and policy to drive outcomes without ripping and replacing years of investment.
IBM’s advantage is its captive installed base. A frequent criticism of IBM storage and systems is that it is overly reliant on attach rates to existing IBM customers. We see this as both a strength and an opportunity: to the extent IBM can grow its overall customer footprint, storage will be dragged along and positively impact gross margin.
Some key metrics from IBM on its infrastructure business include the following:
- IBM sees a 3-4X revenue uplift for every dollar of Z hardware sold;
- 70% of the world’s transaction value runs through IBM Z;
- 220,000 IBM all-flash storage arrays deployed globally;
- 35,000 enterprise customers for distributed infrastructure.
The challenge for IBM in infrastructure is to continue to show growth and drive margin expansion. New leadership under Ric Lewis has shown a commitment to taking costs out of the business while at the same time doubling down on investments in growth areas. Strong linkages between hardware and software have been shown to deliver tangible value to customers, and IBM has an opportunity to improve its posture in this regard.
Data: The “AND” architecture
For twenty years, the industry has cycled through “one place for all your data” narratives – i.e. warehouse, lake, lakehouse – only to rediscover that gravity, cost, and governance rarely allow a single repository to prevail. IBM’s marketing stance has shifted to an “AND” approach – i.e. federate where data sits, virtualize aggressively, and push policy and lineage into the application/agent layer. In our opinion, this is a reasonable starting point, especially for regulated industries and cross-border operations where sovereignty, latency, and contractual data boundaries can’t be ignored.
Data is decentralized, and as such IBM has this right. However, even if you can consolidate data into a single physical or logical construct, the silos remain. Why is that? Because the star schemas (i.e. data cubes) of sales, CRM, HR, supply chain, finance and other data in siloed divisions and SaaS platforms are each different. So essentially, until data is harmonized via a semantic layer, silos remain. IBM gave little insight as to how it plans to rationalize this data dissonance and help customers create a digital representation of their enterprises. We see this as a TBD chapter in the IBM data story. It has many of the pieces in our view, but as we’ve reported extensively, the success of agents depends on the quality of data and the ability of agents to tap into a harmonized data source to take action on behalf of, or with, humans. This is non-trivial, as it requires access to process logic, business metadata, governance frameworks, lineage, authentication, and security at the core, plus a way to represent people, places, assets and processes in a 4D map or similar expressive construct that can speak “human.”
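To illustrate why that semantic layer matters, the sketch below shows the kind of mapping such a layer must maintain: each silo describes the same business concept (a customer) with different field names and keys, and harmonization translates them into one canonical shape an agent can safely query. The schemas and field names are invented for illustration; they are not IBM’s data model.

```python
# Hypothetical illustration of semantic-layer harmonization across silos.
# Each source system names and keys the same "customer" concept differently.
CANONICAL_FIELDS = ["customer_id", "name", "segment", "annual_value"]

# Per-source mappings from silo-specific columns to the canonical model.
SOURCE_MAPPINGS = {
    "crm":     {"customer_id": "acct_id",  "name": "account_name", "segment": "tier",     "annual_value": "arr"},
    "erp":     {"customer_id": "cust_no",  "name": "legal_name",   "segment": "class_cd", "annual_value": "ltm_rev"},
    "support": {"customer_id": "org_uuid", "name": "org_name",     "segment": "plan",     "annual_value": None},
}


def harmonize(source: str, record: dict) -> dict:
    """Translate one silo record into the canonical customer shape."""
    mapping = SOURCE_MAPPINGS[source]
    return {
        field: (record.get(column) if column else None)
        for field, column in mapping.items()
    }


# Example: the same customer as seen by two different silos.
crm_row = {"acct_id": "A-1001", "account_name": "Acme Corp", "tier": "Enterprise", "arr": 1_200_000}
erp_row = {"cust_no": "C-88", "legal_name": "Acme Corporation", "class_cd": "ENT", "ltm_rev": 1_150_000}

print(harmonize("crm", crm_row))
print(harmonize("erp", erp_row))
```

A real semantic layer also has to reconcile keys, units, and governance policy, which is exactly the harmonization work we believe remains a TBD in IBM’s story.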
Below is our view of the new software architecture that is emerging. In future analyses we will map IBM’s portfolio into this view. For this post, suffice it to say that IBM touches many parts of this stack, particularly the systems of record, the governance pieces, the data platform and the underlying infrastructure supporting it. But the path to the systems of intelligence (SoI) layer lacks clarity for us and is a major opportunity for IBM in our opinion.

A New Posture on Partnerships
At IBM Think this past May, in an analyst AMA, CEO Arvind Krishna explicitly conveyed a new philosophy on partnerships. Paraphrasing: under previous regimes, IBM would often look at the market as a zero-sum game where another company’s success was a loss for IBM. The philosophy Krishna is bringing to IBM is to partner with that company and help it succeed by doing what IBM does best – whether consulting, data or software. Several examples were shared at Think, including the likes of Salesforce, ServiceNow, and Palo Alto Networks, among others.
At this year’s analyst event, IBM’s deeper integration with Anthropic was on display in agent lifecycle and governance, which is the Achilles’ heel for many pilot-happy programs. The Groq partnership addresses another opportunity – namely ultra-low-latency, cost-efficient inference at scale for real-time use cases like contact centers. We think these are complementary: Anthropic helps harden the how of safe, observable agent systems, while Groq helps optimize the where and how fast they run.
On balance, however, we’d like to see more clarity in IBM’s ecosystem strategy, particularly how it will partner with ISVs and leverage its data platform to create flywheel effects. In addition, we often see IBM’s consulting competitors prominently on display at various events (e.g. cloud shows, security events, industry collaborations) where in our view IBM could be more prominent.
Quantum: Milestones with a CUDA-like library path
IBM’s quantum story is low on hype but filled with promise in our view. IBM’s new head of research, Jay Gambetta, laid out accomplishments and a roadmap, stressing: utility achieved (2023), advantage demonstrated (target 2026), fault-tolerance viability (target 2029). The key in our view is verticalization. In other words, a pipeline of domain problems (finance, materials science, logistics) and a strategy to evolve Qiskit from winning academia to shipping vertical libraries that look and feel more like CUDA stacks do in AI. Essentially, our takeaway is IBM is following Jensen Huang’s CUDA playbook with an open source twist. We believe the telltale sign of success will be when we move beyond qubit counts and focus on the number of uniquely solvable domain problems that cross over from science project to repeatable advantage, with time-to-production and commercial adoption.
Regardless, we are encouraged by IBM’s Quantum investments, strategy and roadmap transparency (see below), which in our view can lead to a new era of vertical integration for the company.

What IBM still must address
IBM seems committed to compressing time-to-value for its clients from months to weeks to days. The company must focus on ease of use, templating, packaging, and pricing simplicity. These are table stakes in our opinion. The message, while improving, still wrestles with breadth, incongruity across the portfolio and platform coherence. In our view, the proof will be outcome specificity – e.g. “X% improvement in Y days” – by vertical and workflow, backed by transparent commercial examples (including outcome-tied pricing) and a partner motion that recruits ISVs, not just resellers, into the Red Hat/Watsonx flywheel.
Data is the most obvious of opportunities as it is the linchpin of good AI. That said, we struggle to answer the question: “What is Watson’s superpower?” Moreover, IBM lost its product leadership in most domains when it decided to become a services-led company in the 1990s. Other than mainframe, objective observers struggle to cite a meaningful domain where IBM has the “best product.” Other firms lead where IBM could: Database (Oracle), data platforms (Snowflake, Databricks), apps (Salesforce, ServiceNow, Workday, Oracle, SAP, etc.), servers (Dell), storage (Dell, Pure, NetApp). Red Hat is the exception that in some ways proves the rule.
IBM mastered vertical integration with the mainframe. It appears to be on a path to repeat this dynamic with Quantum. But there’s a giant middle of the portfolio that lacks that coherence and represents a platform opportunity for IBM. As we wrote when Arvind Krishna took the helm, the company’s future relies on its innovation engine. IBM has played the labor arbitrage game for years, offshoring much of its critical product development. We’d like to see an intensive focus on developing a stronger platform story beyond mainframe and Red Hat. In particular, we’d be excited to get more clarity on IBM’s data platform as an underpinning to its agentic future.
All that said, IBM’s corporate development execution appears to be back on track (see below). We’re encouraged not only by IBM shedding non-strategic assets, but also by an M&A strategy that builds on the 2019 acquisition of Red Hat, much of which is designed to automate many aspects of IT and further take labor out of technology management.

The investor angle
The investor priority regarding IBM is a laser focus on sustained double-digit software growth (with Red Hat as the core platform), rising contribution from agentic solutions sold on outcomes, expanding AI MIPS attach on Z, and proof that partnerships like Groq and Anthropic translate to measurable and monetizable latency/cost advantages in production. If IBM proves it can repeatedly take customers from “weeks” to “days” with pre-canned agent blueprints, we believe the business sees expanded operating leverage in both Software and higher-margin Consulting.
Returning free cash flow is attractive to Wall Street, but ultimately that innovation engine must present itself to justify the higher valuation IBM aspires to reach. With more companies eyeing trillion-dollar valuation status each day, IBM has the opportunity to reach that milestone. It won’t, in our opinion, come from stock buybacks and dividends. Those will help, but the real value will come from sustained, proprietary advantage, applying AI internally and gaining outsized operating leverage in its business.
Bottom line – Enabling customers to scale without labor
In our opinion, IBM has found a differentiated path in enterprise AI by productizing workflow-level outcomes across a stack that, while sometimes confusing, customers generally trust. Client Zero proof, a workload-first hybrid design, durable platforms reimagined for agents, and a pragmatic quantum roadmap create a credible basis for sustained advantage. A key factor is simplicity, not only of product experiences but of business interactions. If IBM can operationalize days-not-weeks time-to-value with transparent packaging and repeatable business patterns, our view is that IBM will compound share in the agentic era even as consolidation hits some of the players in the field. We assign a ~70% probability that this technology-arbitrage thesis delivers above-market growth over the next 12–24 months, with upside if the company can demonstrate it can help customers scale revenue faster than labor costs. IBM appears to be doing so internally, and if it can transfer those learnings to clients, it’s a winning strategy in what we increasingly see as a winner-take-most market for end customers.