At MWC in Barcelona, an invitation-only Senior Leadership Exchange roundtable brought together C-level leaders across telecom, banking, government, industrial, energy, and the investment community to discuss “Bridging AI and Quantum to stay ahead in the new era.” The session, organized by IBM and held under the Chatham House Rule, was candid but non-salesy by design. It was structured to deepen peer networks, compare industry directions, and validate what is real versus aspirational as AI moves into production and quantum moves from research into planning.
The session opened with a practical discussion of AI readiness and maturity within each of the firms represented. Quantum was held to the end of the conversation so we could address the critical question for leaders: not why quantum matters, but when they should prepare for disruption and what “readiness” means in operational terms. IBM’s Luq Niazi co-hosted with this author and grounded the discussion with context on the scale of activity in the market and the ecosystem. He shared that IBM has a global footprint of 20+ quantum systems available via cloud access and 80+ built historically, plus accelerating collaboration activity across industries. The most important takeaway from the room on quantum wasn’t enthusiasm or skepticism; it was sequencing. These leaders are trying to operationalize AI now while avoiding a future scramble on quantum-safe security and preparing for a longer-cycle transition to quantum-enabled advantage.
What the room said about AI at scale
The AI portion quickly converged on a pattern we’ve been hearing repeatedly in the market: the organizations making meaningful progress are not starting with models; they are starting with data foundations and governance. A telecom executive described AI as an enabler for building “data products,” emphasizing that the hardest part is reorganizing the enterprise around reusable data assets rather than siloed operational and analytical stacks. The same executive noted that while data mesh was directionally right as a concept, execution has been inconsistent, and the move toward more agentic and cross-domain workflows exposes the seams in today’s architectures.
A financial services leader reinforced that the early AI journey remains heavily weighted toward efficiency rather than new revenue, with attention focused on productivity gains, workload management, client servicing, and process optimization. Another participant described a practical maturity model that begins with assessing data readiness before AI readiness, then tailors use cases to specific personas (executive decision support vs. product manager vs. operations leader). The theme was consistent: AI becomes real when it is tied to a measurable operational outcome, not when it is deployed as a generalized capability.
Note: While data maturity varies widely across firms, our research with leading AI practitioners suggests that getting your data infrastructure in order (e.g. databases, vector stores, LLMs, networking) and letting AI help mature your data is the fastest path to value.
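To make that pattern concrete, here is a minimal, illustrative sketch of “letting AI help mature the data”: profile a table with pandas, then hand only the profile to a language model and ask it to propose data-quality rules. The call_llm helper is a hypothetical stand-in for whichever model endpoint an organization actually uses, not a specific product API.

```python
# Minimal sketch: profile a table, then ask a model to propose data-quality rules.
# call_llm is a hypothetical stand-in for the organization's own model endpoint.
import json
import pandas as pd

def profile_table(df: pd.DataFrame) -> dict:
    """Build a lightweight profile: type, null rate, and cardinality per column."""
    return {
        col: {
            "dtype": str(df[col].dtype),
            "null_rate": round(float(df[col].isna().mean()), 3),
            "distinct_values": int(df[col].nunique()),
        }
        for col in df.columns
    }

def propose_quality_rules(df: pd.DataFrame, call_llm) -> str:
    """Send the profile (not the raw data) to the model and ask for validation rules."""
    profile = profile_table(df)
    prompt = (
        "You are a data steward. Given this column profile, propose validation "
        "rules and flag columns that look unreliable:\n" + json.dumps(profile, indent=2)
    )
    return call_llm(prompt)
```

The design point is that the model reasons over metadata rather than raw records, which keeps the pattern usable even where data cannot leave a sovereign boundary.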
We also saw clear evidence that the “agentic moment” is underway, but still fragmented. Our poll on agent adoption – “Which of the following best describes your use of agents in your organization?” – revealed that outright blocking agentic development was essentially absent in this cohort. Most participants indicated either sanctioned experimentation or a mix of internal builds and SaaS vendor-provided agents. That said, the room also acknowledged that multi-agent orchestration across core enterprise systems is still early, difficult, and risky. Participants pointed to both maturity constraints and risk concerns, including the lack of robust testing environments and the need for stronger identity and context management as agents begin to operate across sovereign and non-sovereign domains.
One of the sharper insights came from a telecom leader who argued that agentic systems cannot scale without “being clever enough” to understand context and policy boundaries, particularly when serving customers with different sovereignty constraints. That implies a new architectural requirement: enterprises need a more unified approach to identity, permissions, and policy enforcement across multiple agent frameworks and multiple data planes. We believe this is a major, underappreciated issue for the next 12–24 months.
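A minimal sketch of what such a unified approach could look like is a single policy gate that every agent framework calls before invoking a tool. The AgentContext fields and the POLICIES table below are illustrative assumptions, not drawn from any specific product.

```python
# Illustrative policy gate shared by all agent frameworks and data planes.
from dataclasses import dataclass

@dataclass
class AgentContext:
    agent_id: str          # identity of the calling agent
    acting_for: str        # customer or tenant the agent is serving
    data_domain: str       # e.g. "sovereign-eu" vs. "global"
    requested_tool: str    # tool/API the agent wants to invoke

# Illustrative policy table: which tools each data domain may reach.
POLICIES = {
    "sovereign-eu": {"crm_lookup", "billing_read"},
    "global": {"crm_lookup", "billing_read", "export_report"},
}

def authorize(ctx: AgentContext) -> bool:
    """Single enforcement point, regardless of which agent framework is calling."""
    allowed = POLICIES.get(ctx.data_domain, set())
    decision = ctx.requested_tool in allowed
    # In practice the decision would also be logged for audit and tied to a
    # verified agent identity, not just the claimed agent_id.
    return decision

# Example: an agent serving a sovereign-EU customer is blocked from exporting data.
ctx = AgentContext("agent-42", "customer-eu-007", "sovereign-eu", "export_report")
assert authorize(ctx) is False
```

The value is in the seam: policy lives in one place, so adding a new agent framework or data plane does not mean re-implementing sovereignty rules.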
Hybrid architectures: control meets economics
Hybrid came up repeatedly as a practical response to sovereignty, latency, and unit economics. One telecom leader described building a more controlled infrastructure approach to maintain agility and avoid excessive dependence on third parties – less about on-prem dogma and more about flexibility: control, the ability to tailor capabilities to different customer realities, and the ability to iterate faster. Another participant emphasized token economics (“tokenomics”): if inference costs rise unpredictably, proximity and optimization become strategic knobs that can be turned.
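A rough break-even sketch shows why tokenomics becomes a board-level knob. All prices, volumes, and overhead factors below are hypothetical placeholders, not quotes from any provider.

```python
# Hypothetical break-even comparison: hosted per-token pricing vs. controlled infrastructure.
def monthly_hosted_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Cost of buying inference per token from a hosted API."""
    return tokens_per_month / 1_000_000 * price_per_million

def monthly_selfhosted_cost(gpu_nodes: int, node_cost_per_month: float,
                            ops_overhead: float = 1.3) -> float:
    """Amortized cost of running inference on controlled infrastructure,
    with an overhead multiplier for power, staff, and utilization gaps."""
    return gpu_nodes * node_cost_per_month * ops_overhead

tokens = 40_000_000_000  # hypothetical: 40B tokens/month across workloads
hosted = monthly_hosted_cost(tokens, price_per_million=2.50)
owned = monthly_selfhosted_cost(gpu_nodes=8, node_cost_per_month=9_000)

print(f"hosted:      ${hosted:,.0f}/month")
print(f"self-hosted: ${owned:,.0f}/month")
# As volume grows or per-token pricing moves, the crossover shifts - which is
# exactly why proximity and optimization become strategic knobs.
```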
An equipment vendor executive broadened the conversation, suggesting that AI is “at least four distinct things” inside large organizations: office productivity, R&D tooling, telco-grade AI in network operations, and silicon-level optimization. This executive highlighted the training vs. inference split and the constraints of edge inference in environments where power consumption and latency are existential. In our view, this is where AI strategy becomes inseparable from infrastructure design. In other words, not all AI workloads will move to the public cloud, and not all AI economics require centralized architectures.
The bridge to quantum: belief is high, readiness is low, and the focus right now is AI
The quantum transition began with an insight from IBM’s “Enterprise 2030” survey that quieted the room: a majority of executives expect quantum-enabled AI to transform their industries, yet only a minority expect to be using quantum computing by 2030. That gap kicked off the discussion. When the co-hosts asked the room to classify attitudes toward quantum – ranging from “too immature to worry about” to “threat requiring quantum-safe action” – the most common response reflected insufficient clarity and perceived immaturity, with significant interest in quantum safe as the first imperative.
Several participants stressed that the missing piece is the mapping from quantum capability to enterprise use cases, and then to a cost-effective consumption model. Multiple executives asked variations of the same question: How would quantum actually be consumed as a service? What are the practical limits, the commitment model, and the economics? We believe this is the right question at the right time. In AI, adoption accelerated when tools became tangible, priced, and accessible to practitioners. Quantum is not there yet for most enterprises, particularly outside a handful of research, defense, and regulated domains. Quantum has not had its ChatGPT moment and perhaps never will.
Quantum safe: the near-term trigger, QKD the longer-cycle infrastructure play
If there was a consensus item, it was that post-quantum cryptography (PQC) is the most actionable near-term path, while quantum key distribution (QKD) remains constrained by cost and infrastructure realities. A telecom leader noted that QKD’s economics are challenging at scale, with practical deployment limitations in fiber networks and the need for repeaters over distance. The implication was that 2030 may be too optimistic for broad QKD rollout beyond targeted corridors, even though pilots are happening.
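One way to read “actionable” here is cryptographic agility: isolating key establishment behind a single seam so a classical algorithm can be swapped for a PQC or hybrid scheme (e.g. an ML-KEM-based one) by configuration as standards mature. The sketch below is illustrative only; the X25519 path uses the real Python `cryptography` package, while the PQC entry is an explicit placeholder rather than a real implementation.

```python
# Crypto-agility sketch: call sites ask a registry for a named scheme instead
# of hard-coding one. The PQC entry is a placeholder, not real cryptography.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

class X25519Exchange:
    """Classical elliptic-curve key agreement (both sides simulated locally for brevity)."""
    def shared_secret(self) -> bytes:
        ours, theirs = X25519PrivateKey.generate(), X25519PrivateKey.generate()
        return ours.exchange(theirs.public_key())

class PQCPlaceholder:
    """Stand-in for an ML-KEM implementation; returns a fixed value for illustration only."""
    def shared_secret(self) -> bytes:
        return b"\x00" * 32  # NOT real key agreement - swap in an actual KEM library

REGISTRY = {
    "x25519": X25519Exchange,
    "ml-kem-placeholder": PQCPlaceholder,
}

def establish_secret(scheme: str) -> bytes:
    """Single seam the rest of the application depends on."""
    return REGISTRY[scheme]().shared_secret()

# Today: classical. Later: flip configuration to a PQC or hybrid scheme.
secret = establish_secret("x25519")
```

The inventory-and-seam work is what makes a later PQC mandate a configuration change rather than a rewrite.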
At the same time, the group recognized that policy and mandate can change the timeline. A leader from a fast-growing international telecom group described how national cyber authorities can force the issue for critical infrastructure sectors – defense, aviation, utilities, and government – driving test-and-pilot programs with explicit deployment targets. IBM added real-world evidence that national efforts are already underway in parts of Europe, and that leading carriers have demonstrated quantum transport over live commercial fiber in city-scale environments. We believe this “mandate-driven adoption” could be a catalyst, but it’s early. Quantum safe will not be adopted like consumer AI, in our view; it will advance through regulation, critical infrastructure priorities, and sovereign investment decisions.
A sovereign and energy-shaped future
A compelling late-session point was the intersection of quantum, sovereignty, and energy. Participants raised the concern that AI at scale is on a collision course with power availability and grid capacity, and that the constraints of quantum – physics, cost, and operational requirements – could naturally push deployments toward sovereign or national infrastructure models. Quantum, of course, demands far less from the grid than AI. One host hypothesis was that telecom operators may evolve into providers of dual-use networks – classical connectivity plus quantum-safe services, monetized for governments, banks, utilities, and other critical sectors. That model implies not only a new value pool; it also raises the obvious question posed by an attendee: who pays for it?
One host argued that leaving the outcome purely to free market forces risks missing the opportunity, especially in regions facing tighter energy constraints. Our view is that this is a realistic point. AI industrialization is largely capital-market driven; quantum infrastructure has a reasonable path to become policy-driven. Over the next three to five years, the winners may be the ecosystems that align policy, energy planning, and industry alliances, not just the ones that build the best technology.
Where the session closed
The roundtable did not produce a single “aha” that quantum is suddenly imminent. Instead, it produced something more valuable for CxOs: clarity on sequencing. The group is deep in the operational realities of scaling AI – data products, governance, hybrid design, token economics, and early agentic systems – while simultaneously acknowledging that quantum readiness will arrive through security first, then targeted use cases, and only later through broader commercial advantage.
We believe the most notable conclusion is that education on what quantum makes possible, and on when and where it applies, is the prerequisite for any readiness initiative. This will not happen in a “big bang.” It will be phased, shaped by critical infrastructure priorities, and potentially accelerated by quantum-safe mandates before it is pulled forward by quantum-advantage use cases.
Top takeaways
- AI maturity is uneven but advancing: this cohort has moved beyond experimentation, yet data readiness, AI trust, and governance remain limiting factors for scale.
- Agents are here; orchestration is not: internal efforts and vendor agents are proliferating, but multi-agent and cross-system orchestration with context-aware identity remain early and risky.
- Quantum is believed, but not on the operational radar: leaders expect impact, but use-case clarity is lacking and the consumption model is still too vague for most to budget meaningfully.
- Quantum safe is a potential bridge: PQC is the near-term action path; QKD is real but cost- and infrastructure-constrained outside targeted applications and mandated environments.
- Sovereignty and costs will shape outcomes: policy-driven investment and critical infrastructure priorities may determine who captures advantage as the adoption curve for quantum accelerates.
In our opinion, this session captured a pivot-year dynamic: AI has shifted from “whether” to “how fast,” while quantum has shifted from “why” to “when” – with quantum safe acting as a practical adoption catalyst. The next three to five years will reward organizations that treat readiness as an operating discipline – AI maturity, architectural discipline, ecosystem alignment, and cryptographic agility.
Thanks to IBM for organizing, and to all the executives who participated and shared their insights. It was a pleasure to host.

