Formerly known as Wikibon

Cisco Enables “Scale-Across” AI Environments

Cisco this week unveiled its most powerful and efficient routing platform to date, the Cisco 8223, powered by the company’s new Silicon One P200 chip. The announcement, made ahead of the Open Compute Project (OCP) Global Summit, marks a significant step forward in AI-era networking as hyperscalers and enterprises increasingly hit the limits of scaling within a single data center. The 8223 gives organizations a third dimension of scale, “scale-across,” alongside the familiar “scale-up” and “scale-out” dimensions. The “scale-across” architecture securely connects multiple AI clusters across geographically distributed data centers with the requisite performance, programmability, and efficiency.

Distributed AI Infrastructure

AI models like OpenAI’s ChatGPT have grown exponentially in complexity, requiring thousands of GPUs and increasingly larger datasets. As Martin Lund, Cisco’s EVP of the Common Hardware Group, explained, “AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart.”

The 8223 is designed precisely for that environment. It’s the first 51.2 terabit-per-second (Tbps) deep-buffer fixed router optimized for inter-data-center connectivity, providing the high throughput, low latency, and reliability that large-scale AI training and inference demand. The platform is built to handle over 20 billion packets per second, scale to 3 exabits of interconnect bandwidth, and deliver line-rate encryption with post-quantum resilience, enabling both performance and security at unprecedented levels.
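The headline figures are internally consistent. A quick back-of-envelope check (the figures are Cisco’s; the arithmetic below is ours) shows what average packet size the platform must sustain to hit both numbers at once:

```python
# Back-of-envelope check of the 8223's announced throughput figures.
# THROUGHPUT_BPS and PACKET_RATE_PPS come from the announcement;
# the implied packet size is our own derivation.

THROUGHPUT_BPS = 51.2e12   # 51.2 Tbps line rate
PACKET_RATE_PPS = 20e9     # "over 20 billion packets per second"

# Average packet size (bytes) needed to achieve both figures simultaneously
avg_packet_bytes = THROUGHPUT_BPS / 8 / PACKET_RATE_PPS
print(f"Implied average packet size: {avg_packet_bytes:.0f} bytes")  # 320 bytes
```

An implied average of 320 bytes is well below typical internet packet sizes, which suggests the forwarding pipeline has headroom even under small-packet AI collective traffic.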

Cisco’s new Silicon One P200 chip sits at the heart of the 8223. It’s the latest in Cisco’s unified networking silicon architecture, which supports use cases across hyperscaler, AI/ML, enterprise, and service provider environments. The P200 combines the deep buffering and programmability of routing silicon with the power efficiency of switching silicon, which Cisco believes is a breakthrough that will allow customers to build large-scale AI backbones without compromising energy efficiency.

Addressing Power, Space, and Scalability Constraints

The announcement comes amid a fundamental shift in data center design. Power and cooling constraints are forcing hyperscalers to build data centers in new regions, often farther from population centers, and distribute workloads across multiple facilities. Cisco executives described this as the “great data center migration” driven by AI’s massive power and bandwidth demands.

Traditional “scale-up” and “scale-out” architectures have reached their limits; hyperscalers and neoclouds now need to “scale across” data centers. This shift dramatically increases interconnect traffic (Cisco estimates up to 14 times more bandwidth than traditional WAN architectures) and requires robust, lossless connectivity across hundreds or even thousands of kilometers.

To meet these demands, the 8223 integrates deep-buffer capabilities to handle congestion, coherent 800G optics supporting up to 1,000 km reach, and programmable silicon that adapts to evolving AI traffic patterns and new protocols without hardware changes. Despite these capabilities, the system remains highly efficient: it delivers 65% lower power consumption and 70% less rack space than traditional modular chassis systems delivering similar bandwidth.
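The deep-buffer requirement follows directly from physics. At 1,000 km, the amount of data in flight on a single 800G link, its bandwidth-delay product, is close to a gigabyte, which is roughly the scale of buffering needed to absorb a burst while congestion control reacts. A minimal sketch, where the fiber refractive index is our illustrative assumption rather than a Cisco figure:

```python
# Why long-haul AI interconnect needs deep buffers: the bandwidth-delay
# product (BDP) of one 800G link over 1,000 km of fiber.
# The refractive index is a typical single-mode fiber value (assumption).

C = 299_792_458            # speed of light in vacuum, m/s
REFRACTIVE_INDEX = 1.47    # typical for single-mode fiber (assumption)
LINK_KM = 1_000            # 800G coherent optics reach cited by Cisco
LINK_BPS = 800e9           # one 800G port at line rate

one_way_s = LINK_KM * 1_000 / (C / REFRACTIVE_INDEX)
rtt_s = 2 * one_way_s
bdp_bytes = LINK_BPS * rtt_s / 8  # data in flight over one round trip

print(f"One-way propagation delay: {one_way_s * 1e3:.1f} ms")
print(f"In-flight data at line rate (BDP): {bdp_bytes / 1e9:.2f} GB")
```

Shallow-buffer switching silicon typically carries tens of megabytes of buffer; absorbing round-trip-scale bursts at these distances is why routing-class deep buffers matter for scale-across designs.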

A Flexible Platform for Hyperscalers, Service Providers, and Enterprises

Cisco is taking a multi-pronged approach to deployment flexibility. The 8223 will initially ship with open-source SONiC software to meet hyperscaler demand for open, customizable environments. Support for IOS XR, Cisco’s flagship routing OS, will follow shortly, expanding use cases into WAN core, backbone, and aggregation routing. The same P200 silicon will also appear in upcoming Nexus data center systems running NX-OS, providing architectural consistency from AI training clusters to enterprise data centers.

This flexibility allows Cisco to serve a broad range of customers, from hyperscalers like Microsoft and Alibaba Cloud to global service providers such as Lumen. As Dave Maltz, Corporate Vice President for Azure Networking, noted, “The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts. The common ASIC architecture in Silicon One has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments.”

Alibaba Cloud’s Dennis Cai echoed this sentiment, highlighting how the P200 will “replace traditional chassis-based routers with clusters of P200-powered devices,” improving scalability and reliability across Alibaba’s expanding eCore architecture.

The Reliability Imperative

One of the most significant technical differentiators Cisco emphasized during its analyst briefing was deep buffering, a capability often misunderstood in the AI networking community. Critics argue that deep buffers can introduce latency or jitter, but Cisco’s engineers countered that in synchronous AI workloads (like those used in GPU training), job completion time depends on the slowest path, not average latency.

As Rakesh Chopra, Distinguished Engineer at Cisco, explained, “At the scale we’re talking about, failures are the norm, not the exception. When you have a failure and the network needs to adapt, you need bigger buffers to absorb that traffic while congestion control algorithms react.” In essence, deep buffers ensure that AI training workloads remain synchronized and resilient in the face of unpredictable congestion and long-haul transport delays.
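The slowest-path argument is easy to see with a toy model. In a synchronous step, every flow must finish before the step completes, so a single delayed flow sets the step time even when the average stays low. All figures below are hypothetical, chosen only to illustrate the effect:

```python
# Toy illustration of the point Cisco's engineers make: a synchronous AI
# training step finishes only when its slowest flow finishes, so tail
# latency, not average latency, governs job completion time.
# All numbers here are hypothetical.

N_FLOWS = 512        # flows in one all-reduce step (hypothetical)
BASE_MS = 1.0        # nominal per-flow transfer time, ms
SPIKE_MS = 25.0      # one flow delayed by congestion after a re-route

# 511 flows finish on time; a single straggler hits the congested path
delays = [BASE_MS] * (N_FLOWS - 1) + [SPIKE_MS]

mean_ms = sum(delays) / len(delays)
step_ms = max(delays)  # the step completes only when ALL flows complete

print(f"Mean flow delay: {mean_ms:.2f} ms")   # ~1.05 ms
print(f"Step completion: {step_ms:.2f} ms")   # 25.00 ms, gated by the straggler
```

The mean barely moves, yet the step takes 25x longer. Deep buffers aim to prevent that straggler from becoming a dropped-and-retransmitted flow, which would stretch the tail even further.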

Cisco also stressed the security dimension of inter-data-center networking. As AI clusters expand geographically, the attack surface grows. The 8223 mitigates this risk with line-rate MACsec encryption, cloud-scale telemetry, hardware root-of-trust, and post-quantum key management, all integrated directly into the silicon and control plane.

Efficiency and Programmability at Scale

The 8223’s most striking achievement may be its balance of performance and power efficiency. Cisco compared the 8223’s 51.2T routing capability in a compact 3RU form factor to a traditional 10RU chassis system requiring nearly 100 separate chips to achieve equivalent throughput. The result: massive savings in power (65% less), space (70% less), and complexity, all while maintaining full programmability through Cisco’s P4 language support and adaptive packet processing pipeline.

Cisco also underscored its commitment to optical innovation. The system supports a wide range of 800G pluggable optics, enabling flexible deployments from data center interconnect (DCI) to metro and long-haul applications, with reach extending beyond 1,000 km.

Industry Implications

Cisco’s 8223 and Silicon One P200 extend the company’s leadership in unified networking and underscore its strategic pivot toward AI infrastructure enablement. Since its launch in 2019, the Silicon One portfolio has expanded across five product series, all built on a common architecture. This unified approach simplifies deployment, validation, and innovation across enterprise, cloud, and service provider environments, which is valuable for customers looking to consolidate networking architectures for AI workloads.


OurANGLE

AI adoption is reshaping data center and network architectures. Traditional “scale-up” and “scale-out” architectures now require the ability to “scale-across” and interconnect AI clusters across regions. This shift magnifies the importance of high-capacity, power-efficient, and secure routing systems that can handle massive, synchronized data flows without interruption.

Cisco’s 8223 directly addresses this new reality. By combining deep buffering, programmability, and switch-like efficiency in a single ASIC, Cisco provides a foundation for the next generation of distributed AI infrastructure, one that is as flexible as it is secure. For hyperscalers and enterprises alike, this means faster model training, better resilience, lower power costs, and an architecture ready for the next wave of AI innovation.

As AI continues to drive compute and data intensity, networking is becoming the critical connective tissue of intelligence. With the 8223 and Silicon One P200, Cisco has positioned itself well to take advantage as organizations transition to the third dimension of scale, “scale-across”.
