At the Networking for AI Summit, we conducted three in-depth interviews with HPE Networking leaders, highlighting how the combined HPE–Juniper organization is positioning itself to help enterprises scale AI securely, efficiently, and with greater autonomy. We spoke with Jeff Aaron, VP of Product Marketing, for an overview; with Bob Friday, Chief AI Officer, about self-driving networks; and then hosted a panel discussion on data center AI factories with Bharath Ramesh, Head of Product for AI Factory Solutions; Praful Lalchandani, Vice President of Products for the Data Center Business Unit; and Jon Green, Chief Technology Officer and Chief Security Officer.
The HPE Networking Portfolio
In his session, Jeff Aaron positioned the combined HPE–Juniper portfolio as addressing both “networking for AI” and “AI for networking.” The portfolio unites switching, routing, operations, and security for AI data centers, while also extending agentic and self-driving AIOps across campus, branch, and WAN environments.
Aaron stressed that AI factories intensify reliance on low-latency, high-performance connectivity, both within data centers and across sites. He described self-driving networking as a journey in which organizations move from telemetry collection to proactive actions, and eventually to autonomous operations. Conversational interfaces and agentic AI are expected to accelerate this progression. Early customer experiences show that adopting HPE’s self-driving capabilities leads to fewer tickets, faster deployments, lower OPEX, and reduced errors, freeing IT staff to focus on strategic business outcomes.
The merger of HPE and Juniper strengthens both sides of this equation. For AI for networking, it delivers advanced AI/ML, access to the industry’s largest data lake, and agentic capabilities. For networking for AI, it provides rack-scale systems, GreenLake delivery, and validated designs.
Trust on the Path to Self-Driving Networks
Bob Friday framed the industry’s transition to self-driving networks through the lens of a car analogy. Similar to how consumers became familiar with technologies such as lane assist and adaptive cruise control prior to adopting autonomous vehicles, IT teams are implementing automation in gradual steps as they move toward fully self-driving networks. He noted that AI-driven operations are already helping organizations shift from reactive “firefighting” toward proactive management, freeing IT staff to focus on higher-value initiatives. HPE’s Marvis Actions 2.0 reinforces this trust by providing audit logs, feedback loops, and measurable outcomes such as reduced Zoom or Teams “bad minutes.” Importantly, Friday emphasized that human-in-the-loop oversight remains essential, with experienced engineers validating outcomes and refining models as adoption scales.
Data Center AI Factories
In a panel discussion, Bharath Ramesh, Praful Lalchandani, and Jon Green explored how data center architectures must evolve to support AI factories. Training and inference workloads create intense east-west traffic, requiring high bandwidth, ultra-low latency, and lossless delivery. The panel highlighted how Ethernet—with capabilities such as 800G, RoCEv2, and AI-optimized techniques like RDMA-aware load balancing—is catching up to and, in many cases, surpassing InfiniBand in scalability and manageability.
Operational complexity was another focus area. HPE’s Apstra provides intent-based automation that extends visibility from fabric to NIC, surfaces congestion points, and, together with Mist AIOps, delivers automated tuning and self-driving recommendations. This capability can compress troubleshooting timelines from “infinite” to just minutes. Building on HPE’s Cray and SGI heritage, validated reference architectures are designed to accelerate deployments, scaling from pilot clusters to hundreds of thousands of nodes while helping close skills gaps.
Security was also front and center. Jon Green emphasized that multi-tenant AI services raise integrity risks, while on-premises AI factories allow for tighter control. Protecting training data pipelines from adversarial inputs and maintaining transparency across clusters were identified as critical requirements for AI at scale.
Our ANGLE
Across the three sessions at the Networking for AI Summit, a consistent theme emerged: networking in the AI era is about building trust, maximizing GPU utilization, and simplifying operations at scale. Enterprises should expect Ethernet-based fabrics with intent-driven automation and workload awareness to become the default for AI factories. Human-in-the-loop governance, audit trails, and security embedded from the outset will be essential to maintaining operator confidence.
The integration of HPE and Juniper positions the combined organization to deliver end-to-end AI-ready solutions that span data centers, WAN, and edge while helping accelerate the industry’s progression toward self-driving networks. The bottom line is clear: HPE Networking views AI as both the driver of new infrastructure requirements and the engine of operational simplicity. With validated blueprints, AI-optimized Ethernet, closed-loop AI and Agentic Ops, and a trust-first approach, enterprises are better positioned to safely embrace the era of self-driving networks.
For more information on HPE Networking solutions, visit its website.