The public cloud, despite its inherent security benefits, is now perceived as the riskiest part of the technology stack by 70% of IT and security leaders. This is not because the cloud itself is insecure, but because the confluence of exploding AI workloads, fragmented security tools, and complex hybrid and multi-cloud environments has created significant blind spots.
In a conversation with Chaim Mazal, Chief AI and Security Officer at Gigamon, one imperative emerged as critical: moving beyond traditional logs and network performance monitoring (NPM). The solution lies in deep observability, which uses immutable network telemetry to unify visibility and deliver actionable intelligence, securing the new era of autonomous, AI-driven applications.
Why Traditional Tools Are Failing the AI Test
Traditional security and observability tools were purpose-built for siloed environments (on-prem, virtualization, public cloud). This fragmentation is now causing a massive breakdown in visibility as network traffic surges, in some organizations by nearly 200% over the last two years, driven largely by commercial and self-hosted AI initiatives.
Mazal points to several fundamental reasons why traditional tooling is failing to detect rogue AI behavior or manage the risk of the new environment:
- Disparate Data Sets: Security teams are ingesting varied, disparate logs and telemetry that lack a common language or structure, making it nearly impossible to hold informed, unified conversations across NetOps, SecOps, and CloudOps teams (see the sketch after this list).
- Unstructured Traffic: The traffic generated by LLMs and proprietary AI models is often not formalized or structured, making it hard to find repeatable patterns for detection, leaving many companies “flying blind.”
- Overly Permissive Access: To enable rapid innovation, organizations often have to grant overly permissive non-human access to GenAI agents and LLMs. Because data is frequently extracted from legacy systems and moved, it can lose the security controls of its source system, creating massive gaps in Identity and Access Management (IAM) and permissioning that traditional tools cannot track holistically across environments.
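To make the "disparate data sets" problem concrete, the sketch below shows the kind of normalization deep observability implies: two telemetry sources with different shapes mapped into one common flow schema so every team reasons over the same data. The file formats, field names, and FlowEvent type here are illustrative assumptions, not Gigamon's actual data model.

```python
from dataclasses import dataclass

# Common event schema: one structure NetOps, SecOps, and CloudOps can all
# reason about, regardless of where the telemetry originated.
@dataclass
class FlowEvent:
    src: str
    dst: str
    dst_port: int
    bytes_sent: int
    source_system: str

def from_firewall_csv(line: str) -> FlowEvent:
    """Hypothetical on-prem firewall export: 'src,dst,port,bytes'."""
    src, dst, port, nbytes = line.strip().split(",")
    return FlowEvent(src, dst, int(port), int(nbytes), "firewall")

def from_cloud_flow_log(record: dict) -> FlowEvent:
    """Hypothetical cloud flow-log record with vendor-specific field names."""
    return FlowEvent(
        src=record["srcaddr"],
        dst=record["dstaddr"],
        dst_port=record["dstport"],
        bytes_sent=record["bytes"],
        source_system="cloud-vpc",
    )

# Two very different inputs, one congruent data set downstream.
events = [
    from_firewall_csv("10.0.0.5,203.0.113.9,443,18234"),
    from_cloud_flow_log({"srcaddr": "10.1.2.3", "dstaddr": "198.51.100.7",
                         "dstport": 443, "bytes": 90211}),
]
```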
The Imperative for a Single Source of Truth
The 70% risk perception around the public cloud is rooted in three key challenges: skill gaps, complexity, and access management (the permissioning gaps described above).
- Skill Gap: A shortage of specialists is forcing organizations to hire generalists, which increases pressure on vendors to reduce complexity and simplify deployments.
- Complexity and Lateral Traffic: As organizations scale across on-prem, virtualization, and multi-cloud, the intertwining mesh of communication (ingress/egress/lateral traffic) creates risk. Mazal emphasizes that the solution is not a “single pane of glass” for the tools themselves, but a “singularity in the data set.”
This is the core mandate of deep observability: augmenting logs and traces with immutable network traffic, the raw data that cannot be tampered with or forged, to create a unified, single source of truth.
“The real value of deep observability is being able to plug in this immutable network traffic across all of your environments and have those same representative data sets being fed into whatever tool that that team needs to be successful.”
This network intelligence is the key that enables teams to:
- Identify and Segment Traffic: Quickly identify GenAI traffic spikes and segment them from other workloads (see the sketch after this list).
- Apply Zero Trust Controls: Implement consistent policies across hybrid environments.
- Achieve Data Parity: Normalize conversations across all operational teams by validating their large data sets against an unimpeachable source.
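As a rough illustration of the first point, the sketch below flags internal sources whose GenAI-bound traffic has spiked against a baseline, working from normalized flow records like those above. The host list, field names, and 3x threshold are hypothetical; in a real deployment the classification and baselining would come from network-derived intelligence rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical hostnames indicating GenAI/LLM API traffic.
GENAI_HOSTS = {"api.openai.com", "api.anthropic.com"}

def genai_bytes_by_source(flows):
    """Sum GenAI-bound bytes per internal source from normalized flow records,
    e.g. {"src": "10.0.0.5", "dst_host": "api.openai.com", "bytes": 18234}."""
    totals = defaultdict(int)
    for f in flows:
        if f["dst_host"] in GENAI_HOSTS:
            totals[f["src"]] += f["bytes"]
    return totals

def flag_spikes(current, baseline, ratio=3.0):
    """Flag sources whose current GenAI traffic exceeds their baseline by ratio x."""
    return [src for src, b in current.items() if b > ratio * baseline.get(src, 1)]

# A workstation whose GenAI traffic has tripled against its baseline is
# surfaced for segmentation or zero trust policy review.
current = genai_bytes_by_source([
    {"src": "10.0.0.5", "dst_host": "api.openai.com", "bytes": 600_000},
])
print(flag_spikes(current, baseline={"10.0.0.5": 100_000}))
```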
Security as the Driver of AI Strategy
The race for innovation is causing organizations to willingly accept risk. Currently, only 36% of security leaders have a seat at the AI strategy table, a number Mazal predicts will "exponentiate quite quickly."
The reason: security is poised to become the primary driver and innovator for AI across the enterprise. Security leaders are the only executives whose domain inherently touches every technology, strategy, and risk element of the organization. Because AI deployment is impossible without proper controls, security cannot be an afterthought, just as developers learned they must "slow down to go faster" by embedding security into the development process.
This shift will bring the goals of observability and security into alignment. Deep observability is now a must-have for securing autonomous AI solutions because it delivers the non-negotiable outcome: actionable security insights. When a performance or availability issue surfaces, security is now consistently ranked as the number one or number two actionable insight. By using immutable network telemetry to fill the security blind spots created by the AI explosion, organizations can finally manage public cloud risk and ensure that innovation is both fast and secure.