Executive Summary
The enterprise data center is undergoing a fundamental transformation. What was once a general-purpose computing environment is being re-architected into an AI factory: a system designed to continuously transform data into intelligence at industrial scale. We believe this shift represents a structural reorientation of infrastructure, software, and operating models around accelerated compute, networking, and storage; high-speed data pipelines; and agent-driven workflows.
While this transformation is widely understood from a performance and economic perspective, in our view, its security implications are significantly underappreciated.
AI factories introduce a new class of risks that extend beyond traditional cybersecurity models. In this world, data is not static, bounded, or easily classified; it is continuously generated, transformed, and consumed across distributed environments. Models run on deterministic systems that behave predictably, yet the models themselves are adaptive, probabilistic engines that evolve over time. Agents increasingly act autonomously, interacting with enterprise systems, executing workflows, and making decisions with limited – or sometimes no – human intervention.
These characteristics render traditional security models – built around perimeter defense, static policy enforcement, and human-speed response – insufficient.
Our core thesis is as follows:
The AI factory cannot be secured using legacy approaches. It requires a new control plane: one that governs data, models, and agents in real time.
Organizations that fail to make this transition will face more than incremental increases in risk: they will operate in environments where they lack visibility into what their AI systems are doing, what data they are accessing, and what exposures they are creating.
The AI Factory Moves Data Centers From Infrastructure to Intelligence Manufacturing
The concept of the AI factory changes how we think about enterprise computing. Rather than supporting applications, these systems are designed to produce intelligence through tokens. Energy, compute, and data flow into the system and are processed through training and inference pipelines to create outputs such as text, code, decisions, predictions, and automated actions. These outputs feed directly into business processes and, in many cases, reshape them.
This transformation is enabled by a shift in the underlying tech stack. Compute is no longer only CPU-centric; it is built around GPU-accelerated, massively parallel architectures. Networking is optimized for ultra-low latency and high bandwidth to support distributed training and real-time inference. Storage is disaggregated, memory-heavy, and more efficient, allowing high-performance data access at scale. Above this infrastructure sits a system of intelligence (a cognitive surface) that integrates data, metadata, and policy into a coherent framework.
The result is a system that operates continuously, at machine speed, across multiple domains.
The AI factory transforms the very notion of a data center: it becomes a production environment for intelligence, with a fundamentally different risk profile.

Key Points:
- AI factories transform raw data into intelligence through continuous pipelines of training, inference, deployment, and improvement.
- These systems operate at scale and require integrated governance across all layers.
The Stack Change and Its Security Implications
In traditional enterprise environments, applications defined the architecture. Infrastructure was provisioned to support specific workloads, and security controls were layered on top – often as an afterthought. Boundaries were clearer: networks could be fenced off, data could be classified within known systems, and access could be managed through well-defined identity frameworks.
The AI factory completely disrupts this model.
Each layer of the stack is being rebuilt to support intelligence production rather than application hosting. Specifically, compute is dynamic and elastic, scaling across clusters and regions. Networking becomes part of the compute fabric, with traffic flowing in complex patterns across environments. Storage is no longer a passive repository but an active participant in data movement, transformation, and intelligence production. The data layer itself becomes the most critical – and most volatile – component of the system.
Security controls that depend on static definitions struggle in this environment. There is no stable perimeter to defend. There is no fixed dataset to classify. There is no predictable workflow to monitor.
Instead, organizations are faced with systems that continuously evolve, generate new data, and interact with other systems in ways that are difficult to anticipate.
Why Traditional Security Models Fail
The limitations of legacy security approaches become clearer when examined against the characteristics of AI factories.
First, the notion of a perimeter collapses. Data moves across cloud environments, on-premises systems, and edge deployments. Agents access multiple services, tools and applications through APIs and intermediate layers. The idea that sensitive data can be contained within a defined boundary is increasingly unrealistic.
Second, identity becomes significantly more complex. Traditional models focus on human users, human behaviors, and service accounts. AI factories introduce non-human actors – agents, pipelines, and automated systems – that operate with varying levels of autonomy. These entities may inherit permissions, interact with multiple systems, and generate actions that are difficult to trace back to a single source.
Third, the attack surface shifts toward data. While traditional cybersecurity has focused on protecting systems and networks, attackers targeting AI environments increasingly aim to manipulate or extract data. Techniques such as data poisoning, model inversion, and prompt injection are all methods of exploiting data pathways rather than infrastructure vulnerabilities.
Finally, the speed of operations outpaces human response. AI systems operate in milliseconds, executing tasks and making decisions faster than traditional monitoring and response mechanisms can react. This creates a gap between detection and action that adversaries can exploit.
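The third failure mode – attacks that travel through data pathways rather than infrastructure – can be illustrated with a deliberately simple control. The sketch below shows a heuristic input screen of the kind a data-plane filter might apply before a prompt reaches a model; the patterns are hypothetical illustrations, not a production detection list.

```python
import re

# Hypothetical patterns illustrating common prompt-injection phrasings.
# A real deployment would use trained classifiers, not a static list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (the |your )?system prompt",
    r"disregard (the |your )?(rules|policy|policies)",
]

def screen_prompt(text: str) -> bool:
    """Return True if the input resembles a prompt-injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A static pattern list is, of course, exactly the kind of static control this section argues against; the point is that the check sits on the data pathway itself, where real deployments would pair it with trained classifiers and output-side controls.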
The Emerging AI Risk Surface
These dynamics create what can be described as an AI risk surface – a multi-dimensional exposure model that extends beyond traditional attack surfaces.
At the data layer, risks include unauthorized access, incomplete lineage tracking, and the inability to classify or govern continuously changing datasets. At the model layer, vulnerabilities arise from inference leakage, model theft, and bias manipulation. At the agent layer, autonomous systems may execute unintended actions, access inappropriate resources, or create cascading failures.
The infrastructure layer introduces additional concerns, including resource abuse, network congestion attacks, and distributed denial-of-service scenarios. Overarching all of these is the control plane, where misconfigured policies, lack of observability, and governance gaps can amplify risks across the entire system.
The critical insight is that these risks are interconnected. A failure in one layer can propagate across the system, creating complex and difficult-to-diagnose exposures.

Key Points:
- Legacy security models rely on static policies, perimeter boundaries, and human-driven response.
- AI systems introduce dynamic data flows, distributed workloads, and autonomous actions that break these assumptions.
Security as the Control Plane
In AI factories, security can no longer be treated as a bolt-on. It must become the control plane that governs how intelligence is produced.
This requires a shift from static enforcement to dynamic, real-time governance. Policies must be embedded into the runtime environment, influencing how data is accessed, how models behave, and how agents operate. Observability must extend across the entire lifecycle, providing visibility into data flows, model interactions, and system behavior.
Human oversight remains essential, but it must be integrated into a system that operates at machine speed. Rather than attempting to manually review every action, organizations must design frameworks that enable humans to set boundaries, define policies, and intervene when necessary.
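One way to picture this division of labor is policy-as-code: humans author declarative rules ahead of time, and a runtime engine evaluates every action against them at machine speed. The sketch below assumes a made-up rule schema (actor, action, resource) purely for illustration; production policy engines have far richer semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    actor: str      # e.g. "agent:report-writer", or "*" for any actor
    action: str     # e.g. "read", "write", "execute"
    resource: str   # e.g. "dataset:sales", or a prefix ending in "*"
    allow: bool

# Humans set the boundaries ahead of time...
POLICIES = [
    Rule("agent:report-writer", "read", "dataset:sales", allow=True),
    Rule("*", "write", "dataset:pii*", allow=False),
]

def evaluate(actor: str, action: str, resource: str) -> bool:
    """...and the runtime enforces them on every action, at machine speed."""
    for rule in POLICIES:
        if rule.actor in (actor, "*") and rule.action == action:
            if resource == rule.resource or (
                rule.resource.endswith("*")
                and resource.startswith(rule.resource[:-1])
            ):
                return rule.allow
    return False  # default-deny: unmatched actions are blocked
```

The default-deny at the end is the important design choice: at machine speed, an unmatched action should be blocked first and reviewed by a human afterward, not the reverse.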

Key Points:
- Security must evolve into a real-time control plane governing data, models, and agents.
- This includes continuous monitoring, adaptive policies, and embedded governance mechanisms.
The Role of Data Governance
A central theme emerging from AI security discussions is the importance of data governance. While models receive significant attention, it is the data they consume that ultimately determines both their value and their risk.
Organizations must establish mechanisms to continuously classify data, track lineage, and enforce access controls. This is an ongoing exercise: as data is generated and transformed, governance must evolve in parallel.
Without this capability, organizations cannot answer fundamental questions:
- What data is being used by AI systems?
- Where did it originate?
- Who has access to it?
- How is it being used?
The inability to answer these questions represents a critical vulnerability.
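A minimal sketch of the bookkeeping that would let an organization answer these questions, assuming a hypothetical in-memory registry keyed by dataset name; real lineage systems track chains of transformations as graphs rather than flat records.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    origin: str                     # Where did it originate?
    classification: str             # e.g. "public", "internal", "restricted"
    readers: set = field(default_factory=set)   # Who has access to it?
    used_by: set = field(default_factory=set)   # How is it being used?

# Hypothetical registry answering: what data is being used by AI systems?
registry: dict = {}

def record_use(asset: DataAsset, system: str) -> None:
    """Register the asset and log that an AI system consumed it."""
    registry[asset.name] = asset
    asset.used_by.add(system)

# Illustrative entry with made-up names.
sales = DataAsset("sales_q3", origin="crm-export",
                  classification="internal", readers={"analytics-team"})
record_use(sales, "forecast-model")
```

Even this toy version makes the four questions queryable; the hard part in practice is keeping the registry current as data is continuously generated and transformed.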
Service-as-Software and the New Risk Model
AI factories enable a shift from traditional software delivery models to what we describe as Service-as-Software. In this model, outcomes are delivered directly by intelligent systems, often with minimal human involvement.
This has profound implications for security. When systems act autonomously, the consequences of errors or malicious activity are magnified. Decisions are executed in real time, often without the opportunity for manual review.
As a result, security must extend beyond protecting systems to governing outcomes. Organizations must ensure that the actions taken by AI systems align with policy, regulatory requirements, corporate edicts, and business objectives.
The Economic Reality
The scale of investment in AI factories underscores the urgency of addressing these challenges. Data center spending is accelerating, with significant capital directed toward accelerated compute, high-speed networking, and supporting infrastructure. On average, organizations spend about 4% of their revenues on technology. We believe that over the next ten years this figure will expand into the double digits as organizations shift technology spend toward accessing intelligence in the form of tokens via APIs.
Security investment must keep pace. Many organizations continue to rely on fragmented tools and inconsistent governance models. This creates a mismatch between the scale of AI deployment and the maturity of security controls.
The result is an environment where risk accumulates faster than it can be managed.
Toward a Secure AI Factory Architecture
A secure AI factory requires an integrated approach that spans multiple layers.
At its core is a system of intelligence that models business state and provides context for decision-making. This system must integrate data, metadata, and policy into a unified framework. Above this sits a governance layer that enforces policies in real time, supported by observability mechanisms that provide continuous insight into system behavior.
Agent control frameworks are needed to manage autonomous systems, ensuring that tasks are executed within defined boundaries. The data plane must be secured to control how data flows through the system, while the observability fabric provides the visibility needed to detect and respond to anomalies.
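Executing tasks within defined boundaries can be made concrete with an allowlist wrapper: an agent may only invoke tools it has been explicitly granted, and every attempt – permitted or not – is written to the observability fabric. The agent and tool names below are hypothetical.

```python
class BoundaryViolation(Exception):
    """Raised when an agent attempts a tool outside its granted set."""

class BoundedAgent:
    def __init__(self, name: str, allowed_tools: set, audit_log: list):
        self.name = name
        self.allowed_tools = allowed_tools
        self.audit_log = audit_log   # shared observability fabric

    def invoke(self, tool: str, fn, *args):
        self.audit_log.append((self.name, tool))  # every attempt is visible
        if tool not in self.allowed_tools:
            raise BoundaryViolation(f"{self.name} may not use {tool}")
        return fn(*args)
```

Note that the attempt is logged before the boundary check: even refused actions leave a trace, which is what makes anomaly detection and response possible.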
This architecture is not optional. It is the foundation for operating AI systems at scale.

Action Items
Organizations should begin by treating AI security as a core infrastructure challenge rather than a tooling problem. This means prioritizing investments in data governance, observability, and control plane capabilities.
Security must become continuous, with policies enforced dynamically rather than through periodic reviews. Systems should be designed with failure in mind, incorporating mechanisms for detection, containment, and recovery.
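Designing with failure in mind can be as simple as a circuit breaker: after a threshold of anomalous actions, the system contains itself by refusing further execution until a human resets it. The threshold and the anomaly signal below are placeholders for whatever detection mechanism an organization actually deploys.

```python
class CircuitBreaker:
    """Trips after `threshold` anomalies; stays open until manually reset."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.anomalies = 0
        self.open = False            # open circuit = actions blocked

    def record(self, anomalous: bool) -> None:
        """Detection: count anomalous actions as they are observed."""
        if anomalous:
            self.anomalies += 1
            if self.anomalies >= self.threshold:
                self.open = True     # containment: stop executing

    def allow(self) -> bool:
        return not self.open

    def reset(self) -> None:
        """Recovery: requires deliberate human intervention."""
        self.anomalies = 0
        self.open = False
```

The three methods map directly onto detection, containment, and recovery; the deliberate asymmetry is that tripping is automatic while resetting is human.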
Finally, organizations must recognize that AI systems will evolve over time. Security frameworks must be adaptable, capable of responding to new threats and changing conditions.
Conclusion
The transition to AI factories represents a defining moment in enterprise technology. It offers unprecedented opportunities for innovation and efficiency, but it also introduces new risks that cannot be addressed using legacy approaches.
The organizations that succeed in this environment will be those that recognize the need for a new security paradigm. They will invest in control planes that govern data, models, and agents. They will build systems that are resilient, observable, and adaptable.
Most importantly, they will understand that securing the AI factory is not just about protecting infrastructure. It is about ensuring that the intelligence produced by these systems can be trusted.
If you cannot secure the AI factory, you do not control the outcome.

