Formerly known as Wikibon

The ART of Taming Agents: A CISO’s Framework for Managing Enterprise Risk in the Age of Agentic AI

The Current State of Agentic AI

Let’s be blunt: agentic artificial intelligence (AI) isn’t just the next evolution of generative AI, it’s a completely different beast. We’re moving from reactive chatbots to proactive, autonomous systems that can execute complex, multi-step tasks with minimal human oversight. While this creates incredible opportunities for innovation, it also rips a hole in traditional security playbooks. It creates a dynamic new attack surface defined not by static servers and endpoints, but by the autonomous actions and identities of non-human agents.

Your existing security perimeter and model-centric guardrails are not enough. This shift introduces a new class of high-impact threats, from sophisticated prompt injections and unauthorized tool misuse to agent memory poisoning and uncontrolled agent-to-agent communications. TL;DR: it’s a mess, and to get a handle on the chaos, enterprises need a structured, strategic approach.

The ART of Risk Management framework I proposed earlier this year, built on the core tenets of Avoid, Reduce, and Transfer, provides a practical, no-nonsense lens for this challenge. A layered defense is non-negotiable, and no single vendor has a silver bullet (or a single pane of glass) for this. Organizations must adopt a holistic methodology that integrates proactive prevention, robust detection and response, and smart risk distribution.

The ART of Agentic AI Security – Technology Categories

Agentic AI Security Technology Framework

Key technology categories organized by the Avoid, Reduce, and Transfer (ART) security framework.

Avoid

  • AI Security Posture Management (AISPM)
  • AI Bill of Materials (AI-BOM)
  • Automated Red Teaming
  • Input/Output Firewalls (AI Gateways)
  • Agent Identity Governance
  • Secure AI Development (AI SDLC)

Reduce

  • Runtime Observability & Threat Detection
  • Real-Time Data Loss Prevention (DLP)
  • Managed Detection & Response (MDR) for AI
  • Incident Response (IR) Services for AI

Transfer

  • AI Liability Insurance
  • Managed Services (MDR/MSSP)
  • “As-a-Service” Offerings

The security market is scrambling to respond, splitting into two camps. On one side, established giants like Palo Alto Networks, CrowdStrike, Zscaler, and Fortinet are aggressively retrofitting their platforms to extend identity management, cloud security, and data protection to this new domain. On the other, a vibrant ecosystem of AI-native startups, including Zenity, Mindgard, Lasso Security, and Straiker, is emerging with solutions built from the ground up to tackle the unique vulnerabilities of autonomous agents.

The CISO’s role has to evolve from a technical gatekeeper to a strategic business partner, embedding security into the agentic AI lifecycle to enable innovation, not inhibit it.

CISO Recommendations for Agentic AI Security

CISO-to-CISO Analysis: Agentic AI Security

This report dissects the new threats from agentic AI and provides a clear roadmap for secure adoption, structured through the ART (Avoid, Reduce, Transfer) framework.

Key Recommendations for Immediate Action

1. Prioritize “Shadow AI” Visibility

Immediately implement tools and processes to discover and monitor all unsanctioned AI usage across the enterprise. You cannot secure what you cannot see.

2. Govern Non-Human Identities

Extend modern identity and access management (IAM) frameworks to treat AI agents as a new class of non-human identity, enforcing the principle of least privilege.

3. Shape AI Liability Coverage

Start strategic conversations with your insurance providers now. Proactively shape the nascent market for AI-specific cyber liability coverage before an incident forces your hand.

Introduction: The New Frontier of Agentic AI Risk

For the last couple of years, all the chatter has been about generative models like OpenAI’s GPT and Google’s Gemini. But that was just the warm-up act. Agentic AI is a profound leap forward, moving beyond simple question-and-answer to a full cycle of perception, reasoning, action, and learning. These systems can pursue long-term goals and execute complex workflows on their own. This autonomy is the source of their transformative business potential and, for us, the origin of a formidable new risk profile.

Defining Agentic AI: More Than Just a Chatbot

Think of it this way: a generative AI is a brilliant intern who can answer your questions. An AI agent is that same intern, but you’ve given them the company credit card, the keys to the server room, and a mandate to “go fix the supply chain”. What could possibly go wrong?

Defining Agentic AI

Beyond Generative Chatbots

An AI agent is an intelligent software system designed to operate autonomously within a digital or physical environment to achieve specific objectives. Unlike a simple chatbot, which processes an input and provides an output, an agentic system is architected with distinct, interacting components that mimic cognitive functions. These typically include:

Perception

The ability to gather and process data from its environment, using sources like APIs, databases, IoT sensors, or user interactions to build contextual awareness.

Reasoning

A cognitive engine, often powered by one or more Large Language Models (LLMs), that interprets tasks, formulates plans, generates potential solutions, and coordinates the use of other tools or sub-agents.

Action

The capacity to execute its decisions by interacting with other systems. This is commonly achieved through API integrations with enterprise software, robotic process automation (RPA) tools, or external platforms.

Learning

A feedback mechanism, often based on reinforcement learning, that allows the agent to improve its performance over time by analyzing the outcomes of its actions, enabling it to adapt to new information and changing conditions without constant manual retraining.

This architecture allows a single agent to monitor support channels, identify a product issue, query a knowledge base, file a bug report, and draft a personalized response to customers, all without direct human guidance. It’s this capacity for multi-step, unsupervised action that defines the agentic paradigm and creates our new security challenge.
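The perceive-reason-act-learn cycle described above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not any vendor's framework: every class and function name here is hypothetical, and the "reasoning engine" is a stub standing in for an LLM call.

```python
# Minimal sketch of the perceive-reason-act-learn loop. All names are
# illustrative; no vendor SDK or real LLM is assumed.

class Agent:
    def __init__(self, plan_fn, tools):
        self.plan_fn = plan_fn   # reasoning engine (stands in for an LLM call)
        self.tools = tools       # allow-listed callables the agent may invoke
        self.memory = []         # feedback store used by the learning step

    def step(self, observation):
        plan = self.plan_fn(observation, self.memory)        # Reason
        outcome = self.tools[plan["tool"]](**plan["args"])   # Act
        self.memory.append({"plan": plan, "outcome": outcome})  # Learn
        return outcome

# Toy demo: an "agent" that files a bug report when it perceives an error.
def plan_fn(obs, memory):
    tool = "file_bug" if "error" in obs else "reply"
    return {"tool": tool, "args": {"text": obs}}

agent = Agent(plan_fn, {
    "file_bug": lambda text: f"BUG: {text}",
    "reply": lambda text: f"OK: {text}",
})
result = agent.step("error in checkout flow")  # perception is the input here
```

The point of the sketch is structural: the tool dictionary is the agent's entire action surface, which is exactly why tool access control dominates the threat discussion that follows.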

The New Attack Surface: A Taxonomy of Agentic Threats

A Taxonomy of Agentic Threats

The autonomy of AI agents creates a dynamic attack surface where the primary vulnerabilities lie not just in the AI model’s code, but in its behavior, permissions, and interactions. A comprehensive understanding of these threats is the first step toward effective risk management. These risks extend far beyond the well-documented OWASP Top 10 for LLMs and can be categorized as follows:

Input and Logic Manipulation

Prompt Injection and Intent-Breaking

Malicious actors can craft inputs that trick an agent into performing unintended actions, bypassing its safety guardrails, or manipulating its core goals. This is no longer a theoretical risk, with success rates for basic “jailbreak” attempts remaining significant even on commercial-grade agents.

Goal Manipulation

An advanced form of injection where an attacker subtly alters an agent’s long-term objectives, causing it to work toward malicious ends while appearing to operate normally.
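To make the defensive side concrete, a naive input screen might look like the sketch below. This pattern-matching approach is illustrative only: real AI gateways rely on trained classifiers and layered checks, because static patterns like these are trivially evaded.

```python
import re

# Illustrative-only input screen. Real gateways use trained classifiers,
# not pattern lists; these patterns are easy for an attacker to evade.
INJECTION_PATTERNS = [
    r"ignore (all |your )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(user_text: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```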

Resource and Tool Exploitation

Tool Misuse

Agents are granted access to external tools and APIs (e.g., sending emails, accessing databases, executing code). An attacker can manipulate the agent into misusing these tools, such as deleting critical records or exfiltrating confidential data through a legitimate API endpoint. In one benchmark, over 63% of open-source agents misused APIs when given ambiguous tasks.

Resource Overload and Runaway API Calls

An agent caught in an unintentional loop can generate a massive volume of API calls, leading to denial-of-service conditions or catastrophic, unpredictable costs. This “agentic traffic” often bypasses traditional infrastructure monitoring, creating a significant blind spot.
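One pragmatic mitigation is a hard budget wrapper around every outbound call, so a looping agent fails closed instead of silently running up costs. The sketch below is a simplified illustration with hypothetical names, not a production rate limiter.

```python
# Hedged sketch: a hard budget around an agent's outbound calls so a
# runaway loop fails closed. Names and limits are illustrative.

class CallBudget:
    def __init__(self, max_calls: int, max_cost: float):
        self.max_calls, self.max_cost = max_calls, max_cost
        self.calls, self.cost = 0, 0.0

    def charge(self, cost: float):
        self.calls += 1
        self.cost += cost
        if self.calls > self.max_calls or self.cost > self.max_cost:
            raise RuntimeError("agent budget exceeded; halting")

budget = CallBudget(max_calls=100, max_cost=5.00)

def guarded_api_call(fn, cost, *args, **kwargs):
    budget.charge(cost)  # fail closed before the call goes out
    return fn(*args, **kwargs)
```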

Memory and State Vulnerabilities

Memory Poisoning

Attackers can inject false or malicious information into an agent’s short-term or long-term memory. Once this corrupted data is stored, it can force the agent to make flawed decisions, ignore security protocols, or act against the user’s interests indefinitely.

Cascading Hallucination Attacks

In a multi-agent system, if one agent’s memory is poisoned or it begins to “hallucinate” (generate factually incorrect information), it can pass this flawed data to other agents. This can trigger a chain reaction, corrupting the decision-making of the entire system.
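A common mitigation pattern for these memory risks is provenance tagging: record where each memory entry came from and never surface facts from untrusted sources to the planner. The sketch below illustrates the idea with a hypothetical memory class and an assumed allow-list of trusted sources.

```python
# Hedged sketch: provenance-tagged agent memory. Entries from untrusted
# sources are retained for audit but never surfaced as trusted facts.
# The source names are assumptions for illustration.
TRUSTED_SOURCES = {"internal_kb", "verified_api"}

class ProvenanceMemory:
    def __init__(self):
        self.entries = []

    def store(self, fact: str, source: str):
        self.entries.append({"fact": fact, "source": source})

    def trusted_facts(self):
        return [e["fact"] for e in self.entries
                if e["source"] in TRUSTED_SOURCES]
```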

Identity and Access Risks

Identity Spoofing and Privilege Compromise

As agents become a new class of “non-human identity” on the network, they become targets for impersonation. An attacker who compromises an agent’s credentials can gain access to all the systems and data that agent is authorized to use. This is a critical vulnerability, as agents may require access to highly sensitive infrastructure to perform their tasks.

Insecure Agent-to-Agent (A2A) Communication

Emerging protocols like Model Context Protocol (MCP) and A2A are designed to facilitate collaboration between agents. However, these communication channels create new vectors for attack if not properly secured, allowing one compromised agent to influence or control others.
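The baseline control for A2A channels is making every message verifiable rather than implicitly trusted. The sketch below illustrates that principle with a shared-key HMAC; real deployments would use per-agent credentials from a vault or mutual TLS rather than a hardcoded key.

```python
import hashlib
import hmac
import json

# Hedged sketch: authenticating agent-to-agent messages. A shared key is
# used only to keep the example short; never hardcode keys in practice.
SECRET = b"per-agent-key-from-a-vault"  # placeholder

def sign_message(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_message(msg: dict) -> bool:
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])
```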

This isn’t just another app-sec problem. It’s an identity crisis, literally. The critical question is no longer just “What can the model know?” but “What is this autonomous entity allowed to do on our network?” This transforms the problem into an enterprise identity and access management (IAM) challenge. We’re already seeing this play out in the market. Palo Alto Networks has explicitly linked agent security to identity, even exploring acquisitions of IAM leaders. Similarly, CrowdStrike is extending its Falcon platform to govern the non-human identities (NHIs) of AI agents. The future of agentic AI security lies in platforms that unify application security, zero trust, and robust identity governance.

The ART of Risk Management: A Strategic Lens for a New Era

The ART of Risk Management Funnel

The complexity of agentic threats demands a structured framework. The ART of Risk Management provides this essential lens, organizing security strategies into three logical pillars.

Total Risk enters the funnel and passes through three successive filters:

Avoid (Risk Avoidance)

“Building fences before the cattle escape.” These are proactive measures to prevent incidents, like securing the AI development lifecycle and implementing strong access controls.

Reduce (Risk Reduction)

“Limiting the blast radius when an incident happens.” This includes real-time threat detection, rapid autonomous response, and robust incident management.

Transfer (Risk Transfer)

“Knowing who else is on the hook.” This involves shifting the financial and operational burden through specialized cyber insurance and outsourced security functions.

What remains at the bottom of the funnel is Accepted Risk.

By applying this framework, we can move beyond a reactive, tool-based approach and build a comprehensive, defense-in-depth strategy for the age of agents. It also forces leadership teams to formally document the residual risk they ACCEPT for a given project.

Applying Risk Avoidance to Agentic AI: Building Proactive Defenses

The cheapest breach is the one that never happens. For agentic AI, risk avoidance means building a robust set of proactive defenses that secure the agent’s entire lifecycle, from its underlying model to the governance of its real-time actions.

| Technology Category | Description | Key Vendors | Primary Risks Mitigated |
| --- | --- | --- | --- |
| AI Security Posture Management (AISPM) | Discovers and assesses AI assets, configurations, and models for vulnerabilities and governance gaps. | CrowdStrike, Zenity, Check Point, Wiz, Axonius, WitnessAI | Shadow AI, Misconfigurations, Sensitive Data Exposure, Model Vulnerabilities |
| AI Bill of Materials (AI-BOM) | Provides visibility into the entire AI supply chain, including open-source components and external AI services. | Hopper Security | Supply Chain Attacks, Vulnerable Dependencies, Model Poisoning |
| Automated Red Teaming | Simulates adversarial attacks against AI models to proactively identify and remediate vulnerabilities before deployment. | Mindgard, CalypsoAI, CrowdStrike, Straiker | Jailbreaking, Prompt Injection, Model Extraction, Evasion Attacks |
| Input/Output Firewalls (AI Gateways) | Inspects all prompts and responses in real time, blocking malicious inputs and filtering harmful outputs. | Akamai, Zscaler, Lasso Security, Cloudflare | Prompt Injection, Data Leakage, Toxic Content, PII Exposure |
| Agent Identity Governance | Establishes and manages a verifiable identity for each agent, enforcing fine-grained, context-aware access policies. | Palo Alto Networks, Astha.ai, CrowdStrike, Delinea, Zero Networks, Andromeda, Oasis Security, Blue Flag | Privilege Compromise, Identity Spoofing, Tool Misuse, Unauthorized Access |
| Secure AI Development (AI SDLC) | Integrates security scanning and vulnerability remediation directly into the development workflow for AI-assisted coding. | Checkmarx, Lasso Security, Pensar, Minimus, RapidFort, Cycode, Lineaje | Insecure AI-Generated Code, Vulnerable Dependencies, Secrets Exposure |
Table: Summary of Agentic AI Risk Avoidance Solutions

Securing the Agent’s Foundation: AISPM and Model Integrity

Before you trust an agent, you have to vet its foundation. AI Security Posture Management (AISPM) has emerged to provide continuous visibility into your entire AI landscape, discovering unsanctioned “shadow AI,” assessing configurations, and identifying risks like sensitive data exposure. CrowdStrike has integrated AI-SPM into its Falcon platform, Wiz helps eliminate risks in cloud and AI environments, and Axonius provides visibility to uncover shadow AI. WitnessAI also provides security and governance controls to monitor AI activity and risk.

Understanding the AI supply chain is also critical. Hopper Security addresses this with its AI Bill of Materials (AI-BOM), giving teams visibility into every component, from open-source packages to embedded models. Proactive testing is also essential. Automated Red Teaming platforms like Mindgard and Straiker stress-test AI models against thousands of attack scenarios, including jailbreaking and data poisoning, hardening them before they ever see the real world.

Controlling the Conversation: AI Gateways

Once an agent is deployed, you need a layer of defense to inspect all communications in real time. This is the role of AI Gateways or AI Firewalls, a market segment now recognized by Gartner. These solutions act as a secure proxy to enforce security policies. Akamai’s Firewall for AI detects and blocks malicious inputs while filtering responses to prevent data leakage or toxic content. Cloudflare’s AI Gateway offers similar capabilities with added reliability features, while Zscaler provides “AI Guardrails” through its Zero Trust Exchange to block unsanctioned AI apps and prevent PII exfiltration. Startups like Lasso Security offer gateways that can be integrated with a single line of code to block, mask, and log violating interactions.

Governing Agent Identity: Zero Trust for Non-Human Actors

Perhaps the most profound challenge is the creation of a new class of autonomous, non-human actors. Securing them is fundamentally an identity problem requiring a zero-trust architecture. The conceptual framework for this is the “Triangle of Trust”: a verifiable identity for every agent, fine-grained access policies, and continuous behavioral monitoring. Zero Networks exemplifies this by automating least-privilege access through identity-based microsegmentation to prevent lateral movement. Oasis Security specializes in this area, offering an enterprise platform for managing the entire lifecycle of NHIs.
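In practice, the "fine-grained access policies" leg of that triangle reduces to a default-deny authorization check on every tool invocation. The sketch below shows the idea with a hypothetical policy table; real platforms add context (time, data sensitivity, risk score) to the decision rather than a static set lookup.

```python
# Hedged sketch of least-privilege tool authorization for agent identities.
# The policy shape and agent/tool names are illustrative, not any vendor's
# schema.
POLICIES = {
    "support-agent-01": {"read_kb", "file_bug", "draft_reply"},
}

def authorize(agent_id: str, tool: str) -> bool:
    # Default-deny: unknown agents and unlisted tools are both refused.
    return tool in POLICIES.get(agent_id, set())
```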

Leading vendors are building this vision into their platforms. Palo Alto Networks’ Prisma AIRS is a SaaS platform designed to secure agents at runtime by preventing tool misuse and memory poisoning through strict identity controls. Delinea enhances this with an AI-powered platform that delivers real-time intelligent authorization, adjusting access dynamically based on risk. Andromeda provides an AI-powered identity security platform that automates the maintenance of least privilege. Zscaler is focused on securing agent-to-agent (A2A) communication, extending its zero-trust platform to govern traffic over new inter-agent protocols.

Securing AI-Driven Development: Shifting Left

AI is also transforming software development, but AI code assistants can generate insecure code. Security must be embedded directly into the developer’s workflow. Checkmarx One Assist provides AI security agents that operate within the developer’s IDE, preventing and fixing vulnerabilities in real time. Cycode provides an AI-native Application Security platform that integrates scanning directly into CI/CD pipelines. Lineaje AI uses agentic AI to autonomously find and fix software supply chain risks.

Other startups focus on reducing the attack surface from the start. Minimus provides secure, minimal container images that eliminate over 95% of common vulnerabilities, while RapidFort removes unused components to shrink the attack surface.

What we’re seeing is a new security stack forming right before our eyes, mirroring the structure of traditional network security. AI Gateways are the new NGFWs, AISPM is the new Vulnerability Management, and Agent Identity Governance is the successor to IAM. For CISOs, this is good news. It means you can map your existing playbook onto this new battlefield and suggests a future market where integrated “AI Security Platforms” will hold a significant long-term advantage.


Applying Risk Reduction: Limiting the Blast Radius

Let’s be real: your defenses will fail. Someone, or something, will get through. Risk reduction focuses on minimizing the damage when a preventive control is bypassed. For agentic AI, this means detecting malicious behavior in real time, containing the threat before it can spread, and responding effectively.

| Technology Category | Description | Key Vendors | Primary Risks Mitigated |
| --- | --- | --- | --- |
| Runtime Observability & Threat Detection | Provides deep, real-time visibility into agent behavior, execution paths, and tool usage to detect anomalies. | Zenity, Palo Alto Networks, Darktrace, Dune Security, Wiz, Delinea, Elastic Security, Bitdefender, Skyhawk, ExtraHop, Varonis, Straiker | Tool Misuse, Privilege Compromise, Memory Poisoning, Goal Manipulation, Insider Threats |
| Real-Time Data Loss Prevention (DLP) | Monitors and blocks the exfiltration of sensitive data by or through AI agents and generative AI applications. | CrowdStrike, Zscaler, Lasso Security, Forcepoint, Seclore | Data Leakage, PII Exposure, Intellectual Property Theft |
| Managed Detection & Response (MDR) for AI | Provides 24/7 outsourced monitoring, threat hunting, and incident response tailored to the AI attack surface. | CrowdStrike, SentinelOne, Sophos, Arctic Wolf, Coalfire, Deepwatch | Advanced Persistent Threats (APTs), Sophisticated Evasion Techniques, Skill Gaps |
| Incident Response (IR) Services for AI | Offers expert-led services to contain, investigate, and remediate complex security incidents involving AI systems. | CrowdStrike, Optiv, Deloitte | Active Breaches, System Compromise, Regulatory Reporting |
Table: Summary of Agentic AI Risk Reduction Solutions

Real-Time Observability and Runtime Threat Detection

To reduce risk, you have to see what your agents are doing in real time. Zenity has built its platform around agent-centric observability, tracking behavior and intent to detect malicious outcomes. Wiz provides runtime observability for AI models in production to monitor for anomalies like toxic outputs, and Palo Alto Networks’ AI Runtime Security establishes intelligent behavior profiles to flag anomalies in real time. Varonis AI Security monitors how AI interacts with data and uses behavioral analytics to detect deviations from normal activity.

Darktrace offers a unique approach with its self-learning AI, which learns the unique “pattern of life” for an organization’s environment. This allows it to detect subtle deviations that signal a threat and take autonomous action to contain it. This is a critical capability when dealing with machine-speed attacks. Delinea uses AI to surface anomalies in privileged sessions, while vendors like ExtraHop provide network detection and response (NDR) to uncover evasive tactics. Endpoint leaders like Bitdefender extend EDR and XDR to provide runtime threat detection for the systems where agents operate, and Elastic Security is exploring how to embed security directly into LLM workflows.
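The "pattern of life" idea can be illustrated with a simple per-agent frequency baseline: any tool call that is new or historically rare gets flagged for review. Commercial platforms use far richer behavioral models; this sketch only shows the shape of the check, and the threshold is an arbitrary assumption.

```python
from collections import Counter

# Hedged sketch: flag tool calls that deviate from an agent's observed
# "pattern of life". Real platforms model sequences, arguments, and
# timing, not just frequencies.
class ToolBaseline:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def record(self, tool: str):
        self.counts[tool] += 1
        self.total += 1

    def is_anomalous(self, tool: str, min_share: float = 0.01) -> bool:
        # Anomalous if the tool is new or historically rare for this agent.
        if self.total == 0:
            return True
        return self.counts[tool] / self.total < min_share
```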

Preventing Data Exfiltration in Real Time

One of the biggest risks is large-scale data leakage. Real-time Data Loss Prevention (DLP) controls are essential. CrowdStrike’s Falcon Data Protection provides visibility into employee interactions with GenAI tools and can actively block the uploading of sensitive documents. Zscaler’s platform offers similar AI Data Protection to block data sharing to and from AI applications. Forcepoint’s Data Security Cloud unifies DSPM, DLP, and CASB to safely enable GenAI, and Seclore provides data-centric security, applying persistent protection to digital assets.
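At its simplest, an outbound DLP check scans an agent's response for sensitive patterns before it crosses the boundary. The regexes below are illustrative; production DLP combines classifiers, data labels, and policy context rather than pattern matching alone.

```python
import re

# Hedged sketch of an outbound DLP check: redact obvious PII patterns
# from an agent's response before it leaves the trust boundary. These
# two patterns are illustrative, not a complete PII catalog.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text
```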

Leveraging Managed Security Services for AI Expertise

The rapid evolution of agentic AI has created a massive skills gap. Managed Security Service Providers (MSSPs) and MDR providers play a critical risk-reduction role by augmenting internal teams with advanced technology and seasoned threat hunters. Specialist MSSPs like Optiv are building dedicated AI Security practices, and firms like Coalfire offer AI risk management and penetration testing services. Leading MDR providers like Arctic Wolf, SentinelOne, and Sophos are extending their services to cover the AI attack surface. Deepwatch’s Guardian MDR platform is another example, using a combination of human expertise and AI to provide 24/7 protection. CrowdStrike’s Falcon Complete is being expanded to provide 24/7 expert monitoring for threats involving AI agents.

Human-led SOCs are about to hit a brick wall. You can’t fight machine-speed attacks with human-speed clicks. The only viable counter is an AI-driven defense that also operates at machine speed. This reality is accelerating the shift toward the “autonomous SOC” and creating a new market for specialized MSSPs who can deliver “AI Security as a Service,” allowing enterprises to outsource the complex, continuous monitoring of their agentic fleet.


Applying Risk Transfer: Sharing the Financial and Operational Burden

The final pillar, Risk Transfer, is your financial and operational backstop. It involves strategically shifting some of the consequences of a security failure to a third party, primarily through specialized insurance and outsourcing high-risk functions.

| Mechanism | Description | Key Providers/Market Players | Type of Risk Transferred |
| --- | --- | --- | --- |
| AI Liability Insurance | Financial instruments designed to cover costs arising from an AI-related security incident or failure. | AIG, AXA XL, Corvus by Travelers | Financial Risk (Legal Fees, Data Recovery, Business Interruption) |
| Managed Services (MDR/MSSP) | Outsourcing 24/7 security monitoring, threat hunting, and incident response to a third-party provider. | CrowdStrike, SentinelOne, Sophos, Optiv, Arctic Wolf, Deepwatch | Operational Risk (Failure to Detect/Respond, Staffing, Expertise) |
| “As-a-Service” Offerings | Contracting with specialized firms for high-stakes security functions like red teaming or incident response. | CrowdStrike (AI Red Team), Mandiant, Coalfire, Deloitte | Operational Risk (Failure to Identify Vulnerabilities, Ineffective Incident Containment) |
Table: Summary of Agentic AI Risk Transfer Solutions

The Nascent Market for AI Liability Insurance

Agentic AI introduces novel liabilities that traditional insurance policies weren’t designed to cover. An agent that hallucinates and provides harmful advice, an autonomous system that causes financial loss, or a model that exhibits bias could all lead to significant legal repercussions. This has created a critical need for specialized AI liability insurance.

The market is in its early stages, but leading cyber carriers like AIG, AXA XL, and Corvus by Travelers are beginning to adapt their policies. Enterprises should be in active discussion with their brokers to understand how their current policies would respond to an AI-driven event and what new endorsements are available.

Outsourcing Risk Through Specialized Services

Beyond insurance, you can transfer operational risk by outsourcing highly specialized functions. Hiring CrowdStrike for its AI Red Team Services transfers the risk of failing to identify critical vulnerabilities before an attacker does. Engaging a firm like Coalfire or Deloitte for AI penetration testing shifts the burden of identifying model-specific risks to outside specialists.

Similarly, contracting with a leading MDR provider like SentinelOne or Sophos is a powerful form of operational risk transfer. You are shifting the burden of detecting and containing a sophisticated breach to a specialist provider, a strategic necessity when facing machine-speed attacks.

Pay attention, because the insurance industry is about to become your new CISO. As the AI insurance market matures, insurers will begin to mandate specific security controls (AISPM, runtime monitoring, and AI gateways) as prerequisites for coverage, just as they did with MFA and EDR. This will transform these controls from “best practices” into financial necessities, creating a powerful business case for investing in a comprehensive, ART-based security strategy.

Strategic Outlook and Recommendations for the Enterprise

Securing agentic AI isn’t about finding a silver bullet. It requires a layered, defense-in-depth strategy, as encapsulated by the ART framework. Avoidance reduces the likelihood of an incident, Reduction limits the damage if one occurs, and Transfer provides a financial and operational safety net.

When evaluating vendors, look beyond the “AI-powered” sticker slapped on every product. The market is noisy. Incumbents like Palo Alto Networks and CrowdStrike offer the promise of an integrated platform, while agile startups like Zenity and Mindgard offer deep, purpose-built solutions. Scrutinize their specific capabilities and map them to the ART framework to distinguish hype from reality.

Consider the practical scenario of deploying an AI agent to automate customer service and sales order processing. A layered ART strategy would look like this:

  • Avoid: Before deployment, the underlying LLM is tested by an automated red teaming platform like Mindgard to find and fix prompt injection vulnerabilities. The agent’s configuration and permissions are scanned by CrowdStrike’s AISPM to ensure it follows the principle of least privilege. An AI gateway from Akamai is placed in front of the agent to inspect all incoming customer queries in real time, blocking malicious inputs. Checkmarx One Assist is used to ensure the code connecting the agent to the ERP system is free of vulnerabilities.
  • Reduce: Once live, the agent’s behavior is continuously monitored by a runtime observability platform like Zenity, which establishes a baseline of normal activity. If the agent suddenly starts trying to access unusual customer records or misuse the ERP API, Darktrace’s autonomous response can instantly sever its connection to critical systems, containing the threat. Zscaler’s real-time DLP prevents any sensitive customer PII from being included in the agent’s responses. The entire incident is flagged in the SOC, where an MDR service from SentinelOne begins an immediate investigation.
  • Transfer: The financial impact of any data that was exposed before containment is mitigated by a cyber liability policy from AIG or AXA XL, which covers the costs of customer notification, credit monitoring, and potential legal fees. The operational burden of the 24/7 monitoring and initial response was transferred to the MDR provider, allowing the internal team to focus on strategic remediation and communication. Alternatively, ‘active insurance’ solutions like Coalition offer a combination of MDR and insurance.

A Roadmap for CISOs: Secure AI Adoption

To lead your organization through this transition, here is a concrete roadmap:

  1. Establish an AI Governance Committee: This isn’t just a security problem, it’s an enterprise risk challenge. Form a cross-functional committee with representatives from legal, risk, privacy, and key business units to create and enforce clear policies.
  2. Get a Grip on “Shadow AI.” Now: You can’t secure what you can’t see. Your first technical priority must be to gain comprehensive visibility into all AI usage across the enterprise. Deploy AISPM or similar discovery tools to map your landscape.
  3. Extend Your Zero Trust Architecture to Agents: Treat AI agents as a new identity category. The principles of Zero Trust ("never trust, always verify") and strict least privilege are perfectly suited for managing them. Integrate agent identity into your broader IAM and segmentation strategies.
  4. Engage Your Insurance Broker and Legal Counsel Early: Don’t wait for an incident. Review your existing cyber liability policy now. Understand its limitations regarding AI and inquire about new, specialized products.
  5. Invest in Continuous Training: The human element remains critical. Educate developers on secure AI coding practices and train all employees on the dangers of sharing sensitive data with public generative AI tools, leveraging platforms like KnowBe4. Certifications like ISC2’s “Securing AI: Cybersecurity Strategy” badge can also help formalize expertise within your teams.

By adopting this strategic, framework-driven approach, organizations can move forward with confidence, embracing the transformative power of agentic AI not as an unmanageable risk, but as a secure and well-governed engine for growth.
