Formerly known as Wikibon

Research Note: Bridging the Security and Data Gap

Collaboration Between Security and Data Teams for Secure AI Development

This research note is intended to guide security and data teams in navigating the complexities of AI security, promoting proactive collaboration for the secure development of AI-driven technologies.

As organizations rapidly adopt AI technologies to enhance business operations, the gap between data and security teams has become a growing concern. The integration of AI models, particularly Large Language Models (LLMs), into business-critical systems introduces unique security risks. In a previous theCUBE Research exclusive conversation with Kevin Mandia, CEO of Mandiant (now part of Google Cloud), we explored how companies must adapt their cybersecurity strategies in response to these evolving threats.

This note examines the key challenges, collaborative opportunities, and strategies that security and data teams must employ to develop secure AI solutions, drawing insights from our industry research and frameworks like Open Worldwide Application Security Project’s (OWAS) Top 10 for LLMs and NIST’s AI security guidelines.

The Security Risks of AI Integration

AI systems, particularly LLMs, are vulnerable to various attack vectors, including prompt injection, model extraction, and context poisoning. These vulnerabilities can result in malicious actors manipulating models to generate harmful outputs, steal sensitive data, or disrupt business operations. As highlighted by our team, community experts, and Mandia, nation-state actors such as China have significantly advanced their offensive cyber capabilities, making the threat landscape increasingly complex.

Some key risks identified by OWASP include:

  • Prompt Injection Attacks: Attackers manipulate LLMs through carefully crafted inputs, causing unintended actions such as data leaks or unauthorized operations. For example, adversaries can feed inputs designed to override intended controls, leading to system compromise.
    Supporting Insight: These kinds of attacks have been observed in security incidents where malicious actors compromise AI systems by embedding code or commands in the input fields of chatbots, causing the system to leak sensitive information or execute unauthorized actions. For instance, leveraging techniques like “injection chaining” makes these attacks especially dangerous.
  • Training Data Poisoning: Malicious actors tamper with AI training data to introduce vulnerabilities, biases, or backdoors, affecting the integrity of the models. This could lead to long-term systemic vulnerabilities within AI-driven processes, affecting critical business decisions.
    Supporting Insight: Poisoned training datasets have been shown to create model biases, leading to discriminatory results in sectors like finance or healthcare. Threat actors target the integrity of training data by subtly modifying a small fraction of training inputs, allowing them to affect outcomes without detection.
  • Sensitive Information Disclosure: LLMs may inadvertently reveal confidential information through their outputs, leading to privacy breaches. This is particularly critical when LLMs are trained on large, mixed datasets that could include sensitive or proprietary information.
    Supporting Insight: In several documented cases, LLMs inadvertently memorized sensitive training data (such as user passwords or confidential corporate information) and reproduced it during interaction. Implementing secure data-handling processes during training can mitigate this risk.

Collaboration Between Data and Security Teams

To mitigate these risks, security and data teams must work together throughout the AI development lifecycle. As we pointed out in this note, collaboration is essential in addressing the asymmetry between cyber offense and defense. AI can reduce this asymmetry by enhancing detection and response capabilities, but only if security considerations are embedded in the AI design and deployment process.

Key areas for collaboration include:

  • Early Security Involvement: Security teams should be involved from the earliest stages of AI model development, ensuring that risk assessments and threat modeling are integral parts of the design process.
    Actionable Step: Establish cross-functional development teams that include both security experts and data scientists. Involve security teams during the architecture design phase to perform threat modeling specific to AI systems, ensuring that vulnerabilities are addressed before deployment.
  • Continuous Monitoring and Feedback: AI models are dynamic and require ongoing monitoring to identify emerging vulnerabilities. Security and data teams must establish feedback loops to address vulnerabilities as they arise, such as during model updates or after incidents.
    Actionable Step: Implement continuous monitoring tools that automatically scan AI models for anomalies and suspicious behaviors. This includes setting up real-time alerts for unusual patterns, which can signal a compromised model.
  • Guardrails and Input/Output Controls: Both security and data teams should implement guardrails to ensure models handle inputs and outputs securely. This includes techniques like reinforcement learning with human feedback (RLHF) and input sanitization to prevent attacks such as prompt injections.
    Actionable Step: Use AI-specific security layers, such as input validation and output filtering mechanisms, to prevent exploitation. RLHF can be leveraged to teach models to reject suspicious inputs, while automated output sanitization prevents data leakage.
  • Cross-Team Training: Security teams should educate data scientists and engineers on secure coding practices, while data scientists can inform security teams about AI-specific risks, such as the implications of model overfitting or memorization of sensitive data.
    Actionable Step: Organize regular joint workshops and training sessions where both teams can learn about each other’s domains. Create a knowledge-sharing platform where both teams can post guidelines and incident learnings in real-time.

Strategic Recommendations

  • Adopt AI-Specific Security Frameworks: Utilizing frameworks like OWASP’s Top 10 for LLMs and NIST’s AI security standards can provide structured guidance on securing AI models. These frameworks highlight key vulnerabilities and offer practical mitigation strategies, from secure plugin design to protecting against excessive model agency.
    Actionable Step: Conduct a thorough review of AI projects against the OWASP Top 10 and NIST guidelines at the start and during key project milestones. Ensure that all team members are familiar with these frameworks, and create compliance checklists based on these recommendations.
  • Integrate AI-Driven Cybersecurity Solutions: AI can help accelerate breach detection and response times, as Mandia emphasized in our conversation. Organizations should explore AI-powered cybersecurity tools, such as those that leverage LLMs to scan for anomalies in real-time, reducing the operational toil of responding to security incidents.
    Actionable Step: Pilot AI-driven security platforms like Mandiant’s real-time monitoring tools or Google Cloud’s Chronicle with LLM integrations. These tools can drastically reduce detection and response times by flagging unusual behaviors across networks.
  • Cross-Departmental Governance: Establishing governance models that involve both data and security teams can enhance accountability. These models should define roles and responsibilities, particularly concerning model updates, patching vulnerabilities, and responding to incidents.
    Actionable Step: Set up a joint governance board that regularly reviews AI model performance and security postures. Define clear protocols for decision-making in case of security breaches or when vulnerabilities are found in deployed models. This governance body should also oversee audit trails and model documentation to ensure regulatory compliance.

Summary

As AI systems continue to evolve, bridging the gap between security and data teams is essential for the secure development of AI-driven technologies. By collaborating early and continuously throughout the AI lifecycle, organizations can reduce security risks and ensure that AI models contribute to, rather than compromise, business operations. Following the strategic recommendations outlined in this research note will help organizations proactively address vulnerabilities, leveraging both human expertise and AI-powered tools to secure their most critical systems.

Our goal at theCUBE Research is to provide the commentary ,video, event coverage and the research data to the innovative enterprises so they can confidently navigate the introduction of new technologies without compromising their organization’s security posture.

-John Furrier, theCUBE Research, X/Twitter: @furrier, LinkedIn: https://www.linkedin.com/in/furrier/


Thank you for reading.

If you appreciate our research, we encourage you to engage with it—share, comment, direct message us, or tell a colleague. You can also become a part of our growing CUBE Collective, a community of expert contributors dedicated to amplifying innovation and fostering collaboration.

At theCUBE Research, we are committed to providing high-quality, free research content on this site to serve our growing community. Over the past 15 years, we’ve worked to elevate the conversation around technology trends and innovation, and we’re honored to be recognized globally for our dedication. Our goal is to stay humble and aligned with our audience and top innovators, fostering trust through insightful, reliable content.

If you want to be part of our community just engage or if you’re an expert and/or and active technology focused content creator then join our growing Cube Collective.

What is theCUBE Collective

TheCUBE Collective is a unique, open community designed to elevate high quality content creators by collaborating with and amplifying their work. At its core, theCUBE Collective applies open-source principles to content creation, enabling creators to share their work freely while leveraging the reach of a global platform.

For creators, the Collective provides unparalleled visibility across theCUBE’s extensive network of tech coverage, interviews, and research publications. It offers an opportunity to showcase high-quality content and gain exposure to an audience that values expert insight and innovation.

For our audience, theCUBE Collective offers free access to valuable content from thought leaders and experts—no paywalls. This curated collection of expert interviews and research ensures professionals stay connected to cutting-edge developments in technology.

In a nutshell, theCUBE Collective is a win-win platform where creators gain recognition and audiences receive high-quality insights, all within a community-driven environment.

Article Categories

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
"Your vote of support is important to us and it helps us keep the content FREE. One click below supports our mission to provide free, deep, and relevant content. "
John Furrier
Co-Founder of theCUBE Research's parent company, SiliconANGLE Media

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well”

Book A Briefing

Fill out the form , and our team will be in touch shortly.
Skip to content