
Four Generative AI Cyber Risks that Keep CISOs Up at Night — and How to Combat Them

In this episode of the SecurityANGLE, host Shelly Kramer, managing director and principal analyst at theCUBE Research, is joined by analyst, engineer, and theCUBE Collective community member Jo Peterson for a conversation about the top four generative AI cyber risks that keep CISOs up at night. In addition to discussing the evolution of the AI threat and generative AI cyber risks, we cover cybersecurity best practices for using generative AI and highlight some vendors and solutions in the AI security space that we think you should know about.

Let’s start with some backstory. According to a Riskonnect survey of 300 risk and compliance pros, a whopping 93% of companies anticipate significant threats associated with generative AI, but (gulp) only 17% of companies have trained or briefed their entire organization on gen AI risks. Even more alarming: only 9% say they are prepared to manage the risks that come with adopting gen AI. Why are these numbers so low in the face of the risks associated with gen AI? Likely because, while AI is all the rage these days, we are still in the early days: people might be thinking about risks, but they likely aren’t yet feeling the impact of those risks.

Here’s more reality: Generative AI is expected to reach some 77.8 million users in 2024, which is more than double the adoption rate of smartphones and tablets over a comparable period of time. To our way of thinking, these adoption numbers, combined with a wait-and-see attitude, are a risky business strategy—or perhaps best characterized as no strategy at all.

Similar research from ISACA, published in the fall of 2023 and surveying some 2,300 pros working in risk, security, audit, data privacy, and IT governance, showed that a measly 10% of companies had developed a comprehensive generative AI policy. Shockingly, more than a fourth of respondents had no plans to develop one.

Our List of Top Four Generative AI Cyber Risks

This leads us to our conversation today, and our list of the top four generative AI cyber risks that we know are keeping CISOs up at night. We’ve narrowed our list to the four risks we feel are most pressing, which include:

  Model Training and Attack Surface Vulnerabilities

  Data Privacy

  Corporate IP Exposure

  Generative AI Jailbreaks and Backdoors 

Watch the full episode of Four Generative AI Cyber Risks that Keep CISOs Up at Night here, or stream it wherever you listen to podcasts:

Model Training and Attack Surface Vulnerabilities

Data is collected throughout an organization in various ways. In many instances, that data is unclean, poorly managed, and often underutilized. Generative AI models also store this data for unspecified periods of time, often in environments that aren’t secure. This combination is dangerous and can open the door to unauthorized data access and manipulation. It can also introduce bias, which is equally problematic.

Data Privacy

The framework around data collection is … thin, and all too often almost nonexistent. The same is true of the rules around the type of data that can be fed into generative AI models. The challenge here is that without an enforceable data exfiltration policy, models can learn private corporate information and replicate it in their output. And yes, you guessed it, that’s a data breach just waiting to happen.
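
What might an enforceable control look like in practice? Here’s a minimal, purely illustrative Python sketch of a pre-submission filter that redacts sensitive patterns before a prompt ever leaves the corporate boundary. The patterns and the redact_prompt helper are hypothetical stand-ins; a real deployment would lean on a DLP engine’s classifiers rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would
# use a DLP engine's classifiers rather than hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_project": re.compile(r"\bPROJECT-[A-Z0-9]{4,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt
    leaves the corporate boundary, returning findings for the audit log."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Summarize notes for PROJECT-ATLAS9, SSN 123-45-6789.")
print(clean)  # Summarize notes for [REDACTED-INTERNAL_PROJECT], SSN [REDACTED-SSN].
print(hits)   # ['ssn', 'internal_project']
```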

Corporate IP Exposure

Corporate data privacy is foundational to business success. Without a strategic, well-thought-out policy around generative AI and corporate data privacy, it is not uncommon for models to be trained on corporate codebases. This can result in the exposure of intellectual property, API keys, and other sensitive corporate information.
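
On the codebase side, a simple pre-flight secret scan can catch the worst offenders before source files are shared with an external AI service. The sketch below is a hypothetical, simplified example assuming nothing beyond the Python standard library; real teams typically rely on dedicated secret scanners wired into pre-commit hooks.

```python
import math
import re

# Hypothetical heuristics for illustration; production teams typically
# use dedicated secret scanners instead of simplified logic like this.
KEY_LIKE = re.compile(
    r"(AKIA[0-9A-Z]{16})"      # AWS-style access key ID
    r"|(sk-[A-Za-z0-9]{20,})"  # common "sk-" API-key prefix
)

def shannon_entropy(token: str) -> float:
    """Bits per character; long random secrets score noticeably higher
    than ordinary identifiers in source code."""
    probs = [token.count(c) / len(token) for c in set(token)]
    return -sum(p * math.log2(p) for p in probs)

def flag_secrets(source: str) -> list[str]:
    """Return suspicious tokens so they can be stripped before a file is
    pasted into, or used to train, an external generative AI model."""
    flagged = [m.group(0) for m in KEY_LIKE.finditer(source)]
    # Also flag long, high-entropy tokens that look like random secrets.
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{32,}", source):
        if shannon_entropy(token) > 4.5 and token not in flagged:
            flagged.append(token)
    return flagged
```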

Generative AI Jailbreaks and Backdoors

Generative AI guardrails, the limits that AI developers put on their language models to prevent them from producing dangerous, biased, anti-Semitic, or racist output, to name just a few examples, are meant to protect organizations. Until they don’t.

So how and why are AI guardrails being circumvented? The easy answer: because they can be!

In the summer of 2023, researchers from Carnegie Mellon University and the Center for AI Safety announced they had found a way to overcome the guardrails of virtually every major LLM, getting models to do what they want, including engaging in racist or sexist dialogue, writing malware, and otherwise serving nefarious purposes. They found that fooling an LLM is not all that difficult, and that online forums and hacker tools make it easy to find tips and tricks for circumventing established generative AI guardrails. These techniques are often called “jailbreaks,” and attackers use them to launch targeted attacks and/or generate deceptive content.

Cybersecurity Best Practices for Using Generative AI

Now that we’ve made you nervous, we will share some cybersecurity best practices for generative AI. The four best practices we suggest are:

  Build an AI Governance Plan in Your Organization

  Train Your Employees, Create a Culture of AI Knowledge

  Discover and Classify Corporate Data

  Understand How Your Data Governance and Security Tools Work Best Together

With that introduction, let’s dive in.

Build an AI Governance Plan in Your Organization

The process of building technical guardrails around how an organization deploys and engages with AI tools is called AI governance. When researching this topic, I came across the Artificial Intelligence Governance and Auditing (AIGA) program, an undertaking of the University of Turku. The AIGA program is a partner network of academic and industry organizations created to study and develop governance models for AI, as well as the services and business ecosystem emerging around responsible AI.

The AI governance framework they have developed consists of three layers: environmental, organizational, and the AI system itself. Each layer contains a set of governance components and processes linked to the AI system lifecycle. It’s definitely worth checking out.

Building an AI governance framework is a strategic undertaking that starts where you might expect: with an assessment of your organization’s unique needs. At the top of the list is an assessment of your organization’s ability to safely and responsibly handle sensitive data. In this exercise, transparency and algorithm regulation are important, as is accountability within the team and the organization. Auditability is also a critical part of the equation, as is ensuring there’s a process in place for ongoing monitoring and adaptation. I think of it as the universal formula: launch, monitor, measure, tweak, monitor some more, measure again, tweak more, and so on, ad infinitum.

AI governance is critical if you’re using generative AI throughout the organization. Applied at the code level, effective AI governance helps organizations observe, audit, manage, and limit the data going into and out of AI systems. Today that is table stakes.
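
To make that concrete, here is one hedged illustration of what observing and auditing at the code level might look like: a thin wrapper that logs metadata about every model call to an audit trail. This is a minimal sketch, and the model_client object and its complete method are stand-ins for whatever SDK your organization actually uses, not a real library.

```python
import json
import time
import uuid

def governed_completion(model_client, user_id: str, prompt: str) -> str:
    """Log metadata for every prompt and response so AI usage can be
    observed, audited, and limited per the governance plan."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user_id,
        # Log sizes rather than raw content, in case prompts hold
        # sensitive data.
        "prompt_chars": len(prompt),
    }
    # `model_client.complete` is a hypothetical stand-in for a real call.
    response = model_client.complete(prompt)
    record["response_chars"] = len(response)
    # In production this would go to an append-only audit store, not stdout.
    print(json.dumps(record))
    return response
```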

Employee Training is Key

There are many lessons to be learned from shadow IT, the creation and/or use of technology and software without the knowledge or approval of IT. Reining it in has long been an ongoing battle for IT teams. Gartner reported that in 2022, some 41% of employees acquired, modified, or created technology outside the visibility of IT. With the rapid rise of generative AI, it’s safe to say those numbers have skyrocketed. Capterra’s 2023 survey on shadow IT and project management found that 57% of small to midsized businesses reported high-impact shadow IT instances.

What’s the solution? Employee education plays a key role. Training built around data types and potential risks is crucial. Employees must be taught the difference between a public gen AI model and a proprietary AI model. They also need to know that while gen AI may be the latest shiny new thing, and it’s easy and fun to use, it is also incredibly easy to misuse. The repercussions of feeding sensitive data into a generative AI model can be long-lasting. Limiting access and implementing strict protocols for the management of sensitive data should be at the top of every IT team’s agenda.

Lastly, working to create a culture embracing AI knowledge and providing continuous learning opportunities is the key to building expertise among your employees and embracing the AI-driven path of the future.

An Imperative for Data Discovery and Classification

Classifying data helps define who gets access to what, ensures that employees have the information they need to do their jobs effectively, and minimizes the risk of accidental data exposure or unauthorized use.

Understanding and managing data appropriately is paramount in an age where data is both an asset and a potential liability. If you’ve not yet begun your data discovery, classification, and management processes, it’s time to pick up the pace. Better data classification can result in better data management and a finer-tuned approach to access.
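
As a quick illustration of how classification can drive access decisions, here’s a minimal Python sketch. The sensitivity tiers and the rule that only PUBLIC and INTERNAL data may reach an external gen AI service are assumptions made for the example, not a prescription.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Assumed policy for this example: only the two lowest tiers may ever
# reach an external generative AI service.
GEN_AI_ALLOWED = {Sensitivity.PUBLIC, Sensitivity.INTERNAL}

def may_send_to_gen_ai(label: Sensitivity) -> bool:
    """Classification drives access: anything above INTERNAL stays
    inside the corporate boundary."""
    return label in GEN_AI_ALLOWED

assert may_send_to_gen_ai(Sensitivity.PUBLIC)
assert not may_send_to_gen_ai(Sensitivity.RESTRICTED)
```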

The Role of Data Governance and Security Tools

Policies and education are great, but data governance and security tools not only work together, they also enable organizations to enforce adherence. Data loss prevention (DLP), threat intelligence, cloud-native application protection platforms (CNAPP), and extended detection and response (XDR), all of which we’ve discussed previously in this series, are tools that help prevent unwanted exfiltration and provide a layer of protection.

A Quick Cybersecurity / AI Tool Roundup

The global AI in cybersecurity market is expected to reach $38.2 billion by 2025, and an estimated 50% of organizations already actively rely on AI-driven security tools in some way. Additionally, 88% of cybersecurity pros think AI will be essential for performing security tasks more efficiently, while 71% think it will be used to conduct cyberattacks in the very near term.

One of the things we like to do in this SecurityANGLE series is highlight vendors we are tracking. Here are seven vendors offering solutions for securing generative AI that we think you should know about:

Google Cloud Security AI Workbench. Built with Duet AI in Google Cloud, Security AI Workbench offers AI-powered capabilities that help assess, summarize, and prioritize threat data across proprietary and public sources. It is built on the Vertex AI infrastructure and leverages threat intelligence from Google, Mandiant, and VirusTotal. It’s powered by Sec-PaLM 2, a specialized security LLM, and features extensions that allow partners and customers to build on top of the platform while keeping their data isolated and retaining control over it. As we would expect, Security AI Workbench provides both enterprise-grade security and compliance support.

Microsoft Copilot for Security. Microsoft bills its Copilot for Security as providing the ability to “protect at the speed and scale of AI,” and the solution is now generally available. Copilot for Security is integrated with Microsoft’s security ecosystem and interoperates with Microsoft Sentinel, Defender, and Intune. Copilot leverages AI to summarize vast data signals into key insights, detect cyber threats proactively, enhance threat intelligence, and automate incident response. The solution is also designed to be easily used by more junior staffers, providing easy-to-follow, step-by-step guidance that empowers them to learn and act quickly without the need for intervention by senior staffers.

CrowdStrike Charlotte AI. CrowdStrike’s Charlotte AI utilizes conversational AI to help security teams move and respond quickly. Charlotte AI is built on the Falcon platform and boasts NLP capabilities, allowing customers to Ask, Answer, and Act. CrowdStrike estimates that the tool allows customers to complete security tasks up to 75% faster, absorb thousands of pages of threat intelligence in seconds, reduce analyst workload, and improve efficiency. Equally compelling, CrowdStrike estimates Charlotte AI can help write technical queries some 57% faster, even for users who are new to cybersecurity.

Howso. Howso (formerly known as Diveplane) is a company I’ve been watching closely. Founded by Dr. Michael Capps, Dr. Chris Hazard, and Mike Resnick, Howso has doubled down on advancing trustworthy AI as the global standard. For the team at Howso, the focus is on AI you can trust, audit, and explain. The Howso Engine, the foundation of everything they build, is an open-source ML engine that provides exact attribution back to input data, allowing for full traceability and accountability of influence.

The Howso Synthesizer, built on the Howso Engine, generates synthetic data that behaves the way you would expect, with no privacy or compliance risks. Think about it: synthetic data you can trust. There are myriad use cases in healthcare, government, fintech, and beyond, where organizations need to securely analyze and share data internally and with other agencies. High-performance AI you can trust is the holy grail for Howso, and I am here for it. I expect big things from this team and this company.

Cisco Security Cloud. Cisco Security Cloud is an open, integrated security platform for multicloud environments built on zero-trust principles. Cisco has integrated generative AI into Security Cloud, improving threat detection, making policy management easier to administer, and simplifying security operations with the help of advanced AI analytics. Cisco Security Cloud includes the Cisco User Protection Suite, Cisco Cloud Protection Suite, and Cisco Breach Protection Suite.

SecurityScorecard. SecurityScorecard’s solutions extend to Supply Chain Cyber Risk, external Security & Risk Operations, and forward-looking threat intelligence through the Threat Landscape product line. Conveniently, the company also provides cybersecurity insurance plans. SecurityScorecard’s AI-driven platform uses GPT-4 to deliver detailed security ratings that reflect an organization’s overall security posture. The tool accepts natural language queries, and customers receive actionable insights they can use immediately.

Synthesis AI. Synthesis AI’s Synthesis Humans and Synthesis Scenarios leverage a proprietary combination of gen AI and cinematic CGI pipelines as an extension of the company’s data generation platform. The Synthesis platform can programmatically create perfectly labeled images for ML models, an approach we expect to see more of moving forward. Teams can also use Synthesis Humans for realistic security simulation and cybersecurity training.

That’s a wrap for this episode of the SecurityANGLE. We appreciate you watching, listening, and/or reading. As always, if you have something you want us to cover or a unique or innovative security solution, we are always interested in hearing from you.

Find and connect with us on social media here:

Shelly Kramer on LinkedIn | Twitter/X

Jo Peterson on LinkedIn | Twitter/X

Image credit: cottonbro studio
