In this episode of the SecurityANGLE, I’m joined by my fellow analyst and member of theCUBE Collective community, Jo Peterson, for a conversation about the rise of AI-enhanced phishing, smishing, and vishing, and how to combat it, with Bil Harmer, the operating partner and CISO at Craft Ventures. Prior to joining Craft Ventures, Bil was the head of security and the global privacy officer for SuccessFactors pre-IPO, and through that public offering into the acquisition by SAP. From there, he went to Zscaler pre-IPO for about five years, then to SecureAuth for a stint, and then joined Craft Ventures in 2022.
See the full conversation in this SecurityANGLE episode here:
AI-Enhanced Phishing, Smishing, and Vishing are Incredibly Popular Attack Tactics
According to a study by Deloitte, a whopping 91% of all cyberattacks begin with a phishing email, and 32% of successful breaches involve the use of phishing techniques. We’ve talked a lot about this here, and it’s a given that humans are the weakest link in the security chain or, as Bil puts it, humans are the easiest target. As we all rush to embrace AI, threat actors are also wasting no time leveraging AI-powered tech to help supercharge their efforts at wreaking havoc, getting access to data, and infiltrating networks by way of phishing, smishing, and vishing.
Email is an incredibly popular attack vector, which explains the allure of phishing, and AI-enhanced tactics are on the rise. Threat actors are focusing their efforts on:
– Developing more intelligent ways to craft phishing attacks based on previous failed attempts.
– Leveraging automation and machine learning (ML) technology to send large volumes of customized phishing attacks (spear-phishing) to target enterprises to increase the likelihood of infection.
– Using three primary methods for phishing attacks: link-based attacks, malicious attachments, and natural language threats.
How Phishing Attacks are Getting a Big Boost from Generative AI
Phishing attacks are getting a big boost from generative AI in a few different ways, which we explored in this conversation with Bil. A few years ago, we would see phishing attacks with misspellings or poorly worded messages. While that might seem like sloppy work, it was often deliberate: threat actors were filtering for targets who weren’t aware enough of this kind of attack to read messages closely and avoid clicking on links.
Now that we have generative AI in the mix, threat actors can ingest enough data around the tone, cadence, and word choices of, for instance, CEOs and other senior leaders to help craft messages that are not only specific to the target but can be produced at scale and at volume. In the past, one cybercriminal was going after one person at a time. Today, with AI-enhanced phishing, threat actors can focus their efforts strategically, targeting, say, every CEO across the oil and gas industry, and start building those phishes and firing them out.
In the same way we are incorporating generative AI across our organizations to increase efficiencies and spur productivity, threat actors are employing it to speed up and scale out their attacks.
Bil shared thoughts on AI-enhanced phishing attacks and the reality that businesses could eliminate many phishing attacks simply by implementing DMARC across the board. If you’ve got an AI that can read through emails in bulk and find the suspicious ones, it can take a big load off security teams’ plates. We already see that today in Google Workspace and Microsoft Outlook, and Bil predicted that in the not-too-distant future, hackers will begin to abandon the email game and move to SMS and voice attacks because neither of those channels is well-governed.
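To make the DMARC point concrete, here is a rough sketch of the DNS TXT records involved. The domain, mail provider, DKIM selector, and report address below are all placeholders, and the policy values are illustrative rather than a recommendation:

```
; SPF: lists which servers are allowed to send mail for the domain
example.com.                IN TXT "v=spf1 include:_spf.mailprovider.example -all"

; DKIM: public key receivers use to verify message signatures (selector "s1" is a placeholder)
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"

; DMARC: tells receivers to reject mail failing SPF/DKIM alignment and where to send reports
_dmarc.example.com.         IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

With a policy of p=reject published, receiving mail servers can discard spoofed messages claiming to come from the domain before they ever reach an inbox, which is why DMARC adoption would cut out a large share of impersonation-based phishing.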
AI is Also Effective at Powering Vishing and Smishing Attacks
While AI-enhanced phishing attacks are the most popular today, SMS and voice attacks are becoming increasingly common. Vishing refers to voice-based campaigns (phone calls), and smishing to text-based attacks. AI has changed vishing so that attacks can be launched live, with an attacker calling victims and deepfaking a voice in real time. These are the kinds of attacks you hear about when someone gets a call from a person claiming to be their child or partner who has had an accident or some emergency and needs money sent immediately.
Bil shared that a good rule of thumb for families and work teams is to create a code word. That way, if you get a suspicious call or message, it’s super easy to simply ask, “What’s the code word?” If the caller can’t answer correctly, you know you’re dealing with an impostor.
Bottom line is that adding generative AI into the equation means that these attacks will quickly become faster and cheaper, and we expect to see these attacks start to increase in velocity.
We also discussed the reality that in addition to these threat vectors, we will also see a rise in video phishing with fake video. For people like the three of us, who are public personalities and create a ton of video content, our images and voices are all over the web, making it relatively easy for nefarious hackers to experiment with creating fake video. Yay! One more thing to worry about, right?
Resemble.ai and Deepfake Audio Detection Capabilities
During our conversation, Bil introduced us to Resemble.ai, which can create high-quality custom AI voices, clone any voice, and generate dynamic, iterable, and unique voice content for the web or API.
Following the show, I did some additional research and walked away very impressed with Resemble’s capabilities. Not only can it be used to create AI voices, but it also has robust deepfake audio detection capabilities built in.
Image source: Resemble.ai
Bil shared that a few years ago, the team at Resemble AI started thinking that if they could build these things (AI voice generators), they could also use technology to detect them. In milliseconds, the Resemble AI algorithm can detect whether a voice is real or fake, with something like 90-95% accuracy.
Think about use cases here, like in contact centers. An agent might think they are getting a call from Shelly Kramer asking for some specific information about her account. But to test the caller’s identity, the agent can feed the voice on the call into technology like Resemble’s and quickly see whether they are being scammed. I can see some great potential here in contact center operations and the benefit of creating alliances with Webex, Google, RingCentral, Five9, and other big players in the contact center market.
Other use cases here include the political realm to help stop the spread of misinformation and fake news, as well as celebrities (and executives) who need to be able to find and flag fake audio being used without their consent. There are many use cases here, and getting excited about these capabilities is easy.
AI-Powered Risk-Based Authentication and Why It Matters
Authentication today is problematic at best. In our conversation with Bil, we explored what has been called “the mother of all breaches” (MOAB), revealing some 26 billion records and a whopping 12 terabytes of information. While no doubt the massive leak contains information from past data breaches, it is thought to contain LinkedIn, Twitter, Weibo, Canva, Adobe, MyFitnessPal, Tencent, and other site user credentials, and the leaked data set is extremely dangerous. Threat actors could be — and likely are — using this data to launch identity theft attacks and phishing attacks, gain unauthorized access to personal accounts, and launch highly targeted cyberattacks. Bil shared that this appears to be 10 years of compromised credentials brought into a single database, cross-referenced, and beautifully indexed. This is where two-factor authentication becomes critically important because you can no longer be considered “safe” simply by using long, complex passwords. If they are compromised, it doesn’t matter how long and complex they are; hackers have got you.
Want to blow your mind? Here’s a look from Cybernews at the brands with 100M+ leaked records:
Image source: Cybernews
Think your data is in this breach? There is every reason to believe that the answer is a resounding “yes.”
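One practical way to check whether a given password has surfaced in known breaches is Have I Been Pwned’s Pwned Passwords service, which uses k-anonymity: only the first five characters of the password’s SHA-1 hash are ever sent, and matching happens locally. Here’s a minimal Python sketch (the range endpoint is real; error handling is deliberately thin):

```python
import hashlib
import urllib.request

PWNED_RANGE_URL = "https://api.pwnedpasswords.com/range/"

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach corpuses (0 if none)."""
    prefix, suffix = sha1_prefix_suffix(password)
    # Only the 5-character hash prefix leaves the machine (k-anonymity).
    with urllib.request.urlopen(PWNED_RANGE_URL + prefix) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

For a notoriously common password, the returned count is enormous, and that’s exactly the point: once a credential sits in a cross-referenced dump like MOAB, its length and complexity no longer protect it.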
That’s where using AI for risk-based authentication comes in. It allows systems to evaluate risk on every interaction and authenticate continuously, which is where the industry is headed. We’ve been talking about MFA for a long time now, and we collectively urge anyone reading this to turn on any form of MFA immediately. This is a critical step for keeping credentials safe until we get to the point where we no longer use passwords.
As Jo and I have discussed on the show before, AI can help detect anomalies during authentication, which is another superpower in the CISO’s toolkit.
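As a toy illustration of how risk-based authentication works, a login attempt can be scored from contextual signals and mapped to an action: allow, step up to MFA, or block. The signal names, weights, and thresholds below are entirely made up for the sketch; a real system would learn them from behavioral data:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool       # device fingerprint seen before for this user
    usual_country: bool      # geolocation matches the user's history
    impossible_travel: bool  # distance/time since last login is physically implausible
    off_hours: bool          # outside the user's normal activity window

def risk_score(a: LoginAttempt) -> int:
    """Sum illustrative weights for each risky signal on this attempt."""
    score = 0
    if not a.known_device:
        score += 30
    if not a.usual_country:
        score += 25
    if a.impossible_travel:
        score += 40
    if a.off_hours:
        score += 10
    return score

def decide(a: LoginAttempt) -> str:
    """Map the risk score to an authentication action."""
    score = risk_score(a)
    if score >= 70:
        return "block"   # too risky even with a second factor
    if score >= 30:
        return "mfa"     # step up: require a second factor
    return "allow"       # low risk: authenticate silently
```

A familiar device logging in from a familiar place sails through, while a new device from an unexpected country gets challenged for a second factor, which is the anomaly-detection superpower in practice.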
Digital Identity Can Act as a Single Truth
As our conversation shifted to digital identity, Bil shared that as we move to an immersive digital identity, we need to get to a place where we have a digital identity that is inextricably tied to the human. That does not mean that we do away with anonymous access to things, but instead that there should be a way to say when we need to, “Yes, this is me. You’re seeing my video, you’ve got my email, and here is my digital identity to validate that I am who I say I am.” He shared there is talk and exploration of this use of digital identity as a single truth in both the U.S. government and the Canadian government, and we hope we’ll get there sooner rather than later.
The Biggest Challenges to CISOs and their Teams
Being a CISO today has got to be one of the most stressful jobs — we talk about this often on this show. As our conversation addressed some of the biggest challenges to CISOs and their teams, Bil noted the legal exposure of being a CISO and the consideration folks are giving to whether or not they want to take on these roles. In what has been a watershed moment for cybersecurity, we’ve seen Tim Brown, the former CISO of SolarWinds, charged by the SEC with fraud and internal control failures, and he wasn’t even the CISO of the company at the time of the breach. Ironically, Brown was named CISO of the Year by the Globee Cybersecurity Awards in April of 2023 and charged by the SEC in October of that same year. It’s important to note that these charges stem from allegations that Brown made false and misleading statements about the company’s cybersecurity practices and that the company had inadequate internal controls in place to prevent cyberattacks.
Bil said that the crux of the SEC’s case against Brown is the allegation that he knew the company didn’t have the resources to do what it claimed to do, and everyone knows what those budget fights are like. In far too many instances, security budgets aren’t prioritized. The case alleges that Brown and the company made statements, both in SEC filings and on the company’s website, about the efficacy of their security program.
This has become an industry reality today as CISOs are trying to decide whether they truly want to take on the responsibility and the liability that comes with the title since they don’t report to the CEO or the board but instead to the CIO or someone further down the line. Bil recommends every CISO have a year’s salary banked, so they can walk away when a situation arises that they disagree with. He opined that every CISO will find themselves in a situation sooner or later where they don’t agree with a policy or a stance and are being told to shut up and move forward. At that point, many CISOs will simply decide that it’s a legal gray area and they are not interested in crossing a line and potentially having the SEC or the FBI come back on them.
He shared that he is seeing colleagues ditch the “chief” title and simply take on a “head of security” role reporting to a CIO, so the CIO is the one on the hook in the event of a cyber event. That’s a very viable option. Others are choosing to fight to change the reporting structure and take the responsibility, but also demand the authority (and the budget) that goes along with it. Harmer says authority, reporting structure, and budget must coexist in every situation. Sometimes that’s possible; other times it’s a dream. It’s no wonder the average tenure of a CISO is a mere two years. My hair turns grayer just thinking about the stress these folks deal with.
Bil Harmer’s Best Advice to CISOs
As we wrapped the show, we asked Bil to provide his best advice to CISOs and IT pros working to get their arms around AI security in this new age. His response: “Don’t sweat the small stuff. Lots of things are going to fall by the wayside, and they are going to fail. But you’ve got to find your true north, and the best way to do that is to immerse yourself in the business side. Understand the business, speak to the executive team at a business level, and avoid getting mired in the details. Talk the business talk. Show that you know the business goals and can help them achieve them by reducing and mitigating the risk to the business — that’s what a CISO’s job is.”
And that? Perfect advice to wrap our show.
If you enjoyed this episode of theCUBE Research’s the SecurityANGLE focused on AI-enhanced phishing and beyond, be sure to take a minute to hit the subscribe button over there on your right and also hop over and subscribe to our YouTube channel so that you don’t miss an episode.
Find and connect with us here:
Shelly Kramer on LinkedIn | Twitter/X
Jo Peterson on LinkedIn | Twitter/X
Other recent episodes of the SecurityANGLE:
Cybersecurity: Evolution of the AI Threat, Three Stages to Watch in 2024
The Role of Data Loss Prevention Tools in Protecting Against AI Data Exfiltration
Top 5 Enterprise Risk Management Trends We’re Tracking in 2024
Disclosure: theCUBE Research is a research and advisory services firm that engages or has engaged in research, analysis, and advisory services with many technology companies, which can include those mentioned in this article. The author holds no equity positions with any company mentioned in this article. Analysis and opinions expressed herein are specific to the analyst individually, and data and other information that might have been provided for validation, not those of theCUBE Research or SiliconANGLE Media as a whole.