AI-powered malware is a growing threat, one that businesses like yours need to take seriously. More and more attacks are built and executed using AI, which is changing the way these attacks look and act.
This week, we’re exploring what these AI-driven changes mean for your business, plus how smart businesses are fighting fire with artificial-intelligence fire. Let’s dive in!
The Scope of the Problem: Overstated, But Real
First up, we need to clarify something: recent reports that some 80% of malware is built using AI all trace back to a single research paper that was criticized and has since been retracted. The last thing we want to do is propagate bad information. There’s no clear evidence that a whopping 4 out of 5 malware attacks are now AI-driven — but there’s plenty of evidence that bad actors are using AI to soup up their attacks.
Think of the consumer-grade AI tools you’ve heard of and maybe even used, like ChatGPT for chat and Claude Sonnet for coding. These tools have safeguards so that everyday people can’t tell them “hey, build me a virus, thanks.” (Well, you can tell them that, but they won’t cooperate.)
But the technology underlying those systems definitely can create harmful stuff. Let’s imagine a darkest-timeline version of each: call them ScammerGPT and MalSonnet.
The names are fictional, but the concept — LLM tools without safeguards — definitely isn’t. The core of the problem is that the bad guys can use these tools to create and iterate attacks faster.
So let’s look at a few examples of how the bad guys are using AI to create and refine their attacks.
1. Better Email Scams
First up, LLM tools empower anybody to write at a fairly proficient, convincing level. It used to be that scam emails and phishing attempts were kind of terrible — full of typos and obvious giveaways that something wasn’t right. Now, with generative tools for everything from text to graphics, scammers can create content that sounds and looks far more legit.
By eliminating the red flags that used to help people spot these scams, AI tools make them more effective.
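For the technically curious, here’s a rough sketch of one defensive check that matters more now that typos are no longer a giveaway: flagging sender domains that sit just a character or two away from domains you trust. It’s a simplified illustration, not a production filter, and the domain list is a made-up example.

```python
# Simplified lookalike-domain check: with typos no longer a reliable red
# flag, automated checks on the sender's domain become more important.
# The trusted-domain list below is a made-up example.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance, computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"yourcompany.com", "microsoft.com", "paypal.com"}

def looks_like_spoof(sender_domain: str) -> bool:
    """Exact matches pass; near-misses (1-2 edits away) get flagged."""
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(sender_domain, d) <= 2 for d in TRUSTED_DOMAINS)

print(looks_like_spoof("micros0ft.com"))  # True  (one character swapped)
print(looks_like_spoof("microsoft.com"))  # False (exact match)
```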
2. More Effective Authentication Bypass
AI tools can also help the bad guys crack passwords and bypass certain other authentication methods, like CAPTCHAs (those annoying picture quizzes that ask you to find all the boats or stop signs).
The same dynamic applies here: AI can sharpen the techniques skilled threat actors already use, and it can help less skilled attackers build malicious apps and utilities of their own.
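On the defensive side, one concrete countermeasure is checking whether a password already appears in a known breach, since breached passwords are the first thing cracking tools try. Here’s a minimal Python sketch using the free Have I Been Pwned range API; thanks to its k-anonymity design, the full password (and even its full hash) never leaves your machine.

```python
# Minimal breached-password check via the Have I Been Pwned range API.
# Only the first 5 characters of the password's SHA-1 hash go over the wire.
import hashlib
import urllib.request

def times_breached(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # The API returns lines of "HASH_SUFFIX:COUNT" for the given prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

hits = times_breached("password123")
print(f"Seen {hits} times in breaches" if hits else "Not in known breaches")
```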
3. More Convincing Audio and Video Scams
AI-powered audio generators and even video avatars have come a long way: text-to-speech tools sound more natural than ever, meaning you can get a very professional-sounding, reassuring voice to say just about anything.
Much has been written about scamming Grandma with realistic depictions of grandchildren asking for money, and these attacks have business-world counterparts, too. Pulling off that kind of targeted impersonation requires personal details that everyday scammers may not have. But even without that level of targeting, an extremely reassuring, professional-sounding “IT admin” or “Microsoft employee” could convince many workers to give up passwords — or worse!
The Problem: It’s All Faster and More Frequent
The problem with all of this goes beyond any specific technique. The bigger issue is speed: scammers can iterate faster, which means more attacks, launched more often. The odds that something eventually works go up considerably.
What You Can Do
The good news is that AI can do good in this arena, too: AI-driven cybersecurity tools can analyze threats more dynamically than traditional software and flag emerging threats faster than a human analyst would spot them.
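To make that concrete, here’s a toy example of the core idea behind many of these tools: learn what “normal” activity looks like, then flag the outliers. It uses scikit-learn’s IsolationForest on fabricated login data, purely to illustrate the technique; real products are far more sophisticated.

```python
# Toy anomaly detection: train on "normal" login activity, flag outliers.
# All data here is fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One row per login event: [hour of day, MB downloaded in session]
normal_logins = np.column_stack([
    rng.normal(10, 2, 200),   # logins cluster around business hours
    rng.normal(50, 15, 200),  # typical download volumes
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

suspicious = np.array([[3.0, 900.0]])  # 3 a.m. login with a huge download
print(model.predict(suspicious))       # -1 means "flagged as an anomaly"
```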
Now is a great time for a cybersecurity review, and we can help. Reach out to our team to get started.