Written by Zack Calloway

The Dark Sides of AI Revisited: 3 New Tactics Criminals Are Already Using

Criminals are exploiting AI to create more convincing scams

We wrote a couple of months ago about the dark side of generative AI. In that post we showed you a few ways that these new chatbots and other AI-powered generative technologies could do harm.

Some of the things we talked about — like damaging your search engine optimization (SEO) or how generative AI generally has a loose relationship with telling the truth — aren’t exactly malicious or nefarious. They’re just things you need to know before you jump into using these tools.

But other concerns go further: they have real potential for abuse by criminals and other bad actors.

A few months ago, “could” was the operative word — but not anymore. Now it’s happening pretty much like we predicted.

Here’s what you need to know about 3 new tactics criminals are already using to weaponize generative AI.

1. Phishing Emails Are Getting Harder to Spot

In the past, most phishing emails had some tell-tale signs, like questionable grammar, misspellings, or just plain odd word choices.

But now with ChatGPT, criminals don’t have to be expert wordsmiths to fool you. They can get ChatGPT to spit out something that’s pretty convincing (and almost always spelled correctly).

Another problem here is that, in the past, even the best phishing emails had a shelf life. Fool enough people with the same exact message, and email providers would eventually catch that wording and block it.

But with ChatGPT and similar tech, scammers can generate unique messages whenever they want, rather than rely on the same ones over and over.

We’re already seeing an improvement in the “quality” of phishing emails. So make sure your team is trained and you have the right tools in place to block malicious links.

2. Misinformation at Scale

Misinformation is a threat to society, and to businesses as well. The thing is, until now, misinformation has been tough to do at scale. A post here or there might go viral without actually being true, but creating and sustaining a misinformation campaign is much harder. It would take veritable armies of keyboard warriors churning out content (which, in the case of state-sponsored misinformation, may or may not read well in the target country) to do misinformation at scale.

But ChatGPT can craft 25 variations on a fake news story, in near-perfect English, in just a minute or two. The prospect of misinformation at scale is a sobering one.

Of course, the folks who make these tools do build in some safeguards. But criminals and good guys alike are constantly finding ways around those safeguards, and there is no shortage of posts demonstrating how to get ChatGPT to do things it's not supposed to do.

3. Malicious Computer Code

We tend to focus on how good ChatGPT is at writing human-sounding text, but that’s far from the only thing it can do.

ChatGPT can also write code in several programming languages. And it’s reasonably good at it!

We warned you before that sometimes it makes mistakes — and that’s still a concern. But even scarier is that it could be manipulated to write malicious code, or malware.

Just as people have bypassed controls on offensive language, scammers are working to bypass the guardrails around code.

The good news here is that it takes a decent amount of knowledge to manipulate ChatGPT into writing malicious code — so your everyday scammers probably can’t do it.

Makers of generative AI tools are aware that their products can be abused, and they are trying to create tools to prevent this. But it’s a cat-and-mouse game, and we’re pretty sure the bad guys are going to keep finding exploits for years to come.

Concerned that your team might not be ready for a new wave of AI-powered cyber threats? We can help. Reach out today to discuss your concerns.
