Doriel Abrahams, Head of Risk at Forter, takes a look at how generative AI is going to have a massive impact on fraud
Generative AI and, more specifically, AI-based chatbots such as ChatGPT have continued to dominate headlines and amaze consumers this year. With their rise in popularity and usage, there are a lot of interesting – and important – conversations to be had around AI. Unfortunately, one of those is the technology’s potential to fuel fraud.
ChatGPT, Meet Social Engineering
The reality of online crime is that the weakest link is often a human one. Humans may be bored, worried, stressed, inattentive, desperate, and scared — and a clever fraudster can exploit all of these emotions. ChatGPT and its generative AI friends will inevitably become fraudsters’ latest weapon in the ongoing fraud battle.
Here are just a few of the ways I see this playing out:
Pig butchering scams: A nasty term for an ugly scam, in which people are tricked into investing in fake stocks or through fake investment apps. Some victims lose thousands or even hundreds of thousands of pounds. Victims are lulled into a false sense of security by a relationship the scammer builds via text message. ChatGPT and similar AI bots are friendly, conversational, and convincing, which makes them ideal for building, at the very least, the initial relationship in a pig butchering scam, especially since these scams typically follow a script.
Romance scams: These work on a similar principle, and a clever generative AI chatbot makes a good substitute for a low-grade human scammer. Much of the chat is formulaic, as you’ll see if you search for victims describing their experiences. One human could supervise several chatbots, probably without losing much of the scam’s success rate.
Business Email Compromise (BEC) schemes: An old favourite with fraudsters, BEC is still going strong. It has evolved over the years, and today’s scam emails are often personalised to match the target’s company, role, and the tools or programs their company uses. Generative AI will have no trouble producing precisely this kind of email, and because it creates a fresh variant for every prompt, searching for reports of the exact email you’ve just received becomes much harder.
Deepfake phishing: Here I’m thinking of tailored phishing. You know those cases where an employee is tricked into sending large sums of money because they believe their boss or the CEO told them to? How much more convincing will those attempts be when the fraudster can ask generative AI to create an email, a message, or even a voice message in the style of that executive, whose written opinions, interviews, and panel discussions are easy to find and mimic?
ChatGPT has already been used “in the wild” to quickly create all the materials for a scam. As with other uses of ChatGPT and its competitors, “prompt engineering” is critical: you must know what to ask for. But, like using search engines effectively, that skill can be learned, and it requires no special technical knowledge or ability.
In many ways, this is an expansion of the Crime-as-a-Service industry that already dominates the online criminal ecosystem, in which fraudsters can buy (or buy access to) stolen data, bots, scripts, identity-switching apps, and more.
The difference is that this is all “homegrown”: someone needs little understanding of that ecosystem to use generative AI to make their fraud faster, easier, more effective, and more expansive.
The Real Worry of the Reality Check
The enticing thing about tools such as ChatGPT is that they feel ripe with possibility and potential. Yet for all that they are impressive and fun to use, they remain buggy, inaccurate, and unreliable: chat-based AI confidently reports nonsense or “hallucinates” imaginary findings, and image-based generative AI still struggles to draw human hands.
But the question everyone is asking is: if this is what they can do now, what will they be able to do next year?
It’s an exciting thought but also a frightening one, with concerns around the potential power of AI culminating in an open letter signed by more than 1,100 AI experts and technology executives, including Elon Musk and Steve Wozniak. The sense of danger is clearly present, and it is causing alarm.
With fraud prevention, we know that getting the machine learning and technology right is only half the battle. What makes fraud prevention genuinely effective is technology guided and informed by the research and intuition of human experts.
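To make that idea concrete, here is a minimal, hypothetical sketch of the pattern: a machine-learning risk score combined with analyst-authored rules, with borderline cases routed to human review. Every name, threshold, and rule below is illustrative only; it is not a description of Forter’s actual system.

```python
# A hypothetical sketch of "technology guided by human experts": an upstream
# ML model produces a risk score, and fraud analysts contribute rules and
# thresholds on top of it. All values here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    account_age_days: int
    ml_risk_score: float  # 0.0 (safe) to 1.0 (risky), from an upstream model


def decide(txn: Transaction) -> str:
    # Analyst-authored rule: brand-new accounts moving large sums always get
    # a human look, regardless of the model score (expert intuition can
    # override the technology).
    if txn.account_age_days < 7 and txn.amount > 1_000:
        return "review"

    # Model-driven thresholds, tuned by fraud researchers over time.
    if txn.ml_risk_score >= 0.9:
        return "decline"
    if txn.ml_risk_score >= 0.6:
        return "review"  # borderline: send to a human analyst
    return "approve"


# The rule fires even though the model score looks safe.
print(decide(Transaction(amount=2_500, account_age_days=2, ml_risk_score=0.3)))   # review
print(decide(Transaction(amount=40, account_age_days=400, ml_risk_score=0.05)))   # approve
```

The design choice the sketch illustrates is the one the paragraph above describes: the model does the heavy lifting at scale, while human experts encode the intuition and research the model cannot learn on its own, and ambiguous cases land with a person rather than being decided blindly.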