‘People are just not worried about being scammed’
Jane Wakefield, Technology reporter
When Clark Hoefnagels' grandmother was scammed out of $27,000 (£21,000) last year, he felt compelled to do something about it.
"It felt like my family was vulnerable, and I needed to do something to protect them," he says.
"There was a sense of responsibility to deal with all the things tech related for my family."
As part of his efforts, Mr Hoefnagels, who lives in Ontario, Canada, ran the scam or "phishing" emails his gran had received through popular AI chatbot ChatGPT.
He was curious to see if it would recognise them as fraudulent, and it immediately did so.
From this, the germ of an idea was born, which has since grown into a business called Catch. It is an AI system that has been trained to spot scam emails.
Currently compatible with Google's Gmail, Catch scans incoming emails, and highlights any deemed to be fraudulent, or potentially so.
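Catch's internals are not public, but the general idea it describes, scoring an incoming email against known phishing signals, can be sketched in a few lines. The keyword list and threshold below are illustrative assumptions, not Catch's actual model; real systems use trained classifiers rather than hand-written rules:

```python
import re

# Illustrative phishing signals -- an assumption for this sketch,
# not a real product's feature list.
SUSPICIOUS_PHRASES = [
    r"verify your account", r"urgent action required",
    r"wire transfer", r"gift card", r"click here immediately",
]

def phishing_score(email_text: str) -> float:
    """Return a 0-1 score: the fraction of suspicious phrases present."""
    text = email_text.lower()
    hits = sum(bool(re.search(p, text)) for p in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def is_likely_scam(email_text: str, threshold: float = 0.4) -> bool:
    """Flag an email whose score crosses an (assumed) threshold."""
    return phishing_score(email_text) >= threshold
```

In practice a service like Catch would flag borderline messages for human review rather than act on a hard cut-off, since false positives on legitimate mail are costly.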
AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are also known as generative AI. This is because they can generate new content.
Initially this meant a text reply to a question, request, or conversational prompt, but generative AI apps can now increasingly create images, voice content, music and documents.
People from all walks of life and industries are increasingly using such AI to enhance their work. Unfortunately, so are scammers.
In fact, there is a product sold on the dark web called FraudGPT, which allows criminals to make content to facilitate a range of frauds, including creating bank-related phishing emails, or to custom-make scam web pages designed to steal personal information.
More worrying is the use of voice cloning, which can be used to convince a relative that a loved one is in need of financial help, or even in some cases to convince them the individual has been kidnapped and needs a ransom paid.
There are some pretty alarming stats out there about the scale of the growing problem of AI fraud.
Reports of AI tools being used to try to fool banks' systems increased by 84% in 2022, according to the most recent figures from anti-fraud organisation Cifas.
It is a similar situation in the US, where a report this month said that AI "has led to a significant increase in the sophistication of cyber crime".
Given this increased global threat, you'd imagine that Mr Hoefnagels' Catch product would be popular with members of the public. Sadly that hasn't been the case.
"People don't want it," he says. "We learned that people are not worried about scams, even after they've been scammed.
"We talked to a guy who lost $15,000, and told him we would have caught the email, and he was not interested. People are not interested in any level of protection."
Mr Hoefnagels adds that this particular man simply did not think it would happen to him again.
The group that is concerned about being scammed, he says, is older people. Yet rather than buying protection, he says that their fears are more often assuaged by a very low-tech tactic: their children telling them simply not to answer or reply to anything.
Mr Hoefnagels says he fully understands this approach. "After what happened to my grandmother, we basically said 'don't answer the phone if it's not in your contacts, and don't go on email anymore'."
As a result of the apathy Catch has faced, Mr Hoefnagels says he is now winding down the business, while also looking for a potential buyer.
While individuals can be blasé about scams, and about scammers' increasing use of AI specifically, banks cannot afford to be.
Two thirds of finance firms now see AI-powered scams as "a growing threat", according to a global survey from January.
Meanwhile, a separate UK study from last December said that "it was only a matter of time before fraudsters adopt AI for fraud and scams at scale".
Thankfully, banks are now increasingly using AI to fight back.
AI-powered software made by Norwegian start-up Strise has been helping European banks spot fraudulent transactions and money laundering since 2022. It automatically, and rapidly, trawls through millions of transactions per day.
"There are lots of pieces of the puzzle you need to stick together, and AI software allows checks to be automated," says Strise co-founder Marit Rødevand.
"It is a very complicated business, and compliance teams have been staffing up drastically in recent years, but AI can help stitch this information together very quickly."
Ms Rødevand adds that it is all about keeping one step ahead of the criminals. "The criminal doesn't have to care about legislation or compliance. And they are also good at sharing data, whereas banks can't share because of regulation, so criminals can jump on new tech more quickly."
Featurespace, another tech firm that makes AI software to help banks to fight fraud, says it spots things that are out of the ordinary.
"We're not tracking the behaviour of the scammer, instead we are tracking the behaviour of the genuine customer," says Martina King, the Anglo-American company's chief executive.
"We build a statistical profile around what good normal looks like. We can see, based on the data the bank has, if something is normal behaviour, or anomalistic and out of kilter."
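Featurespace has not published its models, but the core idea Ms King describes, flagging activity that deviates from a customer's normal profile, can be illustrated with a simple z-score check. The transaction amounts and the threshold of three standard deviations below are assumptions for illustration only:

```python
import statistics

def is_anomalous(history: list[float], new_amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount sits far outside the customer's
    historical mean, measured in standard deviations (z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(new_amount - mean) / stdev
    return z > z_threshold

# A hypothetical customer who normally makes small purchases:
history = [12.50, 40.00, 22.75, 35.10, 18.00, 27.30]
```

A sudden $5,000 transfer would score far above the threshold and be flagged, while a $30 purchase would not. Production systems profile many signals beyond amount (merchant, time of day, location, device), but the principle of modelling "normal" and flagging deviations is the same.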
The firm says it is now working with banks such as HSBC, NatWest and TSB, and has contracts in 27 different countries.
Back in Ontario, Mr Hoefnagels says that while he was initially frustrated that more members of the public don't comprehend the growing risk of scams, he now understands that people just don't think it will happen to them.
"It's led me to be more sympathetic to individuals, and [instead] to try to push companies and governments more."