AI-powered scams and what you can do about them
AI is here to help, whether you're drafting an email, making some concept art, or running a scam on vulnerable folks by making them think you're a friend or relative in distress. AI is so versatile! But since some people would rather not be scammed, let's talk a little about what to watch out for.
The last few years have seen a huge uptick not just in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same type of tool that helps a concept artist cook up some fantasy monsters or spaceships, or lets a non-native speaker improve their business English, can be put to malicious use as well.
Don't expect the Terminator to knock on your door and sell you on a Ponzi scheme. These are the same old scams we've been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.
This is by no means a complete list, just a few of the most obvious tricks that AI can supercharge. We'll be sure to add new ones as they appear in the wild, along with any additional steps you can take to protect yourself.
Voice cloning of family and friends
Synthetic voices have been around for decades, but it is only in the last year or two that advances in the tech have allowed a new voice to be generated from as little as a few seconds of audio. That means anyone whose voice has ever been broadcast publicly (for instance, in a news report, YouTube video or on social media) is vulnerable to having their voice cloned.
Scammers can and have used this tech to produce convincing fake versions of loved ones or friends. These can be made to say anything, of course, but in service of a scam, they are most likely to make a voice clip asking for help.
For instance, a parent might get a voicemail from an unknown number that sounds like their son, saying how their stuff got stolen while traveling, a person let them use their phone, and could Mom or Dad send some money to this address, Venmo recipient, business, etc. One can easily imagine variants with car trouble ("they won't release my car until someone pays them"), medical issues ("this treatment isn't covered by insurance"), and so on.
This type of scam has already been pulled off using President Biden's voice! The perpetrators behind that one were caught, but future scammers will be more careful.
How can you fight back against voice cloning?
First, don't bother trying to spot a fake voice. They're getting better every day, and there are lots of ways to disguise any quality issues. Even experts are fooled!
Anything coming from an unknown number, email address or account should automatically be considered suspicious. If someone says they're your friend or loved one, go ahead and contact the person the way you normally would. They'll probably tell you they're fine and that it is (as you guessed) a scam.
Scammers tend not to follow up if they are ignored, while a family member probably will. It's OK to leave a suspicious message on read while you consider.
Personalized phishing and spam via email and messaging
We all get spam now and then, but text-generating AI is making it possible to send mass email customized to each individual. With data breaches happening regularly, a lot of your personal data is out there.
It's one thing to get one of those low-effort "Click here to see your invoice!" scam emails with obviously scary attachments. But with even a little context, they suddenly become quite believable, using recent locations, purchases and habits to make it seem like a real person or a real problem. Armed with a few personal facts, a language model can customize a generic email of this kind for thousands of recipients in a matter of seconds.
So what once was "Dear Customer, please find your invoice attached" becomes something like "Hi Doris! I'm with Etsy's promotions team. An item you were looking at recently is now 50% off! And shipping to your address in Bellingham is free if you use this link to claim the discount." A simple example, but still. With a real name, shopping habit (easy to find out), general location (ditto) and so on, suddenly the message is a lot less obvious.
In the end, these are still just spam. But this kind of customized spam once had to be done by poorly paid people at content farms in foreign countries. Now it can be done at scale by an LLM with better prose skills than many professional writers!
How can you fight back against email spam?
As with traditional spam, vigilance is your best weapon. But don't expect to be able to tell generated text apart from human-written text in the wild. There are few who can, and certainly not (despite the claims of some companies and services) another AI model.
Improved as the text may be, this type of scam still has the fundamental challenge of getting you to open sketchy attachments or links. As always, unless you are 100% sure of the authenticity and identity of the sender, don't click or open anything. If you are even a little bit unsure (and this is a good sense to cultivate), don't click, and if you have someone knowledgeable to forward it to for a second pair of eyes, do that.
"Fake you" identity and verification fraud
Due to the number of data breaches over the last few years (thanks, Equifax!), it's safe to say that almost all of us have a fair amount of personal data floating around the dark web. If you're following good online security practices, a lot of the danger is mitigated because you changed your passwords, enabled multi-factor authentication and so on. But generative AI could present a new and serious threat in this area.
With so much data on someone available online, and for many, even a clip or two of their voice, it's increasingly easy to create an AI persona that sounds like a target person and has access to many of the facts used to verify identity.
Think about it like this. If you were having issues logging in, couldn't configure your authentication app right, or lost your phone, what would you do? Call customer service, probably, and they would "verify" your identity using some trivial facts like your date of birth, phone number or Social Security number. Even more advanced methods like "take a selfie" are becoming easier to game.
The customer service agent (for all we know, also an AI!) may very well oblige this fake you and accord it all the privileges you would have if you actually called in. What they can do from that position varies widely, but none of it is good!
As with the others on this list, the danger is not so much how realistic this fake you would be, but that it is easy for scammers to carry out this kind of attack widely and repeatedly. Not long ago, this type of impersonation attack was expensive and time-consuming, and as a consequence was limited to high-value targets like rich people and CEOs. Nowadays you could build a workflow that creates thousands of impersonation agents with minimal oversight, and these agents could autonomously phone up the customer service numbers at all of a person's known accounts, or even create new ones! Only a handful need to be successful to justify the cost of the attack.
How can you fight back against identity fraud?
Just as it was before the AIs came to bolster scammers' efforts, "Cybersecurity 101" is your best bet. Your data is out there already; you can't put the toothpaste back in the tube. But you can make sure that your accounts are adequately protected against the most obvious attacks.
Multi-factor authentication is easily the most important single step anyone can take here. Any kind of serious account activity goes straight to your phone, and suspicious logins or attempts to change passwords will appear in email. Don't neglect these warnings or mark them spam, even (especially!) if you're getting a lot.
AI-generated deepfakes and blackmail
Perhaps the scariest form of nascent AI scam is the possibility of blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this futuristic and terrifying prospect! People interested in certain aspects of cutting-edge image generation have created workflows not just for rendering naked bodies, but attaching them to any face they can get a picture of. I need not elaborate on how it is already being used.
But one unintended consequence is an extension of the scam commonly called "revenge porn," but more accurately described as nonconsensual distribution of intimate imagery (though like "deepfake," it may be difficult to replace the original term). When someone's private images are released either through hacking or a vengeful ex, they can be used as blackmail by a third party who threatens to publish them widely unless a sum is paid.
AI enhances this scam by making it so no actual intimate imagery need exist in the first place! Anybody's face can be added to an AI-generated body, and while the results aren't always convincing, it's probably enough to fool you or others if it's pixelated, low-resolution or otherwise partially obfuscated. And that's all that's needed to scare someone into paying to keep the images secret, though, like most blackmail scams, the first payment is unlikely to be the last.
How can you fight against AI-generated deepfakes?
Unfortunately, the world we are moving toward is one where fake nude images of almost anyone will be available on demand. It's scary and weird and gross, but sadly the cat is out of the bag here.
No one is happy with this situation except the bad guys. But there are a couple of things going for all of us potential victims. It may be cold comfort, but these images aren't really of you, and it doesn't take actual nude pictures to prove that. These image models may produce realistic bodies in some ways, but like other generative AI, they only know what they've been trained on. So the fake images will lack any distinguishing marks, for instance, and are likely to be obviously wrong in other ways.
And while the threat will likely never completely diminish, there is increasingly recourse for victims, who can legally compel image hosts to take down pictures, or ban scammers from sites where they post. As the problem grows, so too will the legal and private means of fighting it.
TechCrunch is not a lawyer! But if you are a victim of this, tell the police. It's not just a scam but harassment, and although you can't expect cops to do the kind of deep internet detective work needed to track someone down, these cases do sometimes get resolution, or the scammers are spooked by requests sent to their ISP or forum host.