AI is here to help, whether you’re drafting an email, making some concept art, or running a scam on vulnerable folks by making them think you’re a friend or relative in distress. AI is so versatile! But since some people would rather not be scammed, let’s talk a little about what to watch out for.

The last few years have seen a huge uptick not just in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same type of tool that helps a concept artist cook up some fantasy monsters or spaceships, or lets a non-native speaker improve their business English, can be put to malicious use as well.

Don’t expect the Terminator to knock on your door and sell you on a Ponzi scheme — these are the same old scams we’ve been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just a few of the most obvious tricks that AI can supercharge. We’ll be sure to add new ones as they appear in the wild, along with any additional steps you can take to protect yourself.

Voice cloning of family and friends

Synthetic voices have been around for decades, but it is only in the last year or two that advances in the tech have allowed a new voice to be generated from as little as a few seconds of audio. That means anyone whose voice has ever been broadcast publicly — for instance, in a news report, YouTube video or on social media — is vulnerable to having their voice cloned.

Scammers can and have used this tech to produce convincing fake versions of loved ones or friends. These can be made to say anything, of course, but in service of a scam, they are most likely to make a voice clip asking for help.

For instance, a parent might get a voicemail from an unknown number that sounds just like their son: his stuff got stolen while traveling, a stranger lent him a phone, and could Mom or Dad send some money to this address, Venmo recipient, business, etc.? One can easily imagine variants with car trouble (“they won’t release my car until someone pays them”), medical issues (“this treatment isn’t covered by insurance”), and so on.

This type of scam has already been carried out using a cloned version of President Biden’s voice! The perpetrators in that case were caught, but future scammers will be more careful.

How can you fight back against voice cloning?

First, don’t bother trying to spot a fake voice. They’re getting better every day, and there are lots of ways to disguise any quality issues. Even experts are fooled!

Anything coming from an unknown number, email address or account should automatically be considered suspicious. If someone says they’re your friend or loved one, go ahead and contact the person the way you normally would. They’ll probably tell you they’re fine and that it is (as you guessed) a scam.

Scammers tend not to follow up if they are ignored — while a family member probably will. It’s OK to leave a suspicious message on read while you consider.

Personalized phishing and spam via email and messaging

We all get spam now and then, but text-generating AI is making it possible to send mass email customized to each individual. With data breaches happening regularly, a lot of your personal data is out there.

It’s one thing to get one of those obviously low-effort “Click here to see your invoice!” scam emails with scary attachments. But with even a little context, these messages suddenly become quite believable, using recent locations, purchases and habits to make the sender seem like a real person, or the problem seem like a real one. Armed with a few personal facts, a language model can customize one of these generic emails for thousands of recipients in a matter of seconds.

So what once was “Dear Customer, please find your invoice attached” becomes something like “Hi Doris! I’m with Etsy’s promotions team. An item you were looking at recently is now 50% off! And shipping to your address in Bellingham is free if you use this link to claim the discount.” A simple example, but still. With a real name, shopping habit (easy to find out), general location (ditto) and so on, suddenly the message is a lot less obvious.

In the end, these are still just spam. But this kind of customized spam once had to be done by poorly paid people at content farms in foreign countries. Now it can be done at scale by an LLM with better prose skills than many professional writers!

How can you fight back against email spam?

As with traditional spam, vigilance is your best weapon. But don’t expect to be able to tell apart generated text from human-written text in the wild. There are few who can, and certainly not (despite the claims of some companies and services) another AI model.

Improved as the text may be, this type of scam still has the fundamental challenge of getting you to open sketchy attachments or links. As always, unless you are 100% sure of the authenticity and identity of the sender, don’t click or open anything. If you are even a little bit unsure — and this is a good sense to cultivate — don’t click, and if you have someone knowledgeable to forward it to for a second pair of eyes, do that.

‘Fake you’ identity and verification fraud

Due to the number of data breaches over the last few years (thanks, Equifax!), it’s safe to say that almost all of us have a fair amount of personal data floating around the dark web. If you’re following good online security practices, a lot of the danger is mitigated because you changed your passwords, enabled multi-factor authentication and so on. But generative AI could present a new and serious threat in this area.

With so much data about a person available online (and, for many, a clip or two of their voice), it’s increasingly easy to create an AI persona that sounds like the target and knows many of the facts used to verify their identity.

Think about it like this. If you were having issues logging in, couldn’t configure your authentication app right, or lost your phone, what would you do? Call customer service, probably — and they would “verify” your identity using some trivial facts like your date of birth, phone number or Social Security number. Even more advanced methods like “take a selfie” are becoming easier to game.

The customer service agent — for all we know, also an AI! — may very well oblige this fake you and accord it all the privileges you would have if you actually called in. What they can do from that position varies widely, but none of it is good!

As with the others on this list, the danger is not so much how realistic this fake you would be, but that it is easy for scammers to carry out this kind of attack widely and repeatedly. Not long ago, this type of impersonation attack was expensive and time-consuming, and as a consequence was limited to high-value targets like rich people and CEOs. Nowadays you could build a workflow that creates thousands of impersonation agents with minimal oversight, and these agents could autonomously phone up the customer service numbers at all of a person’s known accounts — or even create new ones! Only a handful need to be successful to justify the cost of the attack.

How can you fight back against identity fraud?

Just as it was before the AIs came to bolster scammers’ efforts, “Cybersecurity 101” is your best bet. Your data is out there already; you can’t put the toothpaste back in the tube. But you can make sure that your accounts are adequately protected against the most obvious attacks.

Multi-factor authentication is easily the most important single step anyone can take here. With it enabled, alerts about any serious account activity go straight to your phone, and suspicious logins or attempts to change passwords will appear in your email. Don’t ignore these warnings or mark them as spam, even (especially!) if you’re getting a lot.

AI-generated deepfakes and blackmail

Perhaps the scariest form of nascent AI scam is the possibility of blackmail using deepfake images of you or a loved one. You can thank the fast-moving world of open image models for this futuristic and terrifying prospect! People interested in certain aspects of cutting-edge image generation have created workflows not just for rendering naked bodies, but attaching them to any face they can get a picture of. I need not elaborate on how it is already being used.

But one unintended consequence is an extension of the scam commonly called “revenge porn,” but more accurately described as nonconsensual distribution of intimate imagery (though like “deepfake,” it may be difficult to replace the original term). When someone’s private images are released, whether through hacking or by a vengeful ex, they can be used for blackmail by a third party who threatens to publish them widely unless a sum is paid.

AI enhances this scam by making it so that no actual intimate imagery need exist in the first place! Anybody’s face can be added to an AI-generated body, and while the results aren’t always convincing, they’re probably enough to fool you or others if pixelated, low-resolution or otherwise partially obfuscated. And that’s all that’s needed to scare someone into paying to keep the images secret — though, like most blackmail scams, the first payment is unlikely to be the last.

How can you fight against AI-generated deepfakes?

Unfortunately, the world we are moving toward is one where fake nude images of almost anyone will be available on demand. It’s scary and weird and gross, but sadly the cat is out of the bag here.

No one is happy with this situation except the bad guys. But there are a couple of things going for all of us potential victims. It may be cold comfort, but these images aren’t really of you, and it doesn’t take actual nude pictures to prove that. These image models may produce realistic bodies in some ways, but like other generative AI, they only know what they’ve been trained on. So the fake images will lack any distinguishing marks, for instance, and are likely to be obviously wrong in other ways.

And while the threat will likely never disappear completely, victims have increasing recourse: they can legally compel image hosts to take down pictures, or get scammers banned from the sites where they post. As the problem grows, so too will the legal and private means of fighting it.

TechCrunch is not a lawyer! But if you are a victim of this, tell the police. It’s not just a scam but harassment, and although you can’t expect cops to do the kind of deep internet detective work needed to track someone down, these cases do sometimes get resolution, or the scammers are spooked by requests sent to their ISP or forum host.
