How To Protect Against AI-Generated Fraud

THE LEDE

Most people are familiar with the “kidnapped friend” email scam. In one version, an email purporting to be from your child says they have been kidnapped and that you need to transfer money to their captors to ensure their release. In another, an old friend writes that they are stuck in a foreign country without cash and need some money. These cons are also run over the phone, where they’re easier to spot: you know your kid’s voice. You can tell, pretty quickly, that it’s not your child or friend.

That is changing. AI is generating more and more realistic pictures and video, and it’s only a matter of time before we see realistic, AI-generated content showing relatives behind bars.

AI-generated image by DALL-E of "Man being held hostage."

Voice-cloning software has become so advanced that it can duplicate someone’s voice from just a few samples. If a talk of yours has been posted online, that’s more than enough audio for the software to create an AI duplicate of your voice. And if a scammer calls and you engage, a few moments of your speech is all they need to clone it.

AI software will soon be able to show convincing hostage videos that look and sound like your friend or relative in distress.

Keeping that in mind, I have a suggestion. Have a family “challenge word.” Come up with a word everyone in your family can remember that is neither typically used in conversation nor so obscure as to give away that it’s a code. 

How this would work in practice:

  1. You get a call from your son. It sounds authentic. He is asking for money. Simply say: “Challenge?” A scammer who can’t answer will hang up, and that one word gives them no substantial recording of your voice to clone.
  2. Your granddaughter has actually been kidnapped. She is told to make a statement. She works in the challenge word to help you (and the authorities) know it’s really her.
  3. You get an email purporting to be from a family member in distress. All they need to do is use the challenge word, perhaps even as part of a sentence.

Why this works:

It’s simple. Even if someone is kidnapped, the bad guys aren’t going to care if you say “challenge” because the correct answer helps their cause.
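
For readers who like to think in code, the challenge word is simply a shared secret, and the test is whether the other party can produce it. Here’s a minimal Python sketch of that idea; the word “persimmon” and the function name are made up for illustration:

```python
# A minimal sketch of the "challenge word" idea, expressed as code.
# The word below is a made-up example; agree on your real word in
# person and never store it anywhere digital.
FAMILY_CHALLENGE_WORD = "persimmon"  # hypothetical shared secret

def caller_is_family(reply: str) -> bool:
    """True only if the reply contains the shared challenge word."""
    return FAMILY_CHALLENGE_WORD in reply.lower()

# You say "Challenge?" A real relative can answer; a cloned voice can't.
print(caller_is_family("It's persimmon, Dad. Please hurry."))      # True
print(caller_is_family("What do you mean? Just send the money!"))  # False
```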

The most important tip to avoid newly evolving scams is to stay on top of the technology. Understand that scammers will use the most sophisticated tools they can find.


NEWS AND NOTES

ONLINE PHOTOGRAPHY COURSE’S QUESTIONABLE PRICING: The iPhone Photography School has a course that provides a number of good tips for taking better pictures with your iPhone. According to the company website, the classes are on streaming video, some 62 in all. The company is real, and its tips are sound, particularly for less-experienced shutterbugs. However, it perpetually promotes the course at an “80% discount” off the regular price.

If you go to the site to sign up, you’ll see a countdown clock ticking off the hours left until the offer expires. However, the clock regularly resets.

Screengrab from iPhone Photography School website

I contacted the company, asking about its advertising strategy. Its reply?

Scarcity tactics, such as limited-time offers, are indeed commonly used in marketing to create a sense of urgency. However, we understand that it’s essential to maintain transparency and avoid misleading our customers. Rest assured that we take your concerns seriously, and we always strive to provide accurate and clear information to our customers. I will personally convey your comments to our marketing team so that they can carefully evaluate our messaging and consider improvements to ensure our practices align with ethical standards and regulations.

It’s true that we see “scarcity tactics” regularly, especially online. However, this isn’t a scarcity tactic; it’s urgency marketing. “Scarcity tactics” are about numbers, like when you see “ONLY TWO LEFT!” Telling people they have a limited time left is an urgency tactic. And in this case, the “urgency” is misleading.

FACEBOOK RUNS ADS FOR COUNTERFEIT MERCHANDISE: Facebook’s terms of service prohibit advertisements for counterfeit products. Yet I regularly see these ads in my feed:

Ad for a "replica" watch from Facebook

There is no way, as near as I can tell, to report an ad for selling “replica watches,” counterfeit sports jerseys, etc. The few times I have reported the ads as TOS violations, Facebook has either not responded or has said the ads are fine. If any offer catches your eye, a few tips:

1. Check out the website the ad links to. These scam sites tend to have suspicious URLs, like wwyhj.top or similar. Some will even try to fool you with real-sounding names.

2. Look up when the site was registered. Most of these fakes are hit-and-run: they will build a legitimate-looking website, have it up for a couple of weeks, then drop it. Go to a WHOIS lookup service and enter the “store’s” URL. I looked up a site purporting to sell “replica watches,” and it was registered just days ago. Big. Red. Flag. (For the technically inclined, a short code sketch of this check follows the list.)

3. Search for reviews of the site. You probably won’t find many, which should also give you pause.
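
Here is a minimal Python sketch of the registration-age check in tip 2. It assumes the third-party python-whois package, which isn’t mentioned above; a WHOIS lookup website gives you the same answer with no code at all.

```python
# Minimal sketch of tip 2: flag web stores whose domains are brand new.
# Assumes the third-party "python-whois" package (pip install python-whois).
from datetime import datetime

import whois  # provided by the python-whois package

def domain_age_days(domain: str) -> int:
    """Days since the domain was registered, per its WHOIS record."""
    created = whois.whois(domain).creation_date
    if isinstance(created, list):   # some registrars return several dates
        created = created[0]
    if created.tzinfo is not None:  # normalize to a naive datetime
        created = created.replace(tzinfo=None)
    return (datetime.now() - created).days

age = domain_age_days("example.com")
if age < 90:
    print(f"Registered only {age} days ago. Big. Red. Flag.")
else:
    print(f"Registered {age} days ago.")
```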

I’m not going to make a speech here about the ethics and legality of buying fake Rolexes and Louis Vuitton handbags. (But note: if you get caught selling them, you could face fines or even jail time.) I just want you to be aware of the scams out there.

AI EXPERTS WARN OF POTENTIAL “SOCIETAL-SCALE RISKS”: Did you know there is a Center for AI Safety? It’s a nonprofit that monitors the use of AI, and it has authored a one-sentence statement warning that AI poses a major risk:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

That’s a pretty loud alarm. But this is not the product of a few hysterics. It’s signed by more than 350 professionals working in AI, including leaders from Google and OpenAI. Writes the New York Times:

Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

It’s hopeful to think that AI will be used for the advancement of knowledge, but it’s unlikely that legislation (as some are calling for) can rein in its use. Even if Congress banned AI outright (which nobody is suggesting), the technology is worldwide. Curbing its use here won’t stop bad actors in other countries from using it to deceive.

REMOTE NOTES

Newsletter #48/LinkedIn Edition

Founder/Writer: Steve Safran

Editor: John Cockrell

Copyright 2023

-30-
