
Chatbots, deepfakes, and voice clones: AI deception for sale

By Michael Atleson, Attorney, FTC Division of Advertising Practices | March 20, 2023

You may have heard of simulation theory, the notion that nothing is real and we’re all part of a giant computer program. Let’s assume at least for the length of this blog post that this notion is untrue. Nonetheless, we may be heading for a future in which a substantial portion of what we see, hear, and read is a computer-generated simulation. We always keep it real here at the FTC, but what happens when none of us can tell real from fake?

In a recent blog post, we discussed how the term “AI” can be used as a deceptive selling point for new products and services. Let’s call that the fake AI problem. Today’s topic is the use of AI behind the screen to create or spread deception. Let’s call this the AI fake problem. The latter is a deeper, emerging threat that companies across the digital ecosystem need to address. Now.

Digital Ecosystem or Toxic Swamp?

Most of us spend lots of time looking at things on a device. Thanks to AI tools that create “synthetic media” or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference. And just as these AI tools are becoming more advanced, they’re also becoming easier to access and use. Some of these tools may have beneficial uses, but scammers can also use them to cause widespread harm.

Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

The FTC Act Prohibits Deceptive or Unfair Conduct

The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose. So consider:

Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all. It’s become a meme, but here we’ll paraphrase Dr. Ian Malcolm, the Jeff Goldblum character in “Jurassic Park,” who admonished executives for being so preoccupied with whether they could build something that they didn’t stop to think if they should.

Are you effectively mitigating the risks? If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features, not bug corrections or optional features that third parties can undermine via modification or removal. If your tool is intended to help people, also ask yourself whether it really needs to emulate humans or whether it can be just as effective looking, speaking, or acting like a bot.

Are you over-relying on post-release detection? Researchers continue to improve on detection methods for AI-generated videos, images, and audio. Recognizing AI-generated text is more difficult. But these researchers are in an arms race with companies developing the generative AI tools, and the fraudsters using these tools will often have moved on by the time someone detects their fake content. The burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them.

Are you misleading people about what they’re seeing, hearing, or reading? If you’re an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We’ve previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result – and in fact have resulted – in FTC enforcement actions.

While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.
