
US FTC Proposes Penalties for Deepfake Impersonators

FTC Says It Should Be Able to Sue Providers Who Know Their Tech Is Used for Fraud
The Man Controlling Trade statue in front of the Federal Trade Commission building in Washington, D.C. (Image: Shutterstock)

The U.S. Federal Trade Commission said it's too easy for fraudsters to launch "child in trouble" and romance scams, so it has proposed rule-making that would give the agency new authority to sue providers that facilitate impersonation fraud in federal court.

Scammers fool victims by pretending to be a family member in an emergency or a romantic partner who needs cash - cons that artificially generated voices could turbocharge. A Philadelphia father told a Senate panel in November that he had been ready to give $9,000 to fraudsters after receiving a phone call from someone who sounded like his son, "upset and crying," saying his nose had been broken and he was in jail.


It wasn't until the father, an attorney named Gary Schildhorn, was on his way to transfer money supposedly for a jail bond that the real son called - not crying, not arrested and nose intact.

"Cryptocurrency and AI have provided a riskless avenue for fraudsters to take advantage of all of us. They have no risk exposure," Schildhorn said.

The FTC said it needs new authority to counter "surging complaints" about individual impersonation fraud, as well as public outcry about the harms. The federal government knows of more than 150,000 cases of family and friend impersonation that have caused losses of about $339 million since 2019.

The agency on Thursday initiated a rule-making that would create a new "trade regulation rule" - a regulation that allows it to sue perpetrators directly in court. On the same day, the FTC finalized a rule allowing it to sue fraudsters who impersonate government agencies or businesses.

"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever," said Lina Khan, FTC chair, in a statement.

One question that may spark controversy is how wide a liability net the FTC could cast. The agency said the rule should apply to anyone who provides the "means and instrumentalities" to conduct unlawful impersonation so long as the provider knew - or had reason to know - that they would be used in illegal impersonations.

"A long line of case law describes a form of direct liability for a party who, despite not having direct contact with the injured consumers, 'passes on a false or misleading representation with knowledge or reason to expect that consumers may possibly be deceived as a result,'" the FTC said.

In comments submitted to the agency, the Internet & Television Association - known as NCTA - urged the FTC to restrict liability to providers who have "actual knowledge" that their services will be used for impersonation fraud.

"Taking the proposed rule on its face, a broadband provider could be liable simply for providing internet service to a customer, without any knowledge that the customer is using the service to perpetrate impersonation fraud," the industry association said.

The Consumer Technology Association said it supports liability for third parties such as those who design imposter websites. Its proposal to limit liability would restrict federal lawsuits to service providers who have knowledge or who "consciously avoid knowing" about impersonation fraud.

The FTC is not the only U.S. government agency cracking down on the fraudulent use of AI technologies. Last week the Federal Communications Commission banned unsolicited robocalls that use voices generated by AI (see: Breach Roundup: U.S. Bans AI Robocalls).

That move came amid concerns that AI could be used to disseminate misinformation about the election. A robocall featuring a deepfake of President Joe Biden urging voters in New Hampshire to stay home on primary day caused controversy in January (see: AI Disinformation Likely a Daily Threat This Election Year).


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience and has focused on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek, the magazine and news site, where she played a lead role in the launch of InformationWeek's healthcare IT media site.

David Perera

Editorial Director, News, ISMG

Perera is editorial director for news at Information Security Media Group. He previously covered privacy and data security for outlets including MLex and Politico.



