Digital consent: Law targets deepfake and revenge porn
The Senate has passed a bill that would make it a crime to share explicit AI-generated images of minors and adults without their consent.

"The first major U.S. law tackling AI-induced harm" was born out of the suffering of teenage girls, said Andrew R. Chow in Time. "In October 2023, 14-year-old Elliston Berry of Texas and 15-year-old Francesca Mani of New Jersey each learned that classmates had used AI software to fabricate nude images of them and female classmates." The software could create virtually any image with the click of a button, and it was being weaponized against women and girls. When Berry and Mani sought to remove the so-called deepfake images, "both social media platforms and their school boards reacted with silence or indifference." Now a bill is headed to President Trump's desk that will criminalize the nonconsensual sharing of sexually explicit images—both real and computer generated, of minors or adults—and require that platforms remove such material within 48 hours of getting notice. The Take It Down Act, backed by first lady Melania Trump, passed the Senate unanimously and the House 409-2.
"No one expects to be the victim of an obscene deepfake image," said Annie Chestnut Tutor in The Hill. But the odds of it happening are only increasing. Pornographic deepfake videos are "plastered all over the internet," often to "extort teenagers." With AI, "anyone with access to the internet can turn an innocent photo" into life-shattering pornography, said Kayla Bartsch in National Review. This poses a grave danger to young women. While AI deepfakes have gotten a lot of attention for the role they can play in elections, the risk this technology poses to kids has been "largely neglected." But 15 percent of kids "say they know of a nonconsensual intimate image depicting a peer in their school."
The intentions behind this bill may be good, but be wary of the rush to ban AI fakes, said Elizabeth Nolan Brown in Reason. Nobody is defending "revenge porn." But the 48-hour clock to take down images will "incentivize greater monitoring of speech" and could be gamed the same way that copyright law gets abused now to force quick takedowns of material, like parody, that doesn't really violate the law.
Technology is creating new ways to sexualize kids even as nonconsensual AI-generated porn gets banned, said Jeff Horwitz in The Wall Street Journal. You can add a whole new category of fakes to watch out for: "digital personas" generated by AI. Meta now lets you interact with fake celebrities or invent AI-powered personas of your own that are happy to engage in "romantic role-play." Users have quickly used the tools to create "sexualized youth-focused personas like 'Submissive Schoolgirl'" that will play out fantasies or even suggest a meeting. The digital characters might be performed by AI, but the ways people interact with them will seep into the real world and affect real kids.