Rise of the deepfakes
New technology makes it alarmingly easy to make realistic videos of people saying and doing things they've never done. Here's everything you need to know:
What is this technology?
It's a sophisticated type of software that makes it possible to superimpose one person's face onto another's body and manipulate voice recordings, creating fake videos that look and sound real. Hollywood studios have long used computer-generated imagery (CGI) to, say, create fleeting appearances of dead actors. But the process used to be prohibitively expensive and laborious. Today, the technology has improved so much that highly realistic visual and audio fakery can be produced by anyone with a powerful home computer. This has already resulted in a cottage industry of fake celebrity porn. But fears are growing over how else "deepfake" videos could be used — from smearing politicians in elections to inciting major international conflict. Earlier this year, BuzzFeed created a "public service announcement" warning of the technology's dangers, with a deepfake of former President Barack Obama voiced by the comedian and director Jordan Peele. "We're entering an era," the fake Obama says, "in which our enemies can make it look like anyone is saying anything." To illustrate the point, the fake Obama goes on to call President Trump "a total and complete dips---."
Where did deepfakes originate?
In porn, of course. Last December, an anonymous Reddit user who calls himself "deepfakes" started posting fake but realistic-looking videos of celebrities engaged in explicit sex. By January, the "deepfake" technology had been shared through a free app, FakeApp, which has since been downloaded more than 120,000 times. FakeApp and its imitators sparked an explosion of fake pornography online, with Michelle Obama, Ivanka Trump, and Emma Watson among those most frequently victimized. But it's not all porn. The technology has also been used to create harmless spoof and parody videos — inserting Reddit cult figure Nicolas Cage into films in which he didn't appear, for example.
How do deepfakes work?
The creator gathers a trove of photos or videos of the target — so it helps if it's a famous person — along with the video to be doctored. The video maker then feeds the data into the app, which uses a form of artificial intelligence (AI) known as "deep learning" — hence deepfake — to combine the face in the source images with the chosen video. This process requires a powerful graphics processing unit and a great deal of memory. It's time-consuming — the Obama/Peele video took 56 hours to make — and the quality is variable. But the technology is improving fast. Tech expert Antonio García Martínez, writing for Wired, says we'll soon be able to superimpose anyone's face onto "anyone else's, creating uncannily authentic videos of just about anything."
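For the technically curious, here is a minimal sketch in Python (using Keras) of the shared-encoder, twin-decoder autoencoder idea that face-swap apps are built on: one encoder learns a compressed description of a face, and each person gets their own decoder; swapping decoders swaps faces. The layer sizes, the 64-pixel face crops, the load of random stand-in data, and the model names are illustrative assumptions, not FakeApp's actual code.

```python
# Minimal sketch of the shared-encoder / twin-decoder face-swap idea.
# Shapes, layer sizes, and the stand-in data are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed size of the aligned face crops

def build_encoder():
    inp = layers.Input(shape=(IMG, IMG, 3))
    x = layers.Conv2D(32, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)  # shared compressed "face code"
    return Model(inp, z, name="encoder")

def build_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 64, activation="relu")(z)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # learns to redraw person A
decoder_b = build_decoder("decoder_person_b")  # learns to redraw person B

inp = layers.Input(shape=(IMG, IMG, 3))
auto_a = Model(inp, decoder_a(encoder(inp)))
auto_b = Model(inp, decoder_b(encoder(inp)))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# Stand-in data: in reality these would be thousands of aligned face crops.
faces_a = np.random.rand(32, IMG, IMG, 3).astype("float32")
faces_b = np.random.rand(32, IMG, IMG, 3).astype("float32")

# Each autoencoder learns to reconstruct its own person; the encoder is shared.
auto_a.fit(faces_a, faces_a, epochs=1, verbose=0)
auto_b.fit(faces_b, faces_b, epochs=1, verbose=0)

# The swap: encode a frame of person A, then decode it with person B's decoder.
swapped = decoder_b.predict(encoder.predict(faces_a[:1]))
```

Because the encoder is shared between the two people, it learns pose and expression in a way that either decoder can redraw, which is why the swapped face keeps the original performance.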
How are voices faked?
The principle is the same: You feed lots of recordings of the person you want to fake into an AI program, which chops the speech up into discrete sounds and words; software can then rearrange those pieces so the subject appears to say anything you like. A team of sound engineers recently used deep-learning software to analyze 831 of John F. Kennedy's speeches, and then created a convincing approximation of the 35th president reading the speech he was due to deliver on the day he was assassinated. Researchers at the University of Washington last year synthesized realistic videos of Barack Obama speaking by mapping audio from one speech onto an existing video of him talking.
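A minimal sketch of that chop-and-rearrange idea, in Python: the word-level clips and the stand-in audio below are illustrative assumptions; real systems cut much finer (down to individual phonemes) and use neural models to smooth the joins.

```python
# Sketch of concatenative voice fakery: stitch stored clips of a speaker into a
# sentence they never said. Clips here are random noise stand-ins for real audio.
import numpy as np
import soundfile as sf

SAMPLE_RATE = 16000

# Assumed: short clips of the target speaker, already segmented and labeled.
clips = {
    "we": np.random.randn(SAMPLE_RATE // 4).astype("float32") * 0.1,
    "choose": np.random.randn(SAMPLE_RATE // 2).astype("float32") * 0.1,
    "the": np.random.randn(SAMPLE_RATE // 8).astype("float32") * 0.1,
    "moon": np.random.randn(SAMPLE_RATE // 2).astype("float32") * 0.1,
}

def synthesize(words, crossfade_ms=20):
    """Concatenate the stored clip for each word, crossfading at the joins."""
    fade = int(SAMPLE_RATE * crossfade_ms / 1000)
    out = clips[words[0]].copy()
    for w in words[1:]:
        nxt = clips[w].copy()
        ramp = np.linspace(0.0, 1.0, fade, dtype="float32")
        # Linear crossfade so the splices are less audible.
        out[-fade:] = out[-fade:] * (1 - ramp) + nxt[:fade] * ramp
        out = np.concatenate([out, nxt[fade:]])
    return out

# A sentence assembled from pieces the speaker actually recorded.
audio = synthesize(["we", "choose", "the", "moon"])
sf.write("fake_sentence.wav", audio, SAMPLE_RATE)
```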
How much trouble can this cause?
Potentially, a lot. On deepfake forums, there are frequent requests for help in producing face-swap porn videos of ex-girlfriends, classmates, and teachers. In the public sphere, the technology could be even more toxic. Fake videos could show soldiers committing atrocities, or world leaders declaring war on another country — triggering an actual military response. Deepfakes could be used to damage the reputation of a politician, or a political party, or an entire country. And if fake videos become commonplace, people may start assuming real videos are fake, too. That skepticism could be corrosive. "It'll only take a couple of big hoaxes," says Justin Hendrix, executive director of NYC Media Lab, "to really convince the public that nothing's real."
Can deepfakes be stopped?
To reduce the potential dangers of deepfakes, videos can be equipped with a unique digital key that proves their origin, or with metadata showing where and when they were captured. Artificial intelligence can be trained to recognize deepfakes and remove them from websites. Deepfakes have already been banned from many porn sites, as well as from Twitter. Ultimately, though, the genie is out of the bottle. FakeApp's creator, "deepfakeapp," another Reddit user, told Vice News he wanted to give "everyday people" the opportunity to use technology previously limited to "big-budget SFX companies." Most tech experts say people will simply have to adapt to this new normal, by recalibrating their trust in the once unimpeachable medium of video. Soon, we won't be able to trust our own eyes.
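As a rough illustration of the "digital key" idea, here is a Python sketch that fingerprints a video file at capture time so any later tampering can be detected. The file name and shared secret are assumptions; real provenance schemes rely on public-key signatures and signed metadata rather than a secret shared between camera and verifier.

```python
# Sketch of keyed video fingerprinting using only the standard library.
import hashlib
import hmac

SECRET_KEY = b"camera-embedded-secret"  # stands in for a device signing key

def fingerprint(path: str) -> str:
    """Return a keyed SHA-256 fingerprint of the file's contents."""
    mac = hmac.new(SECRET_KEY, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            mac.update(chunk)
    return mac.hexdigest()

def is_authentic(path: str, recorded_fingerprint: str) -> bool:
    """True only if the file's bytes still match the fingerprint made at capture."""
    return hmac.compare_digest(fingerprint(path), recorded_fingerprint)

# Usage (assumed file name): record the fingerprint when the clip is captured,
# then check it before trusting or republishing the footage.
# original = fingerprint("clip_from_camera.mp4")
# print(is_authentic("clip_from_camera.mp4", original))  # True if untouched
```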
How artificial intelligence works
At the core of the deepfakes code is a "deep neural network" — a computing system vaguely modeled on the biological neural networks that make up human brains. Such systems "learn," or progressively improve their performance, by taking in and analyzing vast amounts of data, acquainting themselves with the information via trial and error, and adjusting to feedback about what's wrong and right. Like a brain, AI networks reprogram themselves by reacting to patterns in incoming data, rather than relying on fixed rules. FakeApp uses a suite of neural networking tools that were developed by Google's AI division and released to the public in 2015. The software teaches itself to perform image-recognition tasks through trial and error. First, FakeApp trains itself, using "training data" in the form of photos and videos. Then it stitches the face onto another head in a video clip, accurately preserving the facial expression in the original video. These technologies have been developed by online communities, where developers are often happy to share techniques — further accelerating the pace of progress.
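To make that "adjusting to feedback" loop concrete, here is a toy sketch in plain Python: a tiny one-layer network guesses, measures how wrong it was, and nudges its weights to do better next time. Everything in it is invented for illustration; real deepfake networks apply the same predict-measure-adjust loop to millions of parameters and real images.

```python
# Toy version of the learn-from-feedback loop described above.
import numpy as np

rng = np.random.default_rng(0)

# Made-up "training data": 4-pixel images, labeled 1 when the left half is brighter.
X = rng.random((200, 4))
y = (X[:, :2].mean(axis=1) > X[:, 2:].mean(axis=1)).astype(float)

w = np.zeros(4)  # the network's adjustable "knowledge"
b = 0.0
lr = 0.5         # how strongly each correction nudges the weights

for step in range(2000):
    guess = 1 / (1 + np.exp(-(X @ w + b)))  # the network's current guesses
    error = guess - y                        # feedback: how wrong was each guess?
    w -= lr * (X.T @ error) / len(X)         # adjust the weights toward less error
    b -= lr * error.mean()

final_guess = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = ((final_guess > 0.5) == y).mean()
print(f"After training, the network labels {accuracy:.0%} of the examples correctly.")
```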