Why mice are being used in the fight against ‘deepfakes’
Scientists believe the rodents may be key to creating algorithms that detect doctored footage

The fight against so-called deepfakes has taken a strange twist as researchers turn to mice to help prevent the spread of misinformation.
Scientists at the University of Oregon’s Institute of Neuroscience believe that doctored video and audio clips can be identified by training mice to detect “irregularities within speech” - a task that “the animals can do with remarkable accuracy”, the BBC reports.
The team hope their findings will provide key information about how deepfake material is constructed, in order to help the likes of Facebook and YouTube block fake videos before they are spread online.
Researcher Jonathan Saunders told the broadcaster that the ultimate goal of the study is to “take the lessons we learn” from the mice experiments and “implement that in the computer”.
“While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable, I don’t think that is practical for obvious reasons,” he added.
What are deepfakes?
Essentially, a deepfake is a type of doctored video in which artificial intelligence (AI) is used to superimpose the face and voice of a person onto the body of another.
The controversial practice first made headlines in 2017, when a Reddit user going by the name of “deepfakes” published pornographic videos featuring the likenesses of female celebrities including Taylor Swift and Katy Perry, reports Trusted Reviews.
The videos were shared on a subreddit of the same name, which amassed a community of creators who built “software tools” to make and share their own fake celebrity porn videos, says The Economist.
Reddit shut down the community, and pornographic websites banned deepfakes on the grounds that they were created without the consent of the celebrities edited into them. However, the technology has continued to spread, with fake footage featuring high-profile figures from a variety of different fields.
Earlier this year, a doctored video emerged apparently showing Facebook founder Mark Zuckerberg “gleefully boasting about his ownership of user data”, says Digital Trends. Another video, created by US comedian Jordan Peele, depicted Barack Obama calling Donald Trump a “total and complete dipshit”.
And with the potential to spread even more damaging fake news using the technology, researchers are racing to find methods to identify deepfakes quickly and effectively.
Why are mice good at detecting them?
University of Oregon researcher Saunders told the BBC that his team “taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ - all these different fancy things that we take for granted”.
“And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech,” he continued.
The mice are given a reward every time they correctly identify a sound. They learn first with the same sounds every time, and then with sounds from different speakers.
Alex Comerford, a data scientist at Bloomberg, told PC Mag that the researchers were able to directly track the brain activity of the rodents to see how they responded to the consonants - something that cannot be done with humans.
“[The mice] learn generalisable consonant categories,” Comerford said. “They’re about 75% accurate. Novel speakers and novel vowels drop their average, but only about 10%.”
The team believe that identifying how mice distinguish between consonants may help develop computer algorithms that can spot deepfake footage.
“People are pretty good, but machines are getting better. The real way to solve this problem may lie in combining phonetics with neural networks,” Comerford added.
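As a purely illustrative sketch, and not the Oregon team's actual pipeline, the "phonetics plus neural networks" idea might look something like the toy example below: invented stand-in features for real and synthesised phonemes are fed to a small neural network classifier, which learns to tell the two apart. All of the data and feature choices here are made up for illustration; a real system would first segment recorded speech into phonemes and extract acoustic features from the audio.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy example only: stand-in feature vectors, not real audio. A real system
# would segment speech into phonemes and extract acoustic features first.
rng = np.random.default_rng(0)
n_per_class, n_features = 1000, 20
real = rng.normal(0.0, 1.0, size=(n_per_class, n_features))   # "genuine" phoneme features
fake = rng.normal(0.3, 1.2, size=(n_per_class, n_features))   # subtly different statistics
X = np.vstack([real, fake])
y = np.array([0] * n_per_class + [1] * n_per_class)           # 0 = real, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Small feed-forward network standing in for the "phonetics + neural networks" idea.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")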