Why mice are being used in the fight against ‘deepfakes’
Scientists believe the rodents may be key to creating algorithms that detect doctored footage
The fight against so-called deepfakes has taken a strange twist as researchers turn to mice to help prevent the spread of misinformation.
Scientists at the University of Oregon’s Institute of Neuroscience believe that doctored video and audio clips can be identified by training mice to detect “irregularities within speech” - a task that “the animals can do with remarkable accuracy”, the BBC reports.
The team hope their findings will reveal key information about how deepfake material is constructed, helping the likes of Facebook and YouTube to block fake videos before they spread online.
Researcher Jonathan Saunders told the broadcaster that the ultimate goal of the study is to “take the lessons we learn” from the mice experiments and “implement that in the computer”.
“While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable, I don’t think that is practical for obvious reasons,” he added.
What are deepfakes?
Essentially, a deepfake is a type of doctored video in which artificial intelligence (AI) is used to superimpose the face and voice of a person onto the body of another.
The controversial practice first made headlines in 2017, when a Reddit user going by the name of “deepfakes” published pornographic videos featuring the likenesses of female celebrities including Taylor Swift and Katy Perry, reports Trusted Reviews.
The videos were shared on a subreddit of the same name, which amassed a community of creators who built “software tools” to make and share their own fake celebrity porn videos, says The Economist.
Reddit shut down the community, and pornographic websites banned deepfakes on the grounds that they were created without the consent of the celebrities edited into them. However, the technology has continued to spread, with fake footage featuring high-profile figures from a variety of fields.
Earlier this year, a doctored video emerged apparently showing Facebook founder Mark Zuckerberg “gleefully boasting about his ownership of user data”, says Digital Trends. Another video, created by US comedian Jordan Peele, depicted Barack Obama calling Donald Trump a “total and complete dipshit”.
And with the potential to spread even more damaging fake news using the technology, researchers are racing to find methods to identify deepfakes quickly and effectively.
Why are mice good at detecting them?
University of Oregon researcher Saunders told the BBC that his team “taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ - all these different fancy things that we take for granted”.
“And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech,” he continued.
The mice are rewarded each time they correctly identify a sound. They first learn with the same sounds every time, then with sounds from different speakers.
Alex Comerford, a data scientist at Bloomberg, told PC Mag that the researchers were able to directly track the brain activity of the rodents to see how they responded to the consonants - something that cannot be done with humans.
“[The mice] learn generalisable consonant categories,” Comerford said. “They’re about 75% accurate. Novel speakers and novel vowels drop their average, but only about 10%.”
The team believe that identifying how mice distinguish the difference between consonants may help develop computer algorithms that can spot deepfake footage.
“People are pretty good, but machines are getting better. The real way to solve this problem may lie in combining phonetics with neural networks,” Comerford added.
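The researchers have not described a specific implementation, but a minimal sketch of the idea Comerford outlines might look like the following: train a small neural network to categorise consonant sounds, then treat unusually low confidence on new audio as one possible signal that the speech is synthetic. The feature vectors, labels and uncertainty heuristic below are illustrative stand-ins, not the study's actual pipeline.

```python
# Hypothetical sketch of "phonetics + neural networks" for deepfake detection.
# The acoustic features and labels are synthetic placeholders; a real system
# would use features such as MFCCs extracted from labelled speech.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row plays the role of an acoustic feature vector for one
# consonant token, labelled as "buh"-like (0) or "guh"-like (1).
n_samples, n_features = 2000, 13
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy, linearly separable labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small neural network consonant classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"consonant classification accuracy: {clf.score(X_test, y_test):.2f}")

# Illustrative heuristic: if the classifier is consistently unsure about the
# consonants in a clip, flag the clip for closer inspection as possibly fake.
probs = clf.predict_proba(X_test)
uncertainty = 1 - probs.max(axis=1)
print(f"mean per-token uncertainty: {uncertainty.mean():.2f}")
```

In practice, a detector along these lines would need genuine labelled recordings of real and synthesised speech; the point of the sketch is simply that consonant-level categories, of the kind the mice learn, can serve as features for an automated classifier.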