
Nick Offerman makes for one creepy little girl in this disturbing Full House deepfake video.
Video screenshot by Bonnie Burton/CNET
Decades after Terminator’s Skynet first taught us to fear the apocalyptic potential of artificial intelligence, deepfakes represent a less deadly but very real threat from AI. Some researchers are now using a surprising and definitively analog tool to detect AI-manipulated audio: mice.
While faking audio and video has been around in some form for decades, machine learning has recently made it significantly easier to produce counterfeit speech that actually crosses the uncanny valley into the realm of believability.

Deepfake technology shows no signs of slowing down, so researchers are looking for the best tools to detect the fakes, including people, other artificial intelligence systems and, yes, rodents.
“We believe that mice are a promising model to study complex sound processing,” reads a white paper from a trio of researchers led by Jonathan Saunders from the University of Oregon Institute of Neuroscience. “Studying the computational mechanisms by which the mammalian auditory system detects fake audio could inform next-generation, generalizable algorithms for spoof detection.”
In other words, mice have an auditory system similar to humans', except they can't understand the words they hear. That lack of understanding could actually be an advantage for detecting fake speech, because mice can't be swayed to overlook the telltale signs of a fake while focusing on decoding the meaning of the words.
For example, a deepfake audio file might include a subtle mistake, like the sound of "b" where a "g" should be. Let's say some faked speech of a celebrity portrays them ordering a "hamburber." Humans might be inclined to pass over this red flag for fakery because we're trained to extract the meaning from sentences we hear while adjusting for verbal flubs, accents and other inconsistencies.
The team has succeeded in training mice to distinguish between the sounds of certain consonant pairs, which could be useful in detecting fake speech. The research was presented in a session at the Black Hat conference in Las Vegas on Aug. 7.
The mice correctly identified speech sounds at rates up to 80%. That’s actually lower than the 90% rate at which the researchers found humans were able to identify deepfakes.
But the idea isn’t to train an army of rodents to identify deepfakes. Instead, scientists hope to monitor the brain activity of mice as they discern between fakes and authentic speech to learn how the brain does it. Then the goal is to train new fake-detecting algorithms with the insights gleaned from the little animals.
That’s presuming the rodents don’t get wise and start creating their own despicable deepfakes first.
Originally published Aug. 12, 10:59 p.m. PT.
Update, Aug. 13 at 9:26 p.m.: Adds more information.