The next wave of election meddling could involve artificial intelligence


In the 2020 election, you might not be able to believe your eyes or your ears due to advances in artificial intelligence that researchers warn could be used in the next wave of election meddling.

The rise of AI-enhanced software will allow people with little technical skill to easily produce audio and video that make it nearly impossible to distinguish between what is real and what isn’t, according to a report released Wednesday by researchers led by Oxford University and Cambridge University.

Titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” the report sounds the alarm about how artificial intelligence is becoming easier to use and could become a key tool in the arsenal of foreign operatives seeking to spread disinformation. It was written by 26 of the world’s leading artificial intelligence researchers.

“There is no obvious reason why the outputs of these systems could not become indistinguishable from genuine recordings, in the absence of specially designed authentication measures,” the authors warn. “Such systems would in turn open up new methods of spreading disinformation and impersonating others.”


While the industry celebrates the positive effects AI can have on the future, the researchers warned that equal consideration must be given to the dark side of AI. They hope the community will mobilize now to mitigate future detrimental effects of the technology.

Artificial intelligence will “set off a cat and mouse game between attackers and defenders, with the attackers seeming more human-like,” said Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute and one of the authors of the report.

The report was a joint project between a group of researchers and technologists including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, and OpenAI, a non-profit AI research company. Other contributors include the Electronic Frontier Foundation, a San Francisco-based digital rights group that advocates for privacy and an open internet, as well as the Center for a New American Security, a Washington think tank focused on national security.

AI-boosted trolls

Efforts by world governments and politically motivated hackers to infiltrate computer systems and manipulate online discourse have proved effective, but also labor-intensive.

The Internet Research Agency, the Russia-backed group named in Special Counsel Robert Mueller’s indictment last Friday, allegedly used identity theft, social media manipulation and virtual private networks to launch its influence campaigns in the United States.

But researchers said artificial intelligence makes launching a disinformation campaign even easier.

“Artificially intelligent systems don’t merely reach human levels of performance but significantly surpass it,” Brundage said.

AI doesn’t just make these attacks easier to execute. It also makes them easier to replicate, Brundage said, allowing the technology to work more efficiently than humans to identify targets and launch attacks.

Fake video on demand

Some of this technology is already publicly available and being used to create videos.

Deepfakes gained notoriety online earlier this month by allowing people with limited technical skill to create fantasy pornography videos. The videos are made using AI-enhanced software that can take any face, including that of a celebrity, a child or an ex-lover, and put it on the body of a person in a previously recorded video.

The videos have cropped up on pornography websites, with one popular destination, Pornhub, reportedly vowing to crack down on them, since they fall under the category of non-consensual content.

“There has been a night-and-day transition between a few years ago and now,” Brundage said, speaking of the advances. “It’s becoming easy to get copies of these systems. Deepfakes was a proof of concept posted on Reddit that was made easier and easier to use. Large amounts of people were able to download it.”

A program as easy to download and use as Deepfakes could also theoretically be put to other uses. With students from Parkland, Florida, already being pelted by trolls who claim they are crisis actors, AI technology could be used to spread false information about their identities through fake videos and audio, furthering a hurtful campaign of misinformation.

Technology to mimic people’s voices is already being commercialized, Brundage said. It takes just a small amount of training data to teach machines how to talk like someone. For President Donald Trump and other high-profile people, that training data is already out there, ready for anyone with nefarious intent to make the most of it.

Some politicians have taken notice. Sen. Mark Warner (D-Va.) recently spoke out about this kind of technology.

While combating the sea of disinformation remains a game of whack-a-mole, the report also warned of denial-of-information attacks. Instead of run-of-the-mill bots, which have telltale signs, these attacks will be fueled by artificially intelligent bots that can expertly elude detection. They will slam information channels with false information, making it difficult to cut through the clutter and find the truth.

While researchers, including Brundage, are sounding the alarm, they are also hopeful the AI community will take notice now and institute measures to keep AI from being exploited. That includes learning from the best practices of older fields whose tools can be used for good or ill, such as computer security.

“It’s one thing to say this could happen, another to prevent it and lessen the damage,” Brundage said. “We need better detection of fake multimedia, more research approaches to make systems less vulnerable to attack, and changes to some norms.”