Fooling AI can now be done a thousand times faster

To human eyes, this is obviously a dog in the snow (Image: Meike Engels/Alamy)

Tricking artificial intelligence has never been easier. One way is to fool an image recognition AI into misclassifying a picture by subtly altering its pixels. Such doctored images, known as “adversarial examples”, can now be generated a thousand times faster than before.

Anish Athalye at the Massachusetts Institute of Technology and his colleagues have created an algorithm that produces a thousand adversarial examples to fool Google’s image recognition system – Google Cloud Vision. One was an image of a dog that was wrongly identified as a picture of two people skiing. In principle, each attack could be completed in several minutes, Athalye says.

The system works by taking an image and changing it, pixel by pixel, into a different one. Throughout the process, it pings Google Cloud Vision to see how it classifies the image, and makes sure that the label it is aiming for stays in place as the picture changes.
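As a rough illustration of that query-and-check loop, here is a minimal sketch. The `classify` callable is a hypothetical stand-in for calls to a black-box service such as Google Cloud Vision, and the simple pixel-blending step is an assumption for clarity; the article does not detail the MIT team's actual algorithm.

```python
# Minimal sketch of a query-based image-morphing loop (illustrative only).
# `classify` is a hypothetical stand-in for a black-box API such as
# Google Cloud Vision; this is not the researchers' actual method.
from typing import Callable

import numpy as np


def blend_attack(start: np.ndarray, target_look: np.ndarray, keep_label: str,
                 classify: Callable[[np.ndarray], str],
                 steps: int = 1000) -> np.ndarray:
    """Morph `start` (e.g. a skiing photo) toward `target_look` (e.g. a dog
    photo), keeping only the pixel changes the classifier still labels
    as `keep_label`."""
    current = start.astype(np.float32).copy()
    target = target_look.astype(np.float32)
    for _ in range(steps):
        # Nudge every pixel a small step toward the target appearance.
        candidate = current + (target - current) / steps
        # Query the black-box classifier; keep the change only if the
        # desired label survives.
        if classify(candidate) == keep_label:
            current = candidate
    return current


# Hypothetical usage: morph a skiing photo into a dog photo while the
# classifier keeps returning "skiers".
# adversarial = blend_attack(ski_photo, dog_photo, "skiers", call_cloud_vision)
```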