Amazon's facial tech shows gender, racial bias, MIT study says

Amazon’s facial technology had a harder time recognizing the gender of darker-skinned women and made more mistakes identifying gender overall than competing technologies from Microsoft and IBM, according to an MIT study published Thursday.

Amazon’s Rekognition software incorrectly identified women as men 19 percent of the time, according to the study, and incorrectly identified darker-skinned women as men 31 percent of the time. Software from Microsoft, by comparison, identified darker-skinned women as men 1.5 percent of the time.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the study’s test results are based on facial analysis, not facial recognition. Analysis, he said, finds faces in videos or images and assigns generic attributes, such as whether a person is wearing glasses. Recognition matches an individual’s face to images in videos and photographs. Rekognition includes both capabilities.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case – including law enforcement – based on results obtained using facial analysis,” Wood said in a statement. 
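
Wood’s distinction corresponds to separate calls in the Rekognition API. As a rough sketch using the boto3 SDK (the region, credentials and image file names below are placeholders, not details from the study):

```python
import boto3

# Placeholder region; running this requires valid AWS credentials.
rekognition = boto3.client("rekognition", region_name="us-east-1")

def image_bytes(path):
    with open(path, "rb") as f:
        return f.read()

# Facial ANALYSIS: detect faces and estimate generic attributes
# (gender, eyeglasses, etc.) without identifying who the person is.
analysis = rekognition.detect_faces(
    Image={"Bytes": image_bytes("photo.jpg")},  # placeholder file
    Attributes=["ALL"],
)
for face in analysis["FaceDetails"]:
    print(face["Gender"], face["Eyeglasses"])

# Facial RECOGNITION: compare a face in one image against faces in
# another to decide whether the same individual appears in both.
matches = rekognition.compare_faces(
    SourceImage={"Bytes": image_bytes("person.jpg")},  # placeholder file
    TargetImage={"Bytes": image_bytes("crowd.jpg")},   # placeholder file
    SimilarityThreshold=90,
)
print(matches["FaceMatches"])
```

The MIT study’s gender-classification test exercises the first, analysis-style path, which is the basis of Wood’s objection.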

Wood added that the study didn’t use the latest version of Rekognition. Amazon, using an up-to-date version of Rekognition with similar data, found no false positive matches, Wood said.

Deborah Raji, an author of the study, said she and coauthor Joy Buolamwini understand the distinction between facial recognition and facial analysis. 

“We make it clear in our paper that the task we chose to evaluate is the facial analysis task of binary gender classification,” Raji said. “That means, given the number of faces detected, how well does the model understand what it sees?”
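
The study’s headline figures are per-group error rates on that classification task. A minimal sketch of how such a breakdown could be tallied (the records below are invented placeholders, not the study’s data):

```python
from collections import defaultdict

# Invented placeholder records: (true gender, predicted gender, skin-type group).
records = [
    ("female", "male", "darker"),
    ("female", "female", "darker"),
    ("male", "male", "lighter"),
    ("female", "female", "lighter"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for truth, predicted, group in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Error rate per demographic group, the kind of gap the study reports.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.1%} misclassified")
```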

In a Friday blog post, Buolamwini cautioned people to be skeptical when companies say they have completely accurate systems.

“Wood states the company used a large benchmark of over 1 million faces to test their facial recognition capabilities and performed well,” Buolamwini wrote. “While their performance on the benchmark might seem laudable, we do not know the detailed demographic or phenotypic (skin type) composition of this benchmark. Without this information we cannot assess for racial, gender, color, or other kinds of bias.”

Amazon has provided Rekognition to law enforcement agencies, though civil liberties groups, members of Congress and Amazon’s own employees have raised concerns about privacy. Earlier this month, a group of shareholders also called on Amazon to stop selling its Rekognition technology to government agencies. 

In light of the MIT study, Buolamwini said it’s “irresponsible” for Amazon to keep selling the technology to law enforcement agencies. Facial analysis technology can be abused and could lead to mass surveillance, she said. In addition, inaccuracies could result in innocent people being misidentified as criminals.

Raji echoed that sentiment. “If the system falsely identifies a suspect due to its reduced accuracy on a particular demographic,” she said, “that could be seriously harmful.”

First published Jan. 25 at 3:54 p.m. PT.
Update, 11:14 p.m.: Adds comment from Buolamwini and Raji.

source: cnet.com