Black AI Workshop Becomes Latest Flashpoint in Tech’s Culture War

An attendee types on a keyboard during the MarketplaceLIVE Hackathon in New York, U.S., on Thursday, Sept. 22, 2016. Photographer: Bloomberg

An event designed to encourage greater participation by black researchers in artificial intelligence has become the latest flashpoint in the debate over diversity at the cutting edge of computer science, and over whether political correctness has gone too far.

In December, a group called Black in AI plans to host an afternoon workshop to highlight AI research by black computer scientists at the Conference on Neural Information Processing Systems, or NIPS, one of the top gatherings for scientists working on AI. While the organizers invited people of all races to attend the workshop, they said only black researchers would be allowed to present papers. As news of the event spread on social media, it sparked a backlash from some coders and academics who questioned why an event focusing solely on research by black scientists was necessary. The debate echoes the controversy in August that followed Google employee James Damore’s circulation of a manifesto that, among other points, accused the company of overzealously promoting diversity at the expense of technical ability in hiring and promotions.


Questions about diversity in the AI and machine-learning fields often focus on inclusion and employment, but the issue has even more far-reaching consequences. Machine-learning systems behave accurately and fairly only if the data used to train them is representative. The concern is that as these automated systems are used in increasingly critical decisions, such as who gets a bank loan or is granted parole, a lack of diversity among the scientists who build them will produce biased results.
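The effect of unrepresentative training data can be illustrated with a minimal, hypothetical sketch, not drawn from the article or any real system: a classifier is fit on synthetic data in which one group is heavily underrepresented and follows a different feature-label relationship, and it ends up markedly less accurate for that group. The groups, data, and scikit-learn model below are illustrative assumptions only.

```python
# Hypothetical sketch: skewed training data can yield uneven accuracy across groups.
# All data here is synthetic; group names and thresholds are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the feature-label relationship differs by group via `shift`.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(200, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
```

In this toy setup the model is noticeably less accurate on the underrepresented group, the same kind of disparity in error rates described in the parole example below.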

Already there have been a number of high-profile incidents in which AI programs exhibited racial bias. In 2016, a computer program designed to judge human beauty using supposedly objective features in photographs – like facial symmetry and wrinkles – was found to heavily favor people with light skin. That same year, after Microsoft Corp. released a Twitter chatbot called Tay, users easily goaded it into using racist language and expressing neo-Nazi views. An algorithm that was supposed to provide parole boards with objective advice on whether a prisoner was likely to commit another crime if released was found to incorrectly judge black defendants as “high risk” twice as often as white defendants. And there are concerns about hidden bias within big data sets – on everything from mortgage lending and insurance underwriting to the clinical trials of new drugs – that are increasingly being used to train machine-learning algorithms.

Timnit Gebru, who researches artificial intelligence for Microsoft and is one of the few black female computer scientists specializing in the field in the U.S., co-founded Black in AI. She has been vocal in trying to call attention to the problem of bias in the data that machine-learning algorithms use, and has said the lack of diversity in the field means that those building AI systems are less aware of these issues than they should be. Black in AI seeks to foster collaboration among researchers and increase the presence of black people in the field, according to its website.

The debate over the Black in AI workshop began shortly after Ian Goodfellow, a research scientist at Google Brain, the artificial intelligence lab that is part of Alphabet Inc., promoted it on his own Twitter feed on Oct. 11. “Why is this necessary? If the black people projected to attend wanted to present their work why can’t they do so at an event for all races?” a user with the Twitter handle @typeload wrote in response to Goodfellow’s post.

Others soon chimed in too. “This actually promotes segregation of AI (and society in general): blacks going to events for blacks, women – for women, e.t.c.,” tweeted Timofey Yarimov, a data scientist who works for SKB Kontur, a Russian firm that makes business and accounting software. “I don’t see this necessary, the benefits will be much less than the drawback,” tweeted Ahmed Adly, whose LinkedIn profile lists his title as “head of mobility” for a Qatar-based IT services firm called Malomatia.

Yarimov, when contacted by Bloomberg, declined to answer further questions about his view.

Adly said in an email to Bloomberg that he agreed there was discrimination in the field of AI against women and non-white men, but was unsure what the true intent of the Black in AI event was. “If the event is part of an effort to fight the discrimination that we know exists then that’s a good thing,” he wrote. But if it was designed “to use the hardship of a specific category just to attract attendees, then it’s not good” and would only undermine the cause of combating discrimination. He also said that while he led his company’s efforts to look at possible uses of AI, he was not planning to attend the NIPS conference this year.


The user with the handle @typeload later walked back from his initial inflammatory tweet, tweeting to Goodfellow, “Not trying to take that standpoint, simply asking how an event restricting speakers by race is better for the community than one that doesn’t.” When contacted through Twitter, the person using the @typeload handle declined to answer further questions.

The skeptical tweets in turn drew rebukes and condemnation from many others in the field. “Horrifying to see these reaction comments. Especially considering this community will be building there [sic] bias into the next gen of software,” tweeted Brannon Dorsey, a Chicago-based artist and programmer who uses software in his art. F. William High, a senior data scientist at Netflix, tweeted that those questioning the need for the workshop should go and ask “the nearest black AI researcher” for their views. “If you can’t find any, maybe that’s the problem the event wants to address,” he tweeted.

When contacted, some academic researchers who questioned the need for the workshop over Twitter declined to comment, saying they feared for their jobs after complaints of racism had been lodged with their universities in response to their tweets about Black in AI. Like Damore, these researchers blamed political correctness for impeding honest discussion about the lack of diversity in AI research.

Gebru, who works on issues of bias in AI data in Microsoft’s Fairness, Accountability, Transparency, and Ethics research group, and her four fellow Black in AI organizers declined to comment via email yesterday. On Twitter, Gebru thanked Goodfellow for “standing your ground and promoting our event + goals.”
