AI is becoming RACIST as robots mimic society, experts warn

Machine learning can pick up racist and sexist biases when it learns from humans, and will typically favour white men, research has shown.

Machine-learning algorithms, which mimic the actions of humans and society, can develop an unfair bias against women and ethnic minorities.

Noel Sharkey, Co-Director of the Foundation for Responsible Robotics, told the BBC’s Today programme: “This is beginning to come up a lot in areas like shortlisting people for jobs, insurance, loans – all those things.”

Mr Sharkey highlighted an example from Boston University, where scientists trained an AI system on Google News articles.

The machine was then set a word association task, in which it was asked: “man is to computer programmer as woman is to x”.

The computer responded with “homemaker”.
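
The result comes from the way such systems represent words as vectors learned from the news text: an analogy is answered by simple vector arithmetic. Below is a minimal sketch of that kind of query, assuming the publicly available pretrained Google News word2vec vectors and the gensim library; the file name and exact vocabulary tokens are illustrative assumptions, not details given in the research described above.

```python
# Minimal sketch of a word-analogy query against pretrained Google News
# vectors, using gensim. File name and tokens are assumptions.
from gensim.models import KeyedVectors

# Load the pretrained 300-dimensional Google News word2vec vectors.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ?" is answered by
# vector arithmetic: computer_programmer - man + woman.
result = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, similarity in result:
    print(word, round(similarity, 3))
```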

Health data expert Maxine Mackintosh said the problem lies with society, not with the robots.

She said: “These big data are really a social mirror – they reflect the biases and inequalities we have in society.

“If you want to take steps towards changing that you can’t just use historical information.”

This is not the first time that AI has been branded as racist.

An investigative report recently found that software used in the US to assess suspects’ risk of reoffending is biased against black people.

In parts of the US, when a suspect is taken in for questioning they are given a computerised risk assessment which works out the likelihood of the person reoffending.

A judge can then use this data when making his or her decision.

Reporters from ProPublica obtained the risk scores of more than 7,000 people arrested in Florida in 2013 and 2014 and analysed how many of them went on to reoffend.

The suspects are asked a total of 137 questions by the AI system – Correctional Offender Management Profiling for Alternative Sanctions (Compas) – including questions such as “Was one of your parents ever sent to jail or prison?” or “How many of your friends/acquaintances are taking drugs illegally?”, with the computer generating its results at the end.

Overall, ProPublica found that black defendants who did not go on to reoffend were almost twice as likely as white defendants (45 per cent versus 24 per cent) to have been wrongly labelled as higher risk.
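
That comparison can be reproduced in outline from a table of risk labels and two-year outcomes. The snippet below is a minimal sketch, not ProPublica’s published analysis: for each group, it takes the people who did not reoffend and asks what fraction of them had nevertheless been labelled higher risk. The column names and toy data are assumptions for illustration.

```python
# Sketch: compare the rate at which each group is labelled high risk
# despite not reoffending. Column names and data are illustrative only.
import pandas as pd

scores = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [True,    True,    False,   True,    False,   False],
    "reoffended": [False,   True,    False,   False,   False,   True],
})

# Restrict to people who did NOT reoffend, then compute the share of
# each group that had been labelled high risk (the false positive rate).
did_not_reoffend = scores[~scores["reoffended"]]
fpr_by_race = did_not_reoffend.groupby("race")["high_risk"].mean()
print(fpr_by_race)
```

ProPublica published the underlying data and its full analysis alongside the report.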

In one example outlined by ProPublica, risk scores were given to a black suspect and a white suspect, both of whom had been arrested on drug possession charges.

The white suspect had a prior offence of attempted burglary, while the black suspect’s prior offence was resisting arrest.

With no clear indication as to why, the system rated the black suspect as more likely to reoffend, while the white suspect was considered ‘low risk’.

But, over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.