Letting robots kill without human supervision could save lives

[Image: robot tank. Caption: Armed and dangerous. Credit: Mandel Ngan/AFP/Getty Images]

NEXT week, a meeting at UN headquarters in Geneva will discuss autonomous armed robots. Unlike existing military drones, which are controlled remotely, these new machines would identify and attack targets without human intervention. Groups including the Campaign to Stop Killer Robots hope the meeting will lead to an international ban.

But while fiction is littered with cautionary tales of what happens when you put guns in the cold, metallic hands of a machine, the situation may not be as simple as “human good, robots bad”.

To understand why, we should look at what people are saying about the ethics of driverless cars, which advocates see as a way of reducing accidents. If your life is safer in the hands of a robot car than a human driver, might the same apply to military hardware?

Clearly, replacing a human combatant with a robot one is safer for that individual, but armed robots could also reduce civilian casualties. For example, a squad that has to clear a building must make a split-second decision about whether the occupant of a room is an armed insurgent or an innocent civilian – any hesitation could get them killed. A robot can afford to wait for confirmation, such as the moment the enemy starts firing.

The same principle applies to air strikes. An autonomous system can make several runs over a target to confirm it is really an enemy outpost, whereas a pilot can risk only one pass. In both cases the downside of excessive caution is the loss of machines, not lives.

Human rights groups now see the use of precision-guided weapons as