AI armageddon: Age of killer robots is closer than you think

The industrial revolution was responsible for many things, such as kickstarting capitalism, but automation – the control of machines – was arguably the most important. Anxiety about automation is now growing as computing power inexorably advances and AI-enabled machines permeate almost every corner of life. The world’s military complexes, with their almost unparalleled resources, are at the forefront of developing weapons capable of intelligent behaviour. And there is growing unease about what a fully autonomous robot army might do when – not if – human oversight is removed altogether from some of the world’s deadliest weaponry.

Professor Stuart Russell, a computer scientist at the University of California, Berkeley (UC Berkeley), is among a growing number of scientists expressing profound unease at the threat autonomous weaponry presents.

He told Express.co.uk: “This is not science fiction. The technology is entirely feasible.

“Some of the major powers are plunging into an arms race with little understanding of where it will lead and are opposing the diplomatic will of the vast majority of the UN member states who want to begin negotiations on a treaty to ban such weapons.

“Right now there are about 75 million Kalashnikovs outside the hands of ‘responsible’ governments.

AI armageddon: An arms race in autonomous military machines is accelerating (Image: Getty)

AI armageddon: The T-14 Armata super tank is cutting-edge (Image: Wikimedia)

“Imagine if all of those guns could get up by themselves and start killing people. That’s the future we are heading towards.”

Professor Russell, who has more than 30 years’ experience in the field, is among those convinced that future wars will be fought solely with autonomous weapons, as distinct from remote-controlled weapons such as today’s General Atomics MQ-9 Reaper.

He said: “The major powers are already developing these autonomous and semi-autonomous platforms, such as tanks, planes, destroyers and submarines.

“Given the 20-year timeline for the F-35 and the fact that AI systems are not yet fully capable of making all battlefield operational decisions, I think it could be 20 years, but that would be compressed significantly if there were a real arms race and a Manhattan-style project.”

Some observers might call it progress to delegate the fighting, and the dying, to machines, meaning no humans would have to die.

However, the UC Berkeley scientist argues the world is not that simple.

Professor Russell said: “Of course, if that were true, we could settle wars by playing tiddlywinks, with the losing country agreeing to be pillaged and enslaved in perpetuity.

“War does not work that way. Once one side’s robots won, they would go on to exact a human cost against the adversary until they surrendered.”

AI armageddon: Israel’s IAI Harpy is almost entirely autonomous (Image: Israeli handout)

AI armageddon: Automation – the control of machines – may have seismic ramifications (Image: Getty)

There is also the all-too-real danger that a fully automated defence system might start a war by itself, misinterpreting some innocuous behaviour as an attack.

Militaries have a multitude of reasons to develop killer robots, including speed of operation and accuracy; in simulations, machines usually beat humans.

But Professor Russell warns: “The main reason I’ve heard is ‘In case our enemies do’.”

Some experts believe the technological precursors to these “killer robots” have already arrived, pointing to Israel’s IAI Harop/Harpy anti-radiation drone, which is in operation today.

Professor Russell explained: “Indeed, there is no evidence that the human controller is actually controlling it at all. The machine is making the kill decisions.”

One of the greatest frustrations for military experts watching the accelerating pace of AI military technology is the apparent lack of understanding surrounding these programmes.

Professor Russell said: “Keep in mind that the military may not want to go there: the US and UK have policies in place against full autonomy.”

General Paul Selva, Vice-Chairman of the Joint Chiefs of Staff, told the US Congress in July 2017: “Keep the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.

AI armageddon: Could the world sleepwalk into a dystopian future? (Image: Getty)

AI armageddon: The precursors to “killer robots” may already have arrived (Image: Getty)

“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life.”

Professor Russell also noted that although the US and UK have internal policies against fully autonomous weapons, both countries oppose a treaty that would ban them.

He added: “They insist that other countries have the right to develop autonomous weapons and potentially use them against us. Go figure.”

source: express.co.uk