Why we should build AI that sometimes disobeys our commands

Would an AI-controlled weapon attack civilians?

Bloomberg/Getty

The future of human-AI interaction is set to be fraught. As we push to incorporate ethics into artificial intelligence systems, one basic idea must be recognised: we need to make machines that can say “no” to us.

Not just in the sense of failing to respond to an unrecognised command, but in the sense of being able to recognise, in context, that an otherwise valid and executable directive from a human should be refused. That won’t be easy to achieve, and it may be hard for some to swallow.

As quickly as artificial intelligence has been spreading,
