Is AI dangerous? Why our fears of sentient ‘Westworld’ robots are overblown

Dec. 6, 2018 / 9:36 AM GMT

By Noah Berlatsky

Robots are always taking over, at least in pop culture. In the 1984 film “The Terminator,” the artificial intelligence (AI) weapons system Skynet attains sentience and launches a nuclear apocalypse designed to wipe out humanity. In HBO’s television series “Westworld,” robots attain sentience and start murdering people. Tesla founder Elon Musk has been saying for years that we need to take the threat of robot apocalypse seriously. “If one company or small group of people manages to develop god-like super-intelligence, they could take over the world,” Musk said in the 2018 documentary “Do You Trust This Computer?” “We have five years. I think digital super-intelligence will happen in my lifetime, 100 percent,” he warned.

Malevolent robots are fun monsters, like vampires or aliens. But, like vampires and aliens, they’re not real, according to “The AI Delusion,” a new book by Pomona College economics professor Gary Smith. Computers, Smith argues, aren’t smart enough to threaten us — and won’t be for the foreseeable future. But if we think computers are smart, we may end up harming ourselves not in the far future, but right now.

Computers seem more intelligent than us because they can perform certain tasks much better than we can. “People see computers do amazing things, like make complicated mathematical calculations and provide directions to the nearest Starbucks, and they think computers are really smart,” Smith told me in a phone interview. Computers can memorize huge amounts of information — a computer has effectively solved the game of checkers, calculating every possible move, so that it is unbeatable. If computers can beat humans at games of skill and intelligence, the reasoning goes, then computers must be more intelligent than humans. And if they are more intelligent than us, it follows that they pose a danger to us. Right?

This reasoning is not right, according to Smith. Computers can calculate and memorize, but that doesn’t mean they’re smarter than humans. In fact, computers are, in most respects, no smarter than a chair. They don’t have wisdom or common sense. “They have no understanding of the real world,” Smith says.

To explain computers’ limitations, Smith points to the Winograd schema challenge, named after Stanford computer science professor Terry Winograd, who devised the original example. Winograd schemas are sentences like “I can’t cut that tree down with that axe; it is too thick.” A human reading that sentence knows instantly that the “it” refers to the tree, not to the axe, because it makes no sense to say that an axe is too thick to cut down a tree.

Computers have great difficulty with Winograd schemas. “A computer doesn’t know in any meaningful sense what a tree is or what an axe is,” Smith says. Similarly, computers aren’t going to decide to rise up against humans because computers don’t know what humans are, or what rising up is, or what their own survival is. Nor is there much chance that programmers will get them to understand any of these concepts in the near future. It’s like imagining that your television is going to leap off its perch and attack you. It’s a good science-fiction story, but not something to spend your days worrying about.
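
A toy sketch in Python (my illustration, not Smith’s or Winograd’s) shows why surface statistics can’t settle the question: swap one adjective and the pronoun’s referent flips, so a baseline that just picks the noun nearest the pronoun is guaranteed to get one of the pair wrong.

```python
# A toy illustration of why Winograd schemas are hard for machines:
# changing one adjective flips the pronoun's referent, so no
# surface-level statistic can answer both sentences correctly.

SCHEMA_PAIR = [
    {"sentence": "I can't cut that tree down with that axe; it is too thick.",
     "candidates": ["tree", "axe"], "answer": "tree"},  # thick trees resist cutting
    {"sentence": "I can't cut that tree down with that axe; it is too dull.",
     "candidates": ["tree", "axe"], "answer": "axe"},   # dull axes can't cut
]

def resolve_by_proximity(example):
    """Naive baseline: pick the candidate noun closest to the pronoun 'it'.
    It returns 'axe' for both sentences, so it is guaranteed to be wrong
    on one of them -- getting both right requires knowing what axes and
    trees actually are, which is Smith's point."""
    words = example["sentence"].replace(";", "").replace(".", "").split()
    pronoun_pos = words.index("it")
    return min(example["candidates"],
               key=lambda c: abs(words.index(c) - pronoun_pos))

for ex in SCHEMA_PAIR:
    print(f"guessed {resolve_by_proximity(ex)!r}, correct {ex['answer']!r}")
```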

So rogue sentient televisions aren’t going to kill you. But increased stress levels from worrying about rogue sentient televisions could have a negative effect on your health. Similarly, smart computers aren’t dangerous, but imagining that computers are smart can cause problems.

For example, computers can analyze huge amounts of data very quickly. They are good at finding unexpected correlations between different data sets. Once these correlations have been uncovered, or data-mined, researchers can go back and try to figure out what caused the correlation.

The problem here is that random correlations in data sets are quite common, especially when you are looking at huge amounts of data. If a researcher administers a treatment to a large number of patients with a range of conditions, data-mining software will likely find statistically significant results somewhere, because patterns occur even in random data. But just because a computer finds a correlation doesn’t mean the researcher has actually discovered a cure. Reliance on data-mining is one reason that, by some estimates, up to 90 percent of medical studies are flawed or incorrect.
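
A rough simulation makes the point concrete (the numbers are illustrative, not from any real study): generate pure noise for a group of hypothetical patients, test every pair of measures, and “significant” findings appear right on schedule.

```python
# A minimal sketch of the statistical trap Smith describes: mine enough
# pure noise and "significant" correlations appear by chance alone.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_measures = 100, 50

# 50 random "outcome measures" for 100 patients -- no real effects anywhere.
data = rng.normal(size=(n_patients, n_measures))

tests = significant = 0
for i in range(n_measures):
    for j in range(i + 1, n_measures):
        _, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        significant += p < 0.05

# At the 5% level, roughly 5% of the 1,225 tests (about 61) will look
# "significant" even though every column is random noise.
print(f"{significant} of {tests} correlations significant at p < 0.05")
```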

There are similar problems with using computer programs to pick stocks — or to run presidential campaigns. Hillary Clinton’s 2016 campaign relied heavily on an algorithm named Ada to help allocate resources and identify battleground states. The algorithm correctly identified Pennsylvania as a swing state but missed the dangers to the campaign in Michigan and Wisconsin. And of course, Ada couldn’t forecast FBI Director James Comey’s last-minute announcement about Clinton in the final week of the campaign. The Clinton campaign trusted Ada to give it an edge, but the algorithm was only as good as the data put into it. Trusting it to set strategy may well have hurt the campaign.

Again, just because computers aren’t taking over doesn’t mean they can’t be dangerous. In his book, Smith notes that the insurance company Admiral planned to base car insurance quotes on AI analysis of applicants’ Facebook data. The company boasted that “our analysis is not based on any one specific model,” but would simply trawl through data to find correlations between words on Facebook and driving records. In other words, the program would penalize people based on random passing correlations. Liking Michael Jordan or Leonard Cohen, the company said, could affect your car insurance premiums.
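
A hypothetical sketch of that kind of pipeline (the setup and numbers are mine, not Admiral’s) shows how easily it manufactures a “risky” trait out of noise:

```python
# A hypothetical sketch of the pipeline Smith criticizes: generate random
# "likes" and random accident records, then let the software find the like
# that best "predicts" accidents. The winner is pure chance, but a pricing
# rule built on it would still raise real people's premiums.

import numpy as np

rng = np.random.default_rng(1)
n_drivers, n_likes = 5_000, 200

likes = rng.integers(0, 2, size=(n_drivers, n_likes))  # did driver d like page k?
accidents = rng.integers(0, 2, size=n_drivers)         # random accident flags

# Correlate every like with the accident record and keep the strongest.
corrs = np.array([np.corrcoef(likes[:, k], accidents)[0, 1]
                  for k in range(n_likes)])
best = int(np.abs(corrs).argmax())
print(f"like #{best} 'predicts' accidents, r = {corrs[best]:+.3f}")

# The "risky" value of that like gets a 10% surcharge -- a penalty based
# on nothing but a passing correlation in noise.
risky_value = 1 if corrs[best] > 0 else 0
surcharge = np.where(likes[:, best] == risky_value, 1.10, 1.00)
print(f"{(surcharge > 1).sum()} drivers surcharged over page #{best}")
```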

Facebook nixed the plan because it violated the platform’s terms of service. But it’s a good example of how trusting computer intelligence can lead to poorly informed decisions that harm people for no reason. “There’s a tendency for people to say, well if a computer says that it must be right,” Smith told me. But what computers say isn’t right. It isn’t even wrong. It’s just data. Only humans can create a theoretical framework in which that data has meaning. If you ask bad questions, or worse, no questions, the answers you get will be gibberish.

It’s possible that someday computers will be able to figure out why thick trees, not thick axes, make cutting difficult. We’re not there yet, though, and there’s no way we’re going to have supercomputers ruling the earth in five years, or in Elon Musk’s lifetime. Computer programs, for now and the foreseeable future, are still just tools. And like any tool, they can be helpful or dangerous, depending on how you wield them. You can use a hammer to drive in a nail or to bash your thumb. Either way, though, if you ask a hammer to tell you what to do, you’re not going to get good advice.