Computers Already Learn From Us. But Can They Teach Themselves?

This article is part of our latest Artificial Intelligence special report, which focuses on how the technology continues to evolve and affect our lives.

Artificial intelligence seems to be everywhere, but what we are really witnessing is a supervised-learning revolution: We teach computers to see patterns, much as we teach children to read. The future of A.I., researchers say, depends on computer systems that learn on their own, without supervision.

When a mother points to a dog and tells her baby, “Look at the doggy,” the child learns what to call the furry four-legged friends. That is supervised learning. But when that baby stands and stumbles, again and again, until she can walk, that is something else.
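In machine-learning terms, the "doggy" moment looks something like the following minimal sketch, written in Python with the scikit-learn library. The animals, measurements and labels here are invented for illustration; the point is that a person supplies the answer for every training example.

```python
# A toy supervised-learning example; the data is made up.
from sklearn.linear_model import LogisticRegression

# Hypothetical features for each animal: [weight in kg, number of legs].
X = [[8.0, 4], [30.0, 4], [4.0, 4], [0.3, 2], [1.1, 2], [0.5, 2]]
# Human-provided labels -- the "Look at the doggy" step.
y = ["dog", "dog", "dog", "bird", "bird", "bird"]

model = LogisticRegression()
model.fit(X, y)  # the model learns the pattern from the labeled examples

# A new, unlabeled animal: 12 kg, four legs.
print(model.predict([[12.0, 4]]))  # -> ['dog']
```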

Computers are no different. Just as humans learn mostly through observation or trial and error, computers will have to go beyond supervised learning to reach the holy grail of human-level intelligence.

Methods that do not rely on such precise, human-provided supervision remain much less explored, eclipsed by the success of supervised learning and its many practical applications, from self-driving cars to language translation. But supervised learning still cannot do many things that are simple even for toddlers.

One alternative is reinforcement learning: set a goal, and a reinforcement learning system will work toward that goal through trial and error until it is consistently receiving a reward. Humans do this all the time. “Reinforcement is an obvious idea if you study psychology,” said Richard Sutton, a pioneer of reinforcement learning at the University of Alberta.
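That trial-and-error loop can be written down in a few lines. Here is a minimal, hypothetical sketch in Python, not Dr. Sutton's actual systems: a tabular Q-learning agent in a five-cell corridor that is rewarded only for reaching the rightmost cell, and gradually learns to walk right.

```python
# Toy Q-learning: learn by trial and error to reach a rewarded goal state.
import random

N_STATES = 5         # cells 0..4; cell 4 yields the reward
ACTIONS = [-1, +1]   # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q[state][action] estimates the long-run reward of taking that action.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes (or when no action looks better); else exploit.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward plus the best value ahead.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# The learned policy: move right in every cell.
print(["left" if q[0] > q[1] else "right" for q in Q[:-1]])
```

Over many episodes, the reward propagates backward from the goal until the agent is consistently collecting it, which is exactly the loop described above.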

A more inclusive term for the future of A.I., Dr. Sutton said, is “predictive learning,” meaning systems that not only recognize patterns but also predict outcomes and choose a course of action. “Everybody agrees we need predictive learning, but we disagree about how to get there,” he said. “Some people think we get there with extensions of supervised learning ideas; others think we get there with extensions of reinforcement learning ideas.”

As powerful as reinforcement learning is, Dr. Yann LeCun, a professor at New York University and the chief A.I. scientist at Facebook, believes that other forms of machine learning are more critical to general intelligence.

“My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.

“Imagine that you give the machine a piece of input, a video clip, for example, and ask it to predict what happens next,” Dr. LeCun said in his office at New York University, decorated with stills from the movie “2001: A Space Odyssey.” “For the machine to train itself to do this, it has to develop some representation of the data. It has to understand that there are objects that are animate and others that are inanimate. The inanimate objects have predictable trajectories, the other ones don’t.”

After a self-supervised computer system “watches” millions of YouTube videos, he said, it will distill some representation of the world from them. Then, when the system is asked to perform a particular task, it can draw on that representation — in other words, it can teach itself.
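Stripped to its bones, the idea is that the data supplies its own labels. The following Python sketch is hypothetical and far simpler than anything Dr. LeCun builds: it trains a next-character predictor on raw text, where the “label” for each character is simply whatever character follows it.

```python
# Toy self-supervised learning: predict what comes next in unlabeled data.
from collections import Counter, defaultdict

corpus = "the dog ran. the dog sat. the cat sat. the cat ran."  # made-up raw text

# Count which character tends to follow each character; no human labels.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(char):
    """Return the most frequent follower seen during training."""
    return following[char].most_common(1)[0][0]

print(predict_next("t"))  # -> 'h', distilled from the raw text itself
```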

Dr. David Cox at the MIT-IBM Watson AI Lab is working along similar lines, combining more traditional forms of artificial intelligence with deep networks in what his lab calls neuro-symbolic A.I. The goal, he said, is to build A.I. systems that can acquire a baseline level of common-sense knowledge similar to that of humans.

“A huge fraction of what we do in our day-to-day jobs is constantly refining our mental models of the world and then using those mental models to solve problems,” he said. “That encapsulates an awful lot of what we’d like A.I. to do.”
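What “neuro-symbolic” means in practice can only be gestured at here. The following Python sketch is invented for illustration and is not Dr. Cox's system: a stubbed-out “neural” module emits symbolic facts about a scene, and handwritten rules reason over them, echoing Dr. LeCun's animate-versus-inanimate example.

```python
# A crude neuro-symbolic sketch: learned perception feeding symbolic rules.

def neural_perception(image_path):
    """Stand-in for a deep network; a real one would be trained on images."""
    # Fixed, made-up detections keep the example self-contained.
    return {"dog": "animate", "ball": "inanimate", "table": "inanimate"}

def has_predictable_trajectory(facts, obj):
    """Symbolic rule: inanimate objects follow predictable trajectories."""
    return facts.get(obj) == "inanimate"

facts = neural_perception("scene.jpg")            # perception: learned
print(has_predictable_trajectory(facts, "ball"))  # reasoning: rules -> True
print(has_predictable_trajectory(facts, "dog"))   # -> False
```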

source: nytimes.com