Why deep learning won't replace its human counterparts anytime soon

I’ve met some robotic humans, but I’ve yet to meet a robot that even remotely resembles a human. Artificial intelligence (AI), with its subfields of machine learning and deep learning, is supposed to fix all that. But, for a variety of reasons, AI is nowhere near delivering human-like understanding to machines. It turns out the very same traits that cause us to jump to conclusions, born of lived experience, are precisely the ones machines cannot replicate.

Even so, deep learning, based on artificial neural networks, offers real promise. It’s just that it doesn’t promise to replace humans.

What do you do?

Machine learning, in which machines are “taught” to understand data from copious quantities of training examples, can yield computers that spot patterns in data more accurately than trained doctors, for example, but what’s lacking is human judgment about that data. As Keras creator François Chollet has written: “In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.”
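
Chollet's sorting example is easy to make concrete. Here is a minimal sketch (my own illustration, not his code, and it assumes TensorFlow/Keras is installed): train a small network to sort five numbers drawn from [0, 1], and it produces plausible output on similar data, then falls apart on numbers outside that range.

```python
import numpy as np
import tensorflow as tf

# Training data: random 5-number vectors and their sorted versions
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(50000, 5))
y_train = np.sort(X_train, axis=1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=10, batch_size=256, verbose=0)

# In-distribution, the output looks roughly sorted...
print(model.predict(np.array([[0.9, 0.1, 0.5, 0.3, 0.7]]), verbose=0))

# ...but scale the same numbers up and the "sorting" breaks down,
# because the network learned a geometric mapping, not the algorithm
print(model.predict(np.array([[9.0, 1.0, 5.0, 3.0, 7.0]]), verbose=0))
```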

People, however, have a “tendency to project intentions, beliefs and knowledge on the things around us,” Chollet wrote. Such a tendency is born of experience with the world, and while it often leads us to misjudge what is happening around us, the very fact that we can judge at all is dramatically more powerful than the somewhat simplistic output of our most sophisticated machines. Chollet has said:

[D]eep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language, is grounded in our sensorimotor experience as humans—as embodied earthly creatures. Machine learning models have no access to such experiences and thus cannot “understand” their inputs in any human-relatable way.

In other words, as Facebook’s director of AI research Yann LeCun has posited, to truly grok the data they ingest and process, “machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.” Unfortunately, they can’t. Present a machine with data that varies even slightly from the data it was trained on, and it cannot cope.
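
That brittleness is easy to reproduce with even a simple model. A hedged sketch (my illustration, using scikit-learn): a classifier learns the rule "is the first number bigger than the second?" on inputs between 0 and 1, then faces the same rule on inputs it has never seen.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Learn "is the first number bigger?" from examples in [0, 1]
rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(2000, 2))
y_train = (X_train[:, 0] > X_train[:, 1]).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))   # near-perfect on familiar data

# Identical rule, but inputs outside the training range
X_shift = rng.uniform(10, 11, size=(2000, 2))
y_shift = (X_shift[:, 0] > X_shift[:, 1]).astype(int)
print(clf.score(X_shift, y_shift))   # collapses toward coin-flip accuracy
```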

What they can do, however, is uncover structure within seemingly unstructured data. At this, machines excel, surpassing their human counterparts.
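
A small illustration of that strength (again my sketch, using scikit-learn): handed a pile of unlabeled points, an unsupervised algorithm recovers the hidden grouping without ever being told it exists.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled points secretly drawn from three groups
rng = np.random.default_rng(42)
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(100, 2)),
    rng.normal(loc=(0, 3), scale=0.3, size=(100, 2)),
])

# k-means recovers the grouping with no labels and no hint beyond k=3
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
print(labels[:10])
```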

The upside to dumb machines

According to Chollet, “[T]he only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.” And yet, as he stressed, it is a big deal.
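
In its simplest form, that "X to Y" mapping looks like the sketch below (an illustration of the idea, again assuming Keras): given annotated pairs, a network fits a smooth transform from inputs to targets.

```python
import numpy as np
import tensorflow as tf

# "Human-annotated" pairs: each input x is labeled with its target y
X = np.linspace(-3, 3, 500).reshape(-1, 1)
Y = np.sin(X)

# A small network fits a continuous geometric transform from X to Y
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=200, verbose=0)

print(model.predict(np.array([[1.0]]), verbose=0))  # roughly sin(1.0) ≈ 0.84
```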

It becomes even bigger as we speed toward a future where deep learning isn’t simply hard-coded, as Chollet highlighted in a separate post:

[W]e will move away from having on one hand “hard-coded algorithmic intelligence” (handcrafted software) and on the other hand “learned geometric intelligence” (deep learning). We will have instead a blend of formal algorithmic modules that provide reasoning and abstraction capabilities, and geometric modules that provide informal intuition and pattern recognition capabilities. The whole system would be learned with little or no human involvement.
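
What might that blend look like? A toy sketch (purely illustrative; every name here is my invention, not Chollet's): a learned module supplies the fuzzy pattern recognition, and hand-written code supplies the exact reasoning on top of it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Geometric" module: a fuzzy perceptual judgment learned from examples
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
perceiver = LogisticRegression().fit(X, y)

# "Algorithmic" module: exact, hand-coded reasoning over learned outputs
def decide(batch):
    votes = perceiver.predict(batch)  # learned intuition
    return "accept" if votes.sum() > len(votes) // 2 else "reject"  # formal rule

print(decide(rng.normal(size=(9, 2))))
```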

Of course, people would still do the heavy lifting of setting up the models in the first place, because domain knowledge and an understanding of the problem the deep learning engineer wants to solve can’t be turned over to machines. Even so, as more sophisticated ways of modeling and processing data emerge, deep learning will become even more useful (and intricate).

As this happens, Chollet wrote, machines won’t displace humans. Rather, deep learning engineers “will start putting a lot more effort into crafting complex loss functions that truly reflect business goals, and understanding deeply how their models impact the digital ecosystems in which they are deployed.”
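
What would a loss function that "truly reflects business goals" look like in practice? A hedged sketch in Keras (the 4x penalty below is a hypothetical business rule of my own, not anything from Chollet): if under-forecasting demand costs four times as much as over-forecasting it, the loss function should say so.

```python
import tensorflow as tf

def asymmetric_loss(y_true, y_pred):
    err = y_true - y_pred
    # Hypothetical business rule: under-prediction (err > 0) means lost
    # sales, so penalize it four times as heavily as over-prediction
    return tf.reduce_mean(tf.where(err > 0, 4.0 * tf.square(err), tf.square(err)))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss=asymmetric_loss)
```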

It’s definitely not a world overrun by machines, but it’s one where machines actually start to do more heavy lifting. The thinking about what the heavy lifting should be, and how to approach it, however, remains a human endeavor.
