A Brief Introduction to Artificial Intelligence For Normal People

Lately, artificial intelligence has been very much the hot topic in Silicon Valley and the broader tech scene. To those of us involved in that scene it feels like incredible momentum is building around the topic, with all kinds of companies building A.I. into the core of their business. There has also been a rise in A.I.-related university courses, which is sending a wave of extremely bright new talent into the employment market. But this is not a simple case of confirmation bias: interest in the topic has been steadily rising since mid-2014.

The noise around the subject is only going to increase, and for the layman it is all very confusing. Depending on what you read, it’s easy to believe that we’re headed for an apocalyptic Skynet-style obliteration at the hands of cold, calculating supercomputers, or that we’re all going to live forever as purely digital entities in some kind of cloud-based artificial world. In other words, either The Terminator or The Matrix is about to become disturbingly prophetic.

Should we be worried or excited? And what does it all mean?

Will robots take over the world?

When I jumped onto the A.I. bandwagon in late 2014, I knew very little about it. Although I have been involved with web technologies for over 20 years, I hold an English Literature degree and am more engaged with the business and creative possibilities of technology than the science behind it. I was drawn to A.I. because of its positive potential, but when I read warnings from the likes of Stephen Hawking about the apocalyptic dangers lurking in our future, I naturally became as concerned as anybody else would.

So I did what I normally do when something worries me: I started learning about it so that I could understand it. More than a year’s worth of constant reading, talking, listening, watching, tinkering and studying has led me to a pretty solid understanding of what it all means, and I want to spend the next few paragraphs sharing that knowledge in the hopes of enlightening anybody else who is curious but naively afraid of this amazing new world.

Oh, and if you just want the answer to the question in the heading above: yes, they will. Sorry.

How the machines have learned to learn

The first thing I discovered was that artificial intelligence, as an industry term, has actually been around since 1956, and has been through multiple booms and busts in that time. In the 1960s the A.I. industry was bathing in a golden era of research, with Western governments, universities and big businesses throwing enormous amounts of money at the sector in the hope of building a brave new world. But in the mid-1970s, when it became apparent that A.I. was not delivering on its promise, the industry bubble burst and the funding dried up. In the 1980s, as computers became more popular, another A.I. boom emerged, with similar levels of mind-boggling investment being poured into various enterprises. But, again, the sector failed to deliver and the inevitable bust followed.

To understand why these booms failed to stick, you first need to understand what artificial intelligence actually is. The short answer (and believe me, there are very, very long answers out there) is that A.I. is an umbrella term for a number of overlapping technologies which broadly deal with the challenge of using data to make a decision about something. It incorporates a lot of different disciplines and technologies (Big Data or the Internet of Things, anyone?), but the most important one is a concept called machine learning.

Machine learning basically involves feeding computers large amounts of data and letting them analyse that data to extract patterns from which they can draw conclusions. You have probably seen this in action with face recognition technology (such as on Facebook or modern digital cameras and smartphones), where the computer can identify and frame human faces in photographs. In order to do this, the computers are referencing an enormous library of photos of people’s faces and have learned to spot the characteristics of a human face from shapes and colours averaged out over a dataset of hundreds of millions of different examples. This process is basically the same for any application of machine learning, from fraud detection (analysing purchasing patterns from credit card purchase histories) to generative art (analysing patterns in paintings and randomly generating pictures using those learned patterns).
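If you are curious what “feed it data and let it find patterns” actually looks like, here is a minimal sketch in Python. I am using the scikit-learn library and its small built-in collection of handwritten-digit images as a stand-in for the face photos described above – the library and dataset are my choices purely for illustration – but the shape of the process is the same: show the machine labelled examples, let it extract the patterns, then ask it about examples it has never seen.

```python
# A minimal sketch of machine learning: labelled examples go in, a model
# finds the patterns, and it can then label examples it has never seen.
# scikit-learn and its digits dataset are illustrative choices only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                        # ~1,800 small labelled images of digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)     # a simple pattern-learning model
model.fit(X_train, y_train)                   # "feeding the computer data"

# The model has never seen the test images, yet it can now label them.
print("accuracy on unseen images:", model.score(X_test, y_test))
```

Swap the digit images for credit card purchase histories or photos of faces and you have, in miniature, the fraud detection and face recognition examples above.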

As you might imagine, crunching through enormous datasets to extract patterns requires a LOT of computer processing power. In the 1960s, researchers simply didn’t have machines powerful enough to do it, which is why that boom failed. In the 1980s the computers were powerful enough, but it turned out that machines only learn effectively when the volume of data being fed to them is large enough, and nobody could source data in the quantities the machines needed.

Then came the internet. Not only did it solve the computing problem once and for all through the innovation of cloud computing – which essentially allows us to access as many processors as we need at the touch of a button – but the people using the internet generate a truly mind-boggling amount of new data every single day: by some estimates, we now create as much data every couple of days as humanity managed from the dawn of civilisation up to the early 2000s.

What this means for machine learning is significant: we now have more than enough data to truly start training our machines. Think of the number of photos on Facebook and you start to understand why their facial recognition technology is so accurate.

There is now no major barrier (that we currently know of) preventing A.I. from achieving its potential. We are only just starting to work out what we can do with it.

When the computers will think for themselves

There is a famous scene from the movie 2001: A Space Odyssey in which Dave, the main character, slowly disables the artificial intelligence mainframe (called “HAL”) after it has malfunctioned and decided to try to kill all the humans on the spacecraft it was meant to be running. HAL, the A.I., protests Dave’s actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general: what happens once computers start to think for themselves instead of being controlled by humans? The fear is valid: we are already working with machine learning constructs called neural networks, whose structures are loosely modelled on the neurons in the human brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected nodes that build connections between concepts in much the same way as associative human memory does. This means that computers are slowly starting to build up a library of not just patterns but also concepts, which ultimately leads to the basic foundations of understanding instead of just recognition.
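To make that slightly less abstract, here is a deliberately tiny neural network written with NumPy (again, my choice purely for illustration – real networks are vastly bigger and built with specialised tools). Its “interconnected nodes” are just a handful of numbers, and learning consists of nothing grander than repeatedly nudging the strength of each connection until the network reproduces a simple pattern – here the XOR rule, rather than anything as rich as faces or concepts.

```python
# A toy neural network: layers of interconnected nodes whose connection
# strengths (weights) are adjusted over and over until a pattern is learned.
# The pattern here is the simple XOR rule - a stand-in for the real thing.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # the four possible inputs
y = np.array([[0], [1], [1], [0]])               # the XOR answers we want

W1 = rng.normal(size=(2, 4))                     # connections: inputs -> hidden nodes
W2 = rng.normal(size=(4, 1))                     # connections: hidden nodes -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):                           # repeated exposure to the data
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    # Nudge every connection in the direction that reduces the error a little
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_output
    W1 -= 0.5 * X.T @ d_hidden

# After training, the outputs should sit close to the XOR targets [0, 1, 1, 0]
print(sigmoid(sigmoid(X @ W1) @ W2).round(3))
```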

Imagine you are looking at a photograph of somebody’s face. When you first see the photo, a lot of things happen in your brain: first, you recognise that it is a human face. Next, you might recognise whether it is male or female, young or old, black or white, and so on. Your brain will also decide almost instantly whether it recognises the face, though sometimes that recognition requires deeper thought, depending on how often you have been exposed to this particular face (the experience of recognising a person but not knowing straight away from where). All of this happens pretty much instantly, and computers are already capable of doing all of it too, at almost the same speed. For example, Facebook can not only identify faces, but can also tell you who a face belongs to, if that person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based just on a photo of their face. We have come a long way since the 1950s.
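In fact, even the first step of that pipeline – spotting and framing the faces in a photo – is now only a few lines of code. Here is a sketch using the OpenCV library and one of the pre-trained face detectors it ships with; the library and the file names are my own illustrative choices, not anything Facebook or Google have published.

```python
# A sketch of the "spot and frame the faces" step using OpenCV's bundled,
# pre-trained face detector. The photo file name is just a placeholder.
import cv2

# A model that has already learned the averaged-out shapes of a human face
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

photo = cv2.imread("somebody.jpg")                       # placeholder image
grey = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)

# One rectangle (x, y, width, height) for every face the detector believes it sees
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(photo, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("somebody_framed.jpg", photo)
print(f"found {len(faces)} face(s)")
```

Telling you who each face belongs to is a separate step that needs its own trained model – and its own enormous library of labelled example photos, which is exactly what Facebook has.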

But true artificial intelligence – usually referred to as Artificial General Intelligence (AGI), where the machine is as broadly capable as a human brain – is a long way off. Machines can recognise faces, but they still don’t really know what a face is. For example, you might look at a human face and infer a lot of things drawn from a hugely complicated mesh of different memories, lessons and feelings. You might look at a photo of a woman and guess that she is a mother, which in turn might make you assume that she is selfless, or indeed the opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo, find the woman attractive and make positive assumptions about her personality (the halo effect at work), or conversely find that she resembles a crazy ex-girlfriend, which will irrationally make him feel negatively towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – good and bad – that characterise our species. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to truly be dangerous, they would need some of these emotional compulsions, but emotion is a very rich, complex and multi-layered tapestry of concepts that is extremely difficult to train a computer on, no matter how advanced neural networks may be. We will get there one day, but there is plenty of time to make sure that when computers do achieve AGI, we will still be able to switch them off if needed.

Meanwhile, the advances currently being made are finding more and more useful applications in the human world. Driverless cars, instant translation, A.I. assistants on our phones, websites that design themselves! All of these advances are intended to make our lives better, and as such we should not be afraid of, but rather excited about, our artificially intelligent future.