We've Already Encountered Alien Intelligence

Photo credit: Akritasa/CC BY-SA 4.0

From Popular Mechanics

Every science fiction alien, whether it’s the cruel killers of War of the Worlds or the six-legged conversationalists of Arrival, has one important thing in common. However unknowable they may seem, however “alien,” they are the product of human imagination. They are the result of us trying to imagine something we cannot truly imagine.

What might intelligence look like if it truly developed outside of the confines of human understanding? Forget the flying saucers: Alien intelligence is already here. It’s bubbling up in places where the machines don’t need our help.


Go For It

Last year brought the triumph of AlphaGo, the Go-playing AI developed by Google’s DeepMind initiative. Trained on a vast library of games, the program processed the whole of human knowledge on the subject in ways no mere mortal could. During its domination of (human) world champion Lee Sedol, AlphaGo made a brilliant, uncanny move, one a human player had only a 1-in-10,000 chance of choosing. That was only the beginning.

While AlphaGo learned the game from centuries of human strategy and gameplay, its 2017 successor, AlphaGo Zero, did nothing of the sort. It learned the game from scratch, playing millions of games against itself and pushing Go strategy deep into the inhuman abstract. And now the tables have turned. It’s the humans copycatting the AI: Flesh-and-blood players are testing strategies they’ve seen the bots play, even if they don’t comprehend the machine’s grand strategy.

This increasingly inhuman intelligence isn’t a fluke of AlphaGo Zero, or even a characteristic that researchers are purposefully cultivating in AI. It’s inherent to the process of machine learning itself.

Humans have been hard-wired for pattern recognition by millions of years of evolution, and that wiring shapes the very nature of human solutions to problems. By contrast, artificial neural networks like Google’s Go players slam themselves against mountains of data and develop solutions through a process of trial and error so frantic it would fry your cerebral cortex. The result is a web of interconnected software nodes joined by “weighted” connections; when data moves through them, a solution emerges. Coming at the problem this way, with no preconceived notions about the “right” way to play the game, AI can arrive at alien approaches to problem solving.
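To make that concrete, here is a minimal sketch in Python of the idea, with a toy task and made-up numbers rather than anything DeepMind actually runs: a network of nine weighted connections that learns XOR purely by trial and error, nudging its weights at random and keeping only the nudges that reduce its error.

```python
import math
import random

# A toy network: two inputs -> two hidden nodes -> one output,
# nine weights in all (including biases). Purely illustrative.
def forward(w, x1, x2):
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])    # hidden node 1
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])    # hidden node 2
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])  # output node

# The task: XOR, a pattern no single weighted sum can capture.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def error(w):
    return sum((forward(w, x1, x2) - target) ** 2 for (x1, x2), target in DATA)

# Trial and error: start from random weights, propose a random nudge,
# keep it only if the network's answers got closer to the targets.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(9)]
best = error(weights)
for _ in range(200_000):
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    score = error(candidate)
    if score < best:
        weights, best = candidate, score

for (x1, x2), target in DATA:
    print((x1, x2), "->", round(forward(weights, x1, x2), 2), "target:", target)
```

A real system like AlphaGo Zero uses gradient descent, reinforcement learning, and vastly more weights, but the principle is the same: the solution lives in the weights, not in rules anyone wrote down.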

Explore the simplest of examples and you can see that these nets are understandable in principle. Theoretically you could program one by hand, the same way you could hypothetically build a whole city by yourself. Practically, only computers can build them, and they are too complex for humans to really understand.


When the Machines Coach Themselves

So far, most neural nets are trained on data that humans would understand and tasked with practical jobs that humans would otherwise need to do: image recognition, speech-to-text translation. Their alien inner workings are hidden, sandwiched between human-generated input and human-readable output.

Still, the strangeness finds ways to peek out. AlphaGo has its uncanny moves. Text-prediction networks have their tendency to default to “I love you.” Perhaps strangest of all, image recognition neural nets can fall victim to strange optical illusions that generate false positives: patterns that look like noise and static to human eyes, but appear to the machines as bananas or panda bears, for reasons we’d be hard-pressed to explain.
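Those fooling images aren’t conjured by hand; researchers such as Nguyen, Yosinski, and Clune searched directly for pixels that excite a particular output. The sketch below is one simple way to reproduce the effect, assuming PyTorch and torchvision are installed and using a stock pretrained ImageNet classifier rather than the authors’ exact setup: start from random static and nudge the pixels to maximize the network’s confidence in a single class.

```python
import torch
import torchvision.models as models

# Any pretrained ImageNet classifier will do; resnet18 is small and standard.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only optimize the image, never the network

TARGET_CLASS = 954  # "banana" in the standard ImageNet-1k class indexing

# Start from pure noise: to a human it looks like TV static.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, TARGET_CLASS]
    (-score).backward()        # push the pixels toward a higher "banana" score
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)     # keep pixel values in a valid range

confidence = torch.softmax(model(image), dim=1)[0, TARGET_CLASS].item()
print(f"'banana' confidence on what still looks like static to us: {confidence:.1%}")
```

After a few hundred steps of this, the classifier can become confident about an image that, to a person, remains meaningless noise, which is exactly the kind of false positive described above.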

Photo credit: Jeff Clune, Jason Yosinski, Anh Nguyen

As long as neural net AIs are trained on human data to solve problems humans want them to solve, we’re liable to see only glimpses of this emerging alien intellect. But as these nets start managing themselves, their connection to humanity becomes increasingly remote. Teams of neural nets have already been put to work cooperatively, notching successes like the independent invention of a form of encryption. With enough leeway and computing horsepower, networks like these can (and almost assuredly will) begin to tackle increasingly abstract problems in ways humans never could have imagined. They will conquer challenges humans could not even conceive of, much less solve.

With increasingly broad access to the complicated real world, neural nets are like protean organisms without physical forms, stumbling through eons of evolution in mere moments with brute digital force. There’s no telling how far machine learning will go in the next few decades, and in large part it depends on how far we let it. But whether it ends in Skynet dystopia or not, this may be the closest those of us alive today will ever get to seeing anything that approaches alien intelligence. Unless those flying saucers really hurry up.
