How artificial intelligences will see us

Right now, there is a neural network of 1,000 computers at Google's X lab that has taught itself to recognize humans and cats on the internet. But the network has learned to recognize some weirder things, too. What can this machine's unprecedented capability teach us about what future artificial intelligences might actually be like?

Last week, a group of scientists announced this breakthrough at a conference in Scotland. After three days of mulling over 10 million stills yanked from YouTube, the network showed off what it had learned by producing some composite images, two of which were unmistakably a human and a cat. This is the first time computers have taught themselves to recognize the content of images. The machine did this using the kind of massively parallel computing methods made famous by Google's enormous data farms, combined with a couple of simple learning algorithms. The researchers speculate that their neural network was able to teach itself to recognize humans and cats partly because it had access to an enormous amount of data, and partly because of the whopping 16,000 processors they built into the network.
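To get a feel for the kind of "simple learning algorithm" involved, here is a toy sketch of unsupervised feature learning with an autoencoder, a network trained to compress its input and reconstruct it, so that useful features emerge without anyone labeling the data. This is my illustration in plain NumPy, not Google's code: the real system was vastly larger and deeper, and every number below (patch size, layer width, learning rate) is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for YouTube stills: 500 random 8x8 grayscale "patches".
X = rng.random((500, 64))

n_hidden = 16  # a tiny layer of would-be feature detectors
W1 = rng.normal(0, 0.1, (64, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64))
b2 = np.zeros(64)
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Forward pass: squeeze each 64-pixel patch down to 16 numbers,
    # then try to reconstruct the original patch from them.
    H = sigmoid(X @ W1 + b1)
    X_hat = H @ W2 + b2
    err = X_hat - X  # reconstruction error

    # Backward pass: ordinary gradient descent on mean squared error.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

loss = float((err ** 2).mean())
print(f"final reconstruction loss: {loss:.4f}")
```

After training, each column of W1 is a learned "feature": the input pattern that most excites one hidden unit. In the Google experiment, the analogue of one such unit, fed real video frames instead of random noise, ended up responding to faces, and another to cats.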


Researchers Jeff Dean and Andrew Ng cautioned that this network is quite unlike a human brain, despite being called a "neural network." Ng told the New York Times' John Markoff, "A loose and frankly awful analogy is that our numerical parameters correspond to synapses." The human visual cortex is millions of times larger, at least measured in neurons and synapses.

So this network isn't like a human brain, though they share some characteristics. It's a new kind of (semi) intelligent entity. Let's call it XNet. Most of the news stories covering XNet have focused on how it learned to recognize humans and kitties after seeing them thousands of times, which is just the kind of thing a little kid would do. Very cuddly and relatable.


But XNet recognized some other, stranger things, too. Over at Slate, Will Oremus reports:

Dean notes that the computers "learned" a slew of concepts that have little meaning to humans. For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.

This is, to me, the most interesting part of the research. What are the patterns in human existence that jump out to non-human intelligences? Certainly 10 million stills from YouTube do not capture the whole of human existence, but they are a pretty good start. They reveal a lot of things about us we might not have realized, like a propensity to orient tools at 30 degrees. Why does this matter, you ask? It doesn't matter to you, because you're human. But it matters to XNet.


What else will matter to XNet? Will it really discern a meaningful difference between cats and humans? What about the difference between a tool and a human body? This kind of question is a major concern for University of Oxford philosopher Nick Bostrom, who has written about the need to program AIs so that they don't display a "lethal indifference" to humanity. In other words, he's not as worried about a Skynet scenario where the AIs want to crush humans — he's worried that AIs won't recognize humans as being any more interesting than, say, a spatula. This becomes a problem if, as MIT roboticist Cynthia Breazeal has speculated, human-equivalent machine minds won't emerge until we put them into robot bodies. What if XNet exists in a thousand robots, and they all decide for some weird reason that humans should stand completely still at 30-degree angles? That's some lethal indifference right there.

I'm not terribly concerned about future AIs turning humans into spatulas. But I am fascinated by the idea that XNet and its next iterations will start noticing patterns we never would. Already, XNet is showing signs of being a truly alien intelligence. If it's true that we cobble together our identities out of what we recognize in the world around us, what exactly would a future XNet come to think of as "itself"? Would it imagine itself as a cat, or as something oddly abstract, like an angle? We just don't know.

What seems certain is that if XNet becomes the template for a future AI, it will be learning about itself and the world from an ocean of data debris we created. I want to know what unknown or unnamable patterns it will find in us, and what we'll learn from its observations.

For more information, read the full scientific paper about Google's breakthrough experiments with neural networks.
