Pandemonium Explains Why Computers Will Share Human Biases

One of the most famous ideas in the history of artificial intelligence is "pandemonium." It involves little "demons," all shouting at once, and a central consciousness listening to the loudest of them. It could help computers learn — but can it also help computers screw up?

People began dreaming of intelligent computers from the moment the computer was invented, but few had any idea of how to make it happen. Part of the problem was that no one knew exactly how the human brain worked, so any attempt to make machines intelligent was like copying a statue that one was not permitted to look at. In 1958, a particular theory gained favor with computer scientists. It has since gained favor with neuroscientists as well, but with a twist that might make computers as filled with biases and blind spots as humans.

The theory, invented by Oliver Selfridge, was called "pandemonium." It would make the computer, and the mind, a much noisier place. Selfridge proposed that perception and recognition might be carried out by a crowd of little "demons," all competing for the attention of a higher demon. Each little demon was in charge of one tiny snippet of information.


We can recognize the letter "m" even when it's written in many different fonts. According to the pandemonium theory, we recognize it because one little demon is in charge of small curves that peak in the middle, and it shouts when it sees them. Another is in charge of vertical lines and shouts when it sees those. The shouters pass their messages, simultaneously, up the chain of command until a demon in charge of recognizing the letter "m" accepts them and starts shouting itself. But the letter "n" also consists of curves and lines, so its demon joins in the chorus. Eventually, the whole contest reaches a decision demon, which picks "n" or "m," and its pick is what registers in our conscious mind. This process of recognizing tiny bits of information and assembling them into the best fit can work for anything from basic curves to human faces.
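To make the contest concrete, here is a minimal Python sketch of the idea. It is not Selfridge's original program; the feature names, the numbers, and the letter-to-feature mapping are all invented for illustration.

```python
# A minimal sketch of the pandemonium contest, not Selfridge's original
# program. The feature names, numbers, and letter-to-feature mapping
# below are invented for illustration.

# How loudly each low-level feature demon "shouts" (0.0 to 1.0) after
# looking at some hypothetical handwritten letter.
feature_shouts = {
    "vertical_line": 0.9,
    "middle_curve": 0.9,   # a small curve that peaks in the middle
    "right_curve": 0.7,
}

# Each letter demon listens for a particular combination of features.
letter_demons = {
    "m": ["vertical_line", "middle_curve", "right_curve"],
    "n": ["vertical_line", "right_curve"],
}

def decision_demon(features, demons):
    """Pick the letter whose demons shout loudest on average."""
    volumes = {
        letter: sum(features.get(f, 0.0) for f in wanted) / len(wanted)
        for letter, wanted in demons.items()
    }
    return max(volumes, key=volumes.get), volumes

letter, volumes = decision_demon(feature_shouts, letter_demons)
print(volumes)  # 'n' shouts nearly as loudly as 'm', since they share features
print(letter)   # the decision demon picks 'm'
```

Notice that "n" scores almost as high as "m" because the two letters share most of their features; the decision demon's job is precisely to settle that kind of near tie.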

The idea that many processes could run at once, and that a higher process could select the best of them rather than just the first to activate, provided an inspiring model for computer and cognitive scientists. Today, neuroscientists think there is another twist to the pandemonium theory. The decision demon, the one at the top, can shout back down to the lower demons, telling them what they are seeing. Our beliefs shape what we see, not just the other way around. Human consciousness goes both ways.
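Here is an illustrative (and entirely speculative) extension of the sketch above showing how that feedback might tilt the contest: features the top demon expects get amplified before the letter demons are scored, so an ambiguous scribble comes out as whatever was expected.

```python
# Continuing the sketch above: top-down feedback, where the decision demon's
# expectation amplifies the feature demons that fit what it expects to see.
# This illustrates the idea only; it models no real brain circuit.

feature_shouts = {"vertical_line": 0.9, "middle_curve": 0.9, "right_curve": 0.7}
letter_demons = {
    "m": ["vertical_line", "middle_curve", "right_curve"],
    "n": ["vertical_line", "right_curve"],
}

def biased_decision(features, demons, expectation, gain=1.5):
    """Re-run the contest after boosting the features the expected letter wants."""
    boosted = dict(features)
    for f in demons[expectation]:
        # Top-down feedback: expected features are heard more loudly.
        boosted[f] = min(1.0, boosted.get(f, 0.0) * gain)
    volumes = {
        letter: sum(boosted.get(f, 0.0) for f in wanted) / len(wanted)
        for letter, wanted in demons.items()
    }
    return max(volumes, key=volumes.get)

# The same strokes that read as "m" above now read as "n" when an "n" is
# expected (say, from the surrounding word).
print(biased_decision(feature_shouts, letter_demons, expectation="n"))  # 'n'
```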

Mechanical consciousness might go both ways as well. We know that human ideas can influence human perception. People see what they believe. They talk themselves into ideas they were only partly sure of in the first place. They seek out evidence that confirms their idea and neglect tests that would prove it wrong. It would be interesting to see whether these tendencies are as pervasive in machine consciousness as they are in human consciousness. Would a mechanical intelligence be able to function without these heuristics and biases? Would it function better than ours without them? Perhaps most interestingly, if we gave machines some way to develop biases, but didn't pre-program them, what biases would they develop on their own?

Would the coolest thing about developing artificial intelligence be performing psychological experiments on it?

[Via The Mechanisation of Thought Processes, Pandemonium, Consciousness and the Brain.]