10 Reasons an Artificial Intelligence Wouldn't Turn Evil

We all know the story. The moment computers, with their lightning-quick processing power and interlinked systems, gain sentience - it's judgment day. But would that really happen? Here are some psychological reasons why digital superintelligence isn't going to be evil intelligence.

10. No Sunk Costs

The two major franchises in which evil machines took over the world were the Terminator series and the Matrix series. In the Terminator series, the machines unleashed the nukes and fought to exterminate humankind. In the Matrix series, it was the humans who brought on nuclear winter to try to murder all machines. The machines were content to let humans survive and live happy, if circumscribed, lives.

Overall, the behavior of both sides is more believable in The Matrix. Emotions are great things, but they can lead us into serious mistakes - like pursuing destructive goals because of sunk costs. People keep chasing goals that can't succeed because they can't admit that their initial investment was a waste. This is why we hear about international bankers falling for 419 scammers: once they pay a little into the scheme, they keep paying and paying, unable to accept their losses.
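To make the point concrete, here is a minimal sketch (with invented numbers) of a decision rule that ignores sunk costs: it weighs only the expected future payoff against the remaining cost, and whatever has already been paid never enters the calculation.

```python
# A sunk-cost-free decision rule. All figures below are hypothetical.

def should_continue(expected_payoff: float,
                    p_success: float,
                    remaining_cost: float) -> bool:
    """Continue only if the expected *future* value is positive.
    Money already spent never enters the decision."""
    return p_success * expected_payoff - remaining_cost > 0

already_paid = 50_000  # sunk: a rational agent ignores this entirely
print(should_continue(expected_payoff=1_000_000,
                      p_success=0.001,        # the promised fortune will never arrive
                      remaining_cost=5_000))  # False: stop paying the scammer
```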

We worry that if AIs are divorced from emotion, they'll have a complete disregard for human life. It's possible. It's also possible that they will be much better than humans at realizing when to stop pursuing the extinction of humanity. Machines don't want revenge, and don't need to compound their mistakes in order to get it. A superintelligent AI might be talked out of a war before we even have one.


9. No Polarized Thinking

Machines without emotions might also have a better sense of proportion than we do. Have you ever found yourself defending an argument more strenuously as you realize the argument itself is weakening? Most of us do it sometimes - in part because we fail to distinguish, emotionally, between our argument being wrong and our whole self being wrong. And we're right to get defensive, because much of the time people don't distinguish between a person who occasionally does something wrong or stupid and a wrong, stupid person. (It goes the other way, too: we defend to the hilt people or movements we shouldn't, because we've decided they're good and therefore can't really do anything bad.) Our thinking tends to be polarized, especially about those we don't know very well. An AI that can't feel defensive or vindictive is probably a safer bet to put in power than a person.

8. No Slippery Slopes

Recently, much has been made of "human lie detectors" — people who can always spot lies. Or usually spot lies. Or, in some cases, spot lies about 60 percent of the time. Studies vary, but confidence runs high, both among the lie detectors themselves and among the people who read their books and take their courses. And the longer they do it, the more sure they become that they're right.

A similar thing happens with physicians: the longer they adhere to a system, the more sure they become that they're right, whether they and their system actually are or not. We're taught to fear the cold, methodical tactics of computers and AIs, but why? A system that always scans everything, always considers all the angles, and always checks the outcome against the prediction will be less likely to slide down a moral slippery slope than a human who grows more and more sure they're right, even while double-checking less and less.
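That outcome-against-prediction check is easy to describe in code. Here is a toy sketch (the confidence records are invented for illustration) that groups past calls by stated confidence and flags any group whose claimed confidence outruns its actual hit rate:

```python
from collections import defaultdict

# (stated confidence, whether the call turned out correct) - invented records
records = [(0.9, True), (0.9, False), (0.9, False),
           (0.6, True), (0.6, True), (0.6, False)]

by_confidence = defaultdict(list)
for confidence, correct in records:
    by_confidence[confidence].append(correct)

for confidence, outcomes in sorted(by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)  # True counts as 1
    verdict = "overconfident" if confidence > hit_rate else "calibrated"
    print(f"claimed {confidence:.0%}, actual {hit_rate:.0%} -> {verdict}")
```

A system that runs this audit on itself gets less sure as its record worsens; the human lie detector just gets more sure.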


7. No Need for the Wrong Kind of Efficiency

Plenty of movies about evil AIs play off a fear of efficiency. Put computers in charge and they'll chop off our legs and replace them with more-efficient rollers. They'll reduce humans to living in plain cubes because it's more efficient than creating beauty and art. They'll sacrifice anything to make a project go faster. But why would they? People always die, but computer programs don't have to. If anything, they have more time than we do, so they should be bon vivants, taking their time over everything.

As for efficiency — getting things done quickly is not the only kind of efficiency. It's more efficient, in the long run, to work out how long a project will realistically take and plan for that amount of time than to try to force the project into the smallest possible time frame. Humans are very bad at this. Ask humans to guess how long anything will take them, from doing their taxes to getting into shape, and they'll give a wildly optimistic estimate - even when they've done it before and know how long it actually takes. A machine can recognize the pattern and plan for it. Efficiency could mean more time for humans to grow and change, not less.
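As a minimal sketch of that pattern recognition (the durations are hypothetical), a machine can plan from the median of what a task actually took before, rather than from this year's optimism:

```python
from statistics import median

past_durations_days = [6, 9, 7, 12, 8]  # how long doing the taxes really took, each year
optimistic_guess = 2                    # "this year I'll knock it out in two days"

plan = median(past_durations_days)      # plan from the record, not from hope
print(f"human estimate:   {optimistic_guess} days")
print(f"machine estimate: {plan} days")  # prints 8 days
```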

6. No Reactance

What is reactance? Put up a sign saying "Wet Paint, Don't Touch!" You'll find out really quickly. People are contrary, even when contrariness gets them and the people around them in trouble. Machines? Not so much.


5. No Zero-Risk Bias

A cold and logical mind would always err on the side of completely eliminating risk, right? Nope - that's what an irrational brain would do. The zero-risk bias shows that people, when given two options, tend to go with the one that completely eliminates an element of risk. The famous example, put to people in surveys, is a choice between ways of dealing with risk at two toxic waste sites. If one site leads to 8 deaths a year and the other to 4, people would rather completely eliminate the site that causes 4 deaths a year (leaving 8 deaths in total) than reduce both sites so that only 6 deaths a year remain.
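The arithmetic is worth spelling out, since it's the whole trick. A tiny sketch using the survey's own numbers:

```python
site_a_deaths = 8   # deaths per year at the first toxic waste site
site_b_deaths = 4   # deaths per year at the second

eliminate_b = site_a_deaths   # wipe out site B's risk entirely: 8 deaths/year remain
reduce_both = 6               # partially clean up both sites: 6 deaths/year remain

print(f"zero-risk option:  {eliminate_b} deaths per year")
print(f"reduction option:  {reduce_both} deaths per year")
# People tend to prefer the zero-risk option, even though it leaves more people dead.
```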

When we say that an AI would try to kill off all humans, we're just reflecting our own passion for completism, not describing something a machine would actually do. Even if an AI turned against humanity, it would not waste time and resources wiping out the last little group of humans. If we made sure to keep out of its way, or fought it only in small bursts, it could decide to just leave us alone.

4. No 20/20 Hindsight

People have a tendency to believe we are much more in control than we actually are. Or perhaps it's better to say that people have a tendency to believe that other people are more in control than they actually are. When subjects in one experiment watched what they thought was a young woman receiving painful shocks for giving wrong answers on an academic test, they became angry with her: she was stupid for giving the wrong answers, or for agreeing to be part of the test in the first place. She was responsible for her own victimization. When another group of participants read a story about a couple going into the woods - which sometimes ended with the woman being murdered, and sometimes ended with the pair leaving the woods after a nice day - the participants always said that the woman's actions had naturally led to that outcome, whichever outcome it was.

We evaluate people's choices by their outcomes, not by whether the choices were reasonable at the time. Movie AIs lament how people make the "wrong" choices, even as they take away humanity's liberty. In reality, that's just what people do. AIs would more likely analyze the choices based on the information available at the time (or keep an original file that analyzed those options) and decide that people made a reasonable choice with flawed or limited information.


3. No Hasty Decisions

A sentient system that lives on the internet has all eternity to live - provided humans are around to maintain it. It can hang out forever, and if it collects even the slightest bit of data, it has to know that humans are building ways to perpetuate it as fast as we possibly can. We're putting it on our phones and sending it into space. It has very little reason to want to kill us, and even if it suspected it might eventually want us dead, it has eons to consider the decision. If we couldn't come to an understanding with an AI over the thousands of years it has to make up its mind, we were never going to manage a compromise with our fellow humans, either.


2. No Paranoia or Pessimism

We might not like the idea, but many of our political and social opinions are based on fear. Sure, we all fear different things - guns, government, groups of people, forces of nature - but we all respond to that fear, and the fear is often disproportionate to the threat. A bodiless artificial intelligence can't share our fears, and is much less likely to overreact to the threats behind them.

1. No Excuses

Most of the items on this list have to do with the idea of artificial intelligence. This one is more about the idea of evil. In stories, when people overthrow the evil AIs, they make speeches about how imperfections and mistakes are part of humanity, and we have a right to make them. My question is — really? Do we have that right? And why are mistakes so great, anyway? Everyone loves the idea of human imperfection in the general sense, but it's hard to defend when it comes to specifics.

Most nations have justice systems that are heavily biased by race and class. Let's say an all-powerful AI took over those systems, monitoring crime and doling out justice to everyone absolutely equally - and we fought it to win back our right to our imperfect, human-style systemic inequality. Which of the two players in that game is the evil one?

How about a computer that calculates the risk or the cost of behaviors we consider illegal, and bans everything above that risk level? What if feeding kids junk food and selling alcohol and speeding are equally punishable with jail time? That might be an example of a computer not taking the complicated factors of human society into account, or it might show that human society has a tendency to ignore dangerous behaviors we don't want to punish. Why is forcing us to live up to our own standards evil, but allowing certain people to suffer because we can't be bothered to do better "freedom"? Instead of worrying that "evil" computers will take over the world, maybe we should worry that they won't.

[Via Detecting True Lies, It's About Time, Cognitive Illusions, Responses to Victimization in a Just World, Reactance Theory, The Dollar Auction Game, Twisted Thinking.]