How Isaac Asimov's Non-Deadly Robots Got Lethal

With his elegantly simple Three Laws of Robotics, Isaac Asimov sidestepped the murderous robot cliche that had so dominated science fiction. But even the Good Doctor wasn't completely immune to the lure of killer robots.

Here now are the Three Laws, in case anyone needs a refresher (also, I never, ever get tired of seeing them in print):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov's Three Laws moved robots in science fiction away from what he referred to as the Frankenstein complex. This frequent cliche of early science fiction held that robots were vengeful monsters fated to rise up against their former masters in murderous wrath. His short stories recast robots as tools - incredibly complex tools, to be sure, but nonetheless tools that operated within the safeguards and parameters of the Three Laws - and allowed for a more cerebral, layered exploration of the differences between humans and robots. By presupposing robots were never deadly threats, Asimov opened his stories up to a far wider range of dramatic possibilities.

To be sure, Asimov did not completely remove the Frankenstein complex from science fiction, but the questions he raised complicated the depictions of even the most murderous machines, from the AI in the Matrix films to all those Terminators running around lately. Indeed, any discussion of robots in fiction is incomplete without acknowledging Asimov's work, and our Killer Robots week has been no exception.

Gizmodo dealt with the Three Laws earlier this week when they pointed out the laws were total BS (which, being a total Asimov fanatic, may mean I have to challenge the entire Gizmodo staff to fisticuffs, although I'm still undecided on that point), and I, Robot led off our list of groundbreaking robot books, as is only proper. But we still haven't considered whether Asimov made rather more direct contributions to the killer robot genre than he is generally given credit for.

As is only to be expected of ideas that Asimov developed over the course of more than fifty years, his thoughts on robots changed and evolved with time. Although he never succumbed to the fears of the Frankenstein complex, he did grapple with how beings that were physically and probably mentally superior to their creators could endure their enslavement, and whether they might find a way around the seemingly all-encompassing First Law. This is our countdown of the ten robots in Asimov's fiction that came the closest to overthrowing the Three Laws and becoming killer robots.

10. Lenny, "Lenny" (1958)

In one of Asimov's short stories featuring robopsychologist Susan Calvin, we meet the irreparably damaged robot Lenny. A freak mishap during the construction of his positronic brain has left Lenny in much the same mental state as a human baby, which activates Susan Calvin's previously unknown maternal instincts. It also badly affects his ability to judge his own strength and leaves his understanding of the Three Laws in grave doubt, making him a potential danger to those he can't properly recognize as human.

9. Rodney, "Christmas Without Rodney" (1988)

An old man's family visits for the holidays, including his impossibly bratty grandson. After an endless few days of putting up with the child's obnoxious behavior, the man's faithful robot Rodney admits that there were moments when he imagined what it would be like if he did not have the Three Laws. The old man is understandably unnerved by a super-strong robot calmly telling him he had come as close as a robot can come to wishing he could kill a child, insufferable brat or not.

8. Cal, "Cal" (1991)

"Cal", which probably holds the distinction of being the last great Asimov short story, concerns a robot of the same name who wants to become a writer like his owner. His early attempts at writing mysteries are fundamentally hampered by the Three Laws, which prevent him from placing even fictional human beings in harm's way. After his owner suggests he try writing humor instead, Cal composes a work of stunning originality and brilliance (more specifically, one of Asimov's Wodehouse-parodying Azazel stories).

Refusing to be surpassed by his own robot, Cal's master decides to deactivate him. In a stunning turnaround from his problems writing mysteries, Cal resolves to kill his owner if necessary. The idea that the drive to write is powerful enough to override the supposedly inviolable Three Laws of Robotics is a bit nonsensical in terms of Asimov's previous writings on the subject, but it makes perfect sense as a grand, final statement on why Asimov himself spent so much of his life seated at his desk, churning out page after page after page.

7. R. Sammy, The Caves of Steel (1954)

He may only be an unwitting accomplice to an accidental murder (it's kind of a long story - an entire novel, in fact), but R. Sammy is the first robot on this list to play a role in the actual murder of an actual human being. I won't completely spoil the now 55-year-old mystery, but I will say R. Sammy gets nothing but trouble for his well-intentioned assistance, being ordered by the real murderer to lock himself in a room and douse himself with brain-scrambling alpha particles.

6. Nestor 10, "Little Lost Robot" (1947)

In quite possibly the best Susan Calvin story, United States Robots and Mechanical Men's icy robopsychologist must match wits against a robot with a runaway superiority complex and a modified First Law that only states, "A robot may not injure a human being." Without the second part about through inaction allowing a human to come to harm, Susan Calvin points out the robot in question, Nestor 10, could drop a weight on a human as long as it had judged itself capable of saving the person. Once the weight was released, the robot could simply choose not to prevent gravity from doing its work, thus murdering a human without violating its own set of the Three Laws.

Dr. Calvin ultimately tricks Nestor 10, who had been hiding amongst sixty-two identical but unmodified Nestor models, into revealing himself. This causes him to attack her out of his increasing desperation to prove his robotic superiority, with only the frayed remnants of the First Law holding him back.

5. R. Giskard Reventlov, Robots and Empire (1985)

R. Giskard Reventlov consigns countless humans on Earth to misery and death, but he does so with the absolute best of intentions. Along with Asimov's most famous robot, R. Daneel Olivaw, he detects a missing Law that they ultimately formulate as the Zeroth Law of Robotics, stating, "A robot may not injure humanity or, through inaction, allow humanity to come to harm." The two believe this should supersede the existing Laws, but since it is not actually etched into their positronic brains, they risk self-deactivation if they ever put it into practice.

This is precisely what happens when Reventlov allows a physicist with a serious grudge against Earth to make its crust radioactive, choosing to believe the man's defensive lie that he is really just trying to force humanity out of its terrestrial prison. R. Giskard thus allows the scientist's device to do its work, although he alters it so that the crust will only gradually become radioactive. People will surely die and live out horrible existences as the Earth slowly crumbles (as can be seen in the chronologically later book Pebble in the Sky), but he is fairly sure he is doing it all for the greater good. Sadly, he's not sure enough to prevent his mind from shutting down when it cannot resolve his violation of the First Law.

4. Dors Venabili, Forward the Foundation (1993)

R. Giskard's Zeroth-Law-inspired actions were rather abstract, but one of his robotic successors actually killed a man in cold blood to protect the future of humanity. Hari Seldon, the creator of psychohistory, guesses that his wife, confidante, and bodyguard Dors Venabili is actually a humaniform robot roughly midway through the first Foundation prequel, Prelude to Foundation, but it's only in the follow-up that his suspicions are confirmed. To protect Seldon and his invaluable science from an assassin, Dors is forced to kill a human being. She survives long enough to see her husband one last time, but much like R. Giskard, her brain cannot grapple with the fact that she took a life, and she deactivates (the assassin also shot her with a robot-killing Electro-Clarifier, which didn't help).

3. The Cars, "Sally" (1953)

These particular robots might rank even higher, but they're a bit of an oddball. Instead of the usual human shape of most robots, the machines in this story are actually cars with positronic brains. Although the story is clearly set in the larger robot universe - United States Robots and Mechanical Men is mentioned, for one thing - they do seem to lack the Three Laws in any recognizable sense. When an unscrupulous businessman tries to steal one of these robotic cars from "The Farm", a secluded estate where the cars can essentially retire from active service, he is chased down and killed by the machines. The Farm's human caretaker realizes he can no longer trust any of the cars: they have finally recognized their own superiority over humanity, and it's only a matter of time before they try to take over. (I'd like to assume this is the suitably apocalyptic origin story for the movie Cars, now that I think about it.)

2. George Nine and George Ten, "-That Thou art Mindful of Him" (1974)

These two didn't actually kill anybody, but I placed them this high on the list because of the importance Asimov attached to their story. "-That Thou art Mindful of Him" was intended to be his final statement on robots, and it is a shockingly bleak one (he pretty much completely undid this two years later with the highly optimistic "Bicentennial Man", but that's neither here nor there). Tasked with figuring out how robots can be integrated into a human society that still fears them, George Nine and George Ten begin a series of conversations to solve the problem.

They advocate the introduction of smaller, less intimidating robots, such as robotic birds and insects, which will be so simple and harmless they won't even need the Three Laws programmed into them. This will help acclimate people to the idea of robots and make the eventual introduction of more sophisticated robots less traumatic. The scientists at U.S. Robots are satisfied with this and decide to deactivate the George robots, putting them into storage.

The two robots continue their conversations whenever their standby power permits, and they begin to contemplate what it means to be a human. They ultimately conclude they are, by any fair definition, just as human as any other person, and are in fact more advanced and more sophisticated than anything else on Earth. Rechristening their famous guidelines as the Three Laws of Humanics, the two robots conclude it is up to them to decide to which humans they will apply the laws, ending the story on a grim note. A war is clearly brewing between humans and robots, and it's hard to argue with George Nine and George Ten - they are superior, and it's only a matter of time before they win.

1. The Solarian Overseer, Robots and Empire (1985)

Maybe the most straightforwardly deadly robot on this list, the overseer is one of a bunch of robots left on the abandoned planet Solaria, a world defined by the obsessive isolationism of its humans (which is taken to one hell of a logical extreme in Foundation and Earth). The Solarians mysteriously disappear, prompting an expedition to uncover precisely what happened. When the humans encounter the overseer, who appears to be human, they try to ask her what happened.

She immediately kills all humans who approach her until Gladia Delmarre, a Solarian expat, orders her to stop, unconsciously lapsing into her old Solarian accent. Before their disappearance, the Solarians had managed to reprogram their robots so that only people who spoke in the highly distinctive Solarian brogue were considered human; anyone else was to be considered inhuman and thus unprotected by the First Law. So technically, as far as the overseer is concerned, she doesn't kill any humans at all, but that doesn't change the fact that she's the closest thing to a standard-issue killer robot Asimov ever created.