Should we extend legal rights to social robots?

If you've ever played with an AIBO or a Pleo or owned a zippy little Roomba vacuum cleaner, you know how easy it is to treat robots like they're living, feeling things. But does that mean that we should consider granting legal protections to the robots designed to attract our empathy?

This past spring, Kate Darling, an Intellectual Property Research Specialist at the MIT Media Lab who co-taught Larry Lessig's "Robot Rights" course in 2011, presented her paper "Extending Legal Rights to Social Robots" at the University of Miami's We Robot Conference. In it, Darling explores the reasons why we might consider granting limited legal protections to social robots designed to interact with and elicit emotional responses from human beings, much like the protections granted to animals.

While it's arguable that societies pass laws prohibiting animal abuse in order to protect the inherent dignity of thinking and feeling creatures, Darling suggests that one reason we condemn animal abuse is our own discomfort at witnessing animals in pain. Social robots may not be able to feel pain, but humans are quite capable of feeling discomfort when a robot is struck or kicked. On a more pragmatic level, though, as social robots are increasingly made to mimic living creatures, it may become difficult for people, especially young children, to distinguish between robots and animals. In that case, we may want to grant rights to social robots in order to protect living creatures:

One reason that people could want to prevent the "abuse" of robotic companions is the protection of societal values. Parents of small children with a robotic pet in their household are likely familiar with the situation in which they energetically intervene to prevent their toddler from kicking or otherwise physically abusing the robot. Their reasons for doing so are partly to protect the (usually expensive) object from breaking, but will also be to discourage the child from engaging in types of conduct that could be harmful in other contexts. Given the lifelike behavior of the robot, a child could easily equate kicking it with kicking a living thing, such as a cat or another child. As it becomes increasingly difficult for children to fully grasp the difference between live pets and lifelike robots, we may want to teach them to act equally considerately towards both. While this is easily done when a parent has control over both the robot and the child, protecting social robots more generally would set the leading examples in society and prevent children from witnessing undesirable behavior elsewhere. For instance, one could imagine a child being emotionally traumatized by watching older children "torture" a robotic toy on the playground, the likes of which he or she has developed an emotional relationship with at home.

I suspect that robot rights, even limited rights on the scale granted to animals, would be difficult to introduce right now. But Darling's suggestion raises the intriguing possibility that robots could be granted rights gradually rather than abruptly. Instead of an AI waking up to sentience one day and forcing us to suddenly consider granting human rights to something that was formerly mere property, we might slowly extend various rights and dignities to robots and other AIs, even before they could feel pain or frustration.

You can download and read Darling's entire paper at SSRN.

AIBO photo by Jean-Baptiste LABRUNE.

Extending Legal Rights to Social Robots [SSRN via Nerdcore]