Will We Hold Robots Accountable for War Crimes?

Now that the military is using autonomous surveillance/combat robots created by iRobot, the company behind the Roomba robot vacuum, a strange question emerges: What do we do if a robot commits a war crime? This isn't idle speculation. An automated anti-aircraft cannon's friendly fire killed nine soldiers in South Africa last year, and computer scientists speculate that as more weapons (and aircraft) come under robot control, we'll need to develop new definitions of war crimes. In fact, the possibility of robot war crimes is the subject of a panel at an upcoming conference at Stanford.

The conference, called Technology in Wartime (caveat: I'm helping to organize it), will feature a panel of expert roboticists and ethicists tackling what happens when mobile, autonomous robots become soldiers and have the potential to malfunction catastrophically. Ronald Arkin of Georgia Tech's Mobile Robot Lab will be speaking, as will Rutgers techno-ethicist Peter Asaro.

Other panels at the conference will deal with recent government research into cyberterrorism, as well as ways that human rights and civil liberties workers are using sneaky software to aid dissidents in war-torn countries. Featured speakers include computer security hero Bruce Schneier, EFF legal director Cindy Cohn, e-voting expert and former ACM president Barbara Simons, human rights software crusader Patrick Ball, the National Academy of Sciences' Herb Lin, Danger Room's Noah Shachtman, and sly computer security expert (and Sarah Connor Chronicles hater) Kevin Poulsen.

The conference is open to the public (the entrance fee covers lunch and a t-shirt, and doubles as a donation to the nonprofit Computer Professionals for Social Responsibility). Students get in cheap! There's still time to register if you want to come. Technology in Wartime [conference site]