Can a Robotic Weapon Be Programmed to Have Ethics?

Combat robots and computerized missile launchers may one day be better soldiers than humans because they can be programmed with ethical behavior and will never engage in friendly fire. You can learn about all this and more from the videos just posted from the awesome Technology in Wartime conference, held two weeks ago at Stanford's Center for Internet and Society and organized by Computer Professionals for Social Responsibility. (Caveat: I'm the Vice President of CPSR, and helped organize this conference.)

In the future, human soldiers may see the battlefield through a World of Warcraft-like interface, complete with tagged enemies and multiple channels of chat. Plus, human rights workers will use covert computer technologies to get information about war zones out to the public before censorship regimes can stop their internet traffic. This is just a snippet of what got discussed at Technology in Wartime.

Prominent computer scientists, roboticists, and tech policy experts argued for an entire day about the ethics of building computerized weapons, and about how to defeat closed regimes with sneaky software. Some suggested that you could program ethics into a weapon, while others argued passionately that you should never take money from the Department of Defense to fund your work. What's great about these videos is that you can see all the participants' presentations, as well as their discussions with members of the audience. There's really nothing like watching Bruce Schneier argue with a covert operations expert from the Navy. Or watching Cindy Cohn from EFF jump up and down while yelling about AT&T. Or watching Kevin Poulsen tease Herb Lin about government secrecy. Check out the videos, linked from the CPSR website and hosted on Archive.org.

Technology in Wartime videos [CPSR]