Must Read: An FAQ Explaining Why You Should Worry About Killer Robots

We all agree that letting robots decide on their own when to kill humans is a bad idea, but that scenario is a lot closer than most of us realize. A new FAQ from Human Rights Watch summarizes just how close we are to autonomous killing machines, and why it's so important to ban them now.

As the HRW document explains:

While fully autonomous weapons technology does not exist yet, developments in that direction make it a pressing issue. The 2012 US Defense Department directive on autonomy mandates keeping humans in the loop for any decision about the use of lethal force for up to 10 years. Other US military documents, however, have indicated a long-term interest in full autonomy.

For example, a 2011 US roadmap specifically for ground systems stated, “There is an ongoing push to increase UGV [unmanned ground vehicle] autonomy, with a current goal of ‘supervised autonomy,’ but with an ultimate goal of full autonomy.” A US Air Force planning document from 2009 said, “[A]dvances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.”

And the HRW document does a great job of breaking down why this would be a problem from a humanitarian standpoint: we want decisions about killing a human being to be made by a human being, and autonomous killing machines might make soldiers safer while putting more civilians at risk. It also lays out the problems under international law and law in general. (Whom do you hold accountable when a robot kills a human?) The bottom line is that once one country has robots that can kill on their own, every other country will feel obliged to follow suit or get left behind. The whole thing is worth reading, and it's a pretty quick read. [HRW]