A New Program Teaches Ethics To Robot Soldiers

We've all seen the commercials that say the Navy is "working every day to unman the front lines." How do we do this while avoiding an Asimovian situation where our robots go crazy? And is that even possible?

It turns out that the best way to teach machines about ethics is to copy the methods for teaching people about ethics: instruction and experience.

When we're young, we don't think much about the consequences of our actions. Part of this is because we have tiny child brains and need to be told what's right and wrong. Part of it is that we haven't lived long enough to see and understand the consequences of what we do. As we grow, we experience, see, or read about enough decisions and their outcomes that we're better equipped to evaluate whether a particular decision is moral or not.

New software would allow robotic drones to do the same thing we do. The program would give the drone certain specifications for when aggressive action could be taken. After the drone took that action, more information would be gathered. Some of the information would be taken immediately by the drone itself, but the rest would be added as the incident was investigated by researchers, witnesses and military personnel. The drone would then compare and contrast the expected consequences of its action with the actual consequences. If they didn't match, it would then adjust its own behavior. The drone would learn ethics, just the way we do.
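To make that feedback loop concrete, here is a minimal sketch of what a learn-from-consequences cycle could look like in code. Every name and number in it is hypothetical; nothing here is drawn from the actual military software, and it only illustrates the idea of comparing expected consequences with actual ones and tightening the rules of engagement when they diverge.

```python
# Hypothetical sketch of the learn-from-consequences loop described above.
# The class names, thresholds, and adjustment rule are illustrative only.

from dataclasses import dataclass


@dataclass
class Incident:
    expected_harm: float   # harm the drone predicted before acting (0.0-1.0)
    actual_harm: float     # harm established later by investigators and witnesses


class EthicsModule:
    def __init__(self, engagement_threshold: float = 0.2):
        # Aggressive action is only allowed when predicted harm to
        # non-combatants falls below this threshold.
        self.engagement_threshold = engagement_threshold

    def may_engage(self, predicted_harm: float) -> bool:
        return predicted_harm < self.engagement_threshold

    def review(self, incident: Incident, learning_rate: float = 0.5) -> None:
        # After-action review: if real-world harm exceeded the prediction,
        # become more conservative; if it was lower, relax slightly.
        error = incident.actual_harm - incident.expected_harm
        if error > 0:
            self.engagement_threshold = max(
                0.0, self.engagement_threshold - learning_rate * error
            )
        else:
            self.engagement_threshold = min(
                1.0, self.engagement_threshold - learning_rate * error * 0.1
            )


if __name__ == "__main__":
    ethics = EthicsModule()
    print(ethics.may_engage(0.15))   # True: below the initial threshold
    ethics.review(Incident(expected_harm=0.15, actual_harm=0.6))
    print(ethics.may_engage(0.15))   # False: the rules tightened after review
```

The point of the sketch is only the shape of the loop: predict, act, investigate, compare, adjust. Whether any real system could attach meaningful numbers to "harm" is exactly the question the critics below raise.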

The idea of machines behaving ethically elicits many different opinions. Some say that robots may behave more ethically than humans. After all, a robot doesn't fear for its own safety. It doesn't panic. It doesn't harbor any rage or grudges. Having machines do the fighting might mean some of the atrocities of war could be avoided.

On the other hand, they could also be exacerbated. Professor Noel Sharkey, for one, doubts that machines would make moral soldiers:

You could train it all you want, give it all the ethical rules in the world. If the input to it isn't correct, it's no good whatsoever. Humans can be held accountable; machines can't.

Any number of horrors could be excused as a technical glitch.

Others, like J Storrs Hall, believe that ethical machines are important not just for war, but for building a better world. Hall's take is that finding a way to give machines an ethical framework is half parenting responsibility and half self-defense. Without ethical machines, the world could be destroyed. With them, it could be better than humans could ever imagine.

I, personally, think that the most advanced computer isn't any more likely to develop morals, or, for that matter, intelligence, than my light switch.

Then again, sometimes I think that switch is looking at me funny.

Via The Economist.