Should Robots Be Blamed for Battlefield Mistakes?

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots lack free will and therefore cannot be held morally accountable for their actions. But psychologists at the University of Washington are finding that people don’t hold such a clear-cut view of humanoid robots. The researchers’ latest results show that people attribute a moderate degree of moral accountability, along with other human characteristics, to robots that have social capabilities and the ability to harm humans. In this case the harm was financial rather than life-threatening, but it still demonstrated how people react to robot errors. The findings suggest that as robots become more sophisticated and humanlike, the public may hold them morally accountable for the harm they cause.