Wednesday, July 1, 2015

Machine Ethics: The Robot’s Dilemma

In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?

“We see more and more autonomous or automated systems in our daily life,” said panel participant Karl-Josef Kuhn, an engineer with Siemens in Munich, Germany. But, he asked, how can researchers equip a robot to react when it is “making the decision between two bad choices”?

The pace of development is such that these difficulties will soon affect health-care robots, military drones and other autonomous devices capable of making decisions that could help or harm humans. Researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust. “We need some serious progress to figure out what's relevant for artificial intelligence to reason successfully in ethical situations,” says Marcello Guarini, a philosopher at the University of Windsor in Canada.

Several projects are tackling this challenge, including initiatives funded by the US Office of Naval Research and the UK government's engineering-funding council. They must address tough scientific questions, such as what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine. Computer scientists, roboticists, ethicists and philosophers are all pitching in.

“If you had asked me five years ago whether we could make ethical robots, I would have said no,” says Alan Winfield, a roboticist at the Bristol Robotics Laboratory, UK. “Now I don't think it's such a crazy idea.”

Learning machines

In one frequently cited experiment, a commercially available humanoid robot called Nao was programmed to remind people to take medicine.

“On the face of it, this sounds simple,” says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. “But even in this kind of limited task, there are nontrivial ethics questions involved.” For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.

To teach Nao to navigate such quandaries, the Andersons gave it examples of cases in which bioethicists had resolved conflicts involving autonomy, harm and benefit to a patient. Learning algorithms then sorted through the cases until they found patterns that could guide the robot in new situations.
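
A minimal sketch of that idea, assuming a hypothetical encoding in which each resolved case scores the competing duties (respect for autonomy, harm avoided, benefit gained) and records the action the bioethicists endorsed. The Andersons' system is more sophisticated than this nearest-case lookup, but the principle of generalizing from settled cases to new ones is the same.

```python
# Illustrative sketch only: "learn" an ethical preference from resolved cases.
# Each case is scored on three duties (hypothetical scale -2..+2) and labelled
# with the action a bioethicist judged correct for that case.

TRAIN_CASES = [
    # (autonomy, harm_avoided, benefit, correct_action)
    ((+2, -1, +1), "accept_refusal"),   # competent refusal, little harm from skipping
    ((+1, +2, +2), "notify_overseer"),  # refusal, but skipping the dose risks serious harm
    ((+2,  0,  0), "accept_refusal"),
    ((-1, +2, +1), "notify_overseer"),
]

def nearest_case(situation, cases=TRAIN_CASES):
    """Return the action taken in the training case closest to the new situation."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cases, key=lambda case: distance(case[0], situation))[1]

# A new situation: patient refuses, modest harm and benefit at stake.
print(nearest_case((+2, +1, +1)))   # -> "accept_refusal": the closest settled case
                                    #    is a competent refusal with low stakes
```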

With this kind of 'machine learning', a robot can extract useful knowledge even from ambiguous inputs (see go.nature.com/2r7nav). The approach would, in theory, help the robot to get better at ethical decision-making as it encounters more situations. But many fear that the advantages come at a price. The principles that emerge are not written into the computer code, so “you have no way of knowing why a program could come up with a particular rule telling it something is ethically 'correct' or not”, says Jerry Kaplan, who teaches artificial intelligence and ethics at Stanford University in California.

Getting around this problem calls for a different tactic, many engineers say; most are attempting it by creating programs with explicitly formulated rules, rather than asking a robot to derive its own. Last year, Winfield published the results of an experiment that asked: what is the simplest set of rules that would allow a machine to rescue someone in danger of falling into a hole? Most obviously, Winfield realized, the robot needed the ability to sense its surroundings — to recognize the position of the hole and the person, as well as its own position relative to both. But the robot also needed rules allowing it to anticipate the possible effects of its own actions.

Winfield's experiment used hockey-puck-sized robots moving on a surface. He designated some of them 'H-robots' to represent humans, and one — representing the ethical machine — the 'A-robot', named after Asimov. Winfield programmed the A-robot with a rule analogous to Asimov's first law: if it perceived an H-robot in danger of falling into a hole, it must move into the H-robot's path to save it.
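
The sort of rule Winfield describes can be sketched in a few lines of Python. The geometry and names below are invented for illustration, and this is not his code, but it captures the two ingredients he identifies: a crude look-ahead that predicts whether an H-robot's current course ends in the hole, and a first-law-style response that moves to intercept if it does.

```python
import math

HOLE = (5.0, 5.0)       # centre of the hole (hypothetical coordinates)
HOLE_RADIUS = 1.0

def heading_into_hole(pos, velocity, steps=20, dt=0.5):
    """Crude look-ahead: extrapolate the H-robot's motion and check
    whether it enters the hole within the prediction horizon."""
    x, y = pos
    vx, vy = velocity
    for _ in range(steps):
        x, y = x + vx * dt, y + vy * dt
        if math.dist((x, y), HOLE) < HOLE_RADIUS:
            return True
    return False

def choose_action(a_robot_pos, h_robots):
    """First-law-style rule: if any H-robot is predicted to fall in,
    head for a point between it and the hole; otherwise carry on."""
    for name, pos, vel in h_robots:
        if heading_into_hole(pos, vel):
            intercept = ((pos[0] + HOLE[0]) / 2, (pos[1] + HOLE[1]) / 2)
            return ("intercept", name, intercept)
    return ("wander", None, None)

# One H-robot is drifting toward the hole, another is safe.
print(choose_action((0.0, 0.0), [
    ("H1", (3.0, 3.0), (0.4, 0.4)),   # on course for the hole
    ("H2", (8.0, 1.0), (0.0, 0.1)),   # not in danger
]))
# -> ("intercept", "H1", (4.0, 4.0))
```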

Winfield put the robots through dozens of test runs, and found that the A-robot saved its charge each time. But then, to see what the allow-no-harm rule could accomplish in the face of a moral dilemma, he presented the A-robot with two H-robots wandering into danger simultaneously. Now how would it behave?

The results suggested that even a minimally ethical robot could be useful, says Winfield: the A-robot frequently managed to save one 'human', usually by moving first to the one that was slightly closer to it. Sometimes, by moving fast, it even managed to save both. But the experiment also showed the limits of minimalism. In almost half of the trials, the A-robot went into a helpless dither and let both 'humans' perish. To fix that would require extra rules about how to make such choices. If one H-robot were an adult and another were a child, for example, which should the A-robot save first? On matters of judgement like these, not even humans always agree. And often, as Kaplan points out, “we don't know how to codify what the explicit rules should be, and they are necessarily incomplete”.
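
One illustration of what such extra rules might look like, continuing the invented geometry above and not drawn from Winfield's experiment: a tie-breaking function that, among the endangered H-robots, prefers a higher 'priority' class (say, child over adult) and otherwise the nearest one. Writing such a rule down is easy; as Kaplan notes, defending the particular ordering it encodes is the hard part.

```python
import math

def pick_rescue_target(a_robot_pos, endangered):
    """Tie-breaking sketch: each endangered H-robot carries a priority
    (hypothetical: 2 = child, 1 = adult); rescue the highest priority,
    breaking ties by distance from the A-robot."""
    return min(
        endangered,
        key=lambda r: (-r["priority"], math.dist(a_robot_pos, r["pos"])),
    )

endangered = [
    {"name": "H1", "pos": (3.0, 3.0), "priority": 1},   # adult, nearer
    {"name": "H2", "pos": (6.0, 6.0), "priority": 2},   # child, farther
]
print(pick_rescue_target((0.0, 0.0), endangered)["name"])   # -> "H2"
```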

by Boer Deng, Nature |  Read more:
Image: Peter Adams and The Day the Earth Stood Still