Tuesday, June 3, 2014

Programming the Moral Robot


The U.S. Navy’s Office of Naval Research is funding an effort by scientists at Tufts, Brown, and RPI to develop military robots capable of moral reasoning:
The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.
That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:
  • Defining “human moral competence”
  • Boiling that competence down to a set of isolated “essential elements”
  • Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
  • Developing mathematical frameworks for explaining moral reasoning
  • Translating those frameworks into formal models of moral reasoning (a minimal sketch of what such a model might look like follows this list)
  • “Verifying” the outputs of those models as truthful
  • Embedding moral reasoning into computer algorithms
  • Using those algorithms to control a robot operating autonomously in the world
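To make the fourth through sixth challenges concrete, here is a minimal sketch, in Python, of a naive rule-based permissibility checker — one crude reading of what a "formal framework for moral reasoning" could look like. The Action fields and the two rules are hypothetical illustrations of mine, not anything from the ONR project, and the crudeness is the point:

    from dataclasses import dataclass

    @dataclass
    class Action:
        """A hypothetical, drastically simplified description of a candidate action."""
        description: str
        harms_noncombatant: bool = False
        is_ordered: bool = False
        prevents_greater_harm: bool = False

    # A toy "formal framework": forbidding rules are checked before permitting
    # ones, so prohibitions always override permissions.
    FORBIDDING_RULES = [
        ("never harm a noncombatant",
         lambda a: a.harms_noncombatant and not a.prevents_greater_harm),
    ]
    PERMITTING_RULES = [
        ("a lawful order makes an action permissible",
         lambda a: a.is_ordered),
    ]

    def permissible(action: Action) -> tuple[bool, str]:
        """Return a verdict and the rule that produced it."""
        for name, rule in FORBIDDING_RULES:
            if rule(action):
                return False, name
        for name, rule in PERMITTING_RULES:
            if rule(action):
                return True, name
        return False, "no permitting rule applies"

    print(permissible(Action("fire on a vehicle at a checkpoint",
                             harms_noncombatant=True, is_ordered=True)))
    # -> (False, 'never harm a noncombatant')

One can mechanically verify that a program like this never violates its enumerated rules. What no such check can establish is that the rules themselves amount to "human moral competence" — and that gap is where the hard philosophy lives.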
Barring the negotiation of a worldwide ban, which seems unlikely for all sorts of reasons, military robots that make life-or-death decisions about human beings are coming (if they’re not already here). So efforts to program morality into robots are themselves now morally necessary. It’s highly unlikely, though, that the efforts will be successful — unless, that is, we choose to cheat on the definition of success.
Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe.
We’re trying to reverse-engineer something that wasn’t engineered in the first place.

by Nicholas Carr, Rough Type | Read more:
Image: Frankenstein