The ONR-funded project will first isolate essential elements of human moral competence through theoretical and empirical research. Based on the results, the team will develop formal frameworks for modeling human-level moral reasoning that can be verified. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

That sounds straightforward. But hidden in those three short sentences are, so far as I can make out, at least eight philosophical challenges of extraordinary complexity:
- Defining “human moral competence”
- Boiling that competence down to a set of isolated “essential elements”
- Designing a program of “theoretical and empirical research” that would lead to the identification of those elements
- Developing mathematical frameworks for explaining moral reasoning
- Translating those frameworks into formal models of moral reasoning
- “Verifying” the outputs of those models as truthful
- Embedding moral reasoning into computer algorithms (a toy sketch of what this might look like follows this list)
- Using those algorithms to control a robot operating autonomously in the world
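To make the gap concrete, here is a minimal sketch of what the two algorithmic items might look like in code. It is emphatically not the RPI team's framework (Bringsjord's group works with far richer deontic logics, the logics of obligation and permission); the rules, action names, and the forbidden-beats-obligatory precedence below are all invented for illustration:

```python
# Toy sketch of rule-based "moral reasoning" -- NOT the RPI team's actual
# framework. Hypothetical rules and precedence policy, for illustration only.

from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    OBLIGATORY = "obligatory"
    PERMITTED = "permitted"
    FORBIDDEN = "forbidden"

@dataclass(frozen=True)
class Rule:
    action: str        # the action this rule governs
    status: Deontic    # its deontic status under this rule
    condition: str     # the context in which the rule applies

def evaluate(action: str, context: set[str], rules: list[Rule]) -> Deontic:
    """Return the deontic status of `action` in `context`.

    Forbidding rules win over obligating ones (a common but by no means
    settled design choice); unmentioned actions default to PERMITTED.
    """
    applicable = [r for r in rules if r.action == action and r.condition in context]
    if any(r.status is Deontic.FORBIDDEN for r in applicable):
        return Deontic.FORBIDDEN
    if any(r.status is Deontic.OBLIGATORY for r in applicable):
        return Deontic.OBLIGATORY
    return Deontic.PERMITTED

# Hypothetical battlefield-medic scenario, invented for illustration.
rules = [
    Rule("administer_morphine", Deontic.OBLIGATORY, "patient_in_severe_pain"),
    Rule("administer_morphine", Deontic.FORBIDDEN, "patient_allergic"),
]
context = {"patient_in_severe_pain", "patient_allergic"}
print(evaluate("administer_morphine", context, rules))  # Deontic.FORBIDDEN
```

Even this toy exposes where the philosophical weight falls: someone must decide the rule set, the conflict-resolution policy, and the default for unlisted actions, and "verifying" the output only checks the code against the rules, never the rules against morality.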
Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a postdoctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is severe. As Bringsjord puts it: "We're trying to reverse-engineer something that wasn't engineered in the first place."
by Nicholas Carr, Rough Type
Image: Frankenstein