For the French philosopher Paul Virilio, technological development is inextricable from the idea of the accident. As he put it, each accident is ‘an inverted miracle… When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution.’ Accidents mark the spots where anticipation met reality and came off worse. Yet each is also a spark of secular revelation: an opportunity to exceed the past, to make tomorrow’s worst better than today’s, and on occasion to promise ‘never again’.
This, at least, is the plan. ‘Never again’ is a tricky promise to keep: in the long term, it’s not a question of if things go wrong, but when. The ethical concerns of innovation thus tend to focus on harm’s minimisation and mitigation, not the absence of harm altogether. A double-hulled steamship poses less risk per passenger mile than a medieval trading vessel; a well-run factory is safer than a sweatshop. Plane crashes might cause many fatalities, but refinements such as a checklist, computer and co-pilot insure against all but the wildest of unforeseen circumstances.
Similar refinements are the subject of one of the liveliest debates in practical ethics today: the case for self-driving cars. Modern motor vehicles are safer and more reliable than they have ever been – yet more than 1 million people are killed in car accidents around the world each year, and more than 50 million are injured. Why? Largely because one perilous element in the mechanics of driving remains unperfected by progress: the human being.
Enter the cutting edge of machine mitigation. Back in August 2012, Google announced that it had achieved 300,000 accident-free miles testing its self-driving cars. The technology remains some distance from the marketplace, but the statistical case for automated vehicles is compelling. Even when they’re not causing injury, human-controlled cars are often driven inefficiently, ineptly, antisocially, or in other ways additive to the sum of human misery.
What, though, about more local contexts? If your vehicle encounters a busload of schoolchildren skidding across the road, do you want to live in a world where it automatically swerves, at a speed you could never have managed, saving them but putting your life at risk? Or would you prefer to live in a world where it doesn’t swerve but keeps you safe? Put like this, neither seems a tempting option. Yet designing self-sufficient systems demands that we resolve such questions. And these possibilities take us in turn towards one of the hoariest thought-experiments in modern philosophy: the trolley problem.
In its simplest form, coined in 1967 by the English philosopher Philippa Foot, the trolley problem imagines the driver of a runaway tram heading down a track. Five men are working on this track, and are all certain to die when the trolley reaches them. Fortunately, it’s possible for the driver to switch the trolley’s path to an alternative spur of track, saving all five. Unfortunately, one man is working on this spur, and will be killed if the switch is made.
In this original version, it’s not hard to say what should be done: the driver should make the switch and save five lives, even at the cost of one. If we were to replace the driver with a computer program, creating a fully automated trolley, we would also instruct it to pick the lesser evil: to kill fewer people in any similar situation. Indeed, we might actively prefer a program to be making such a decision, as it would always act according to this logic while a human might panic and do otherwise.
The trolley problem becomes more interesting in its plentiful variations. In a 1985 article, the MIT philosopher Judith Jarvis Thomson offered this: instead of driving a runaway trolley, you are watching it from a bridge as it hurtles towards five helpless people. Using a heavy weight is the only way to stop it and, as it happens, you are standing next to a large man whose bulk (unlike yours) is enough to do the job. Should you push this man off the bridge, killing him, in order to save those five lives?
A computer program similar to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.
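In computational terms, this rule is almost trivially simple to express. The sketch below is purely illustrative (the function and the action names are invented for this example, not drawn from any real vehicle's software): a strictly consequentialist program need only compare the expected death toll of each available action and pick the smallest.

```python
# Illustrative sketch of a purely consequentialist chooser (hypothetical,
# not any real vehicle's control logic): given the expected number of
# deaths for each available action, pick the action that kills the fewest.

def choose_action(expected_deaths: dict[str, int]) -> str:
    """Return the action with the lowest expected death toll."""
    return min(expected_deaths, key=expected_deaths.get)

# Foot's original trolley: stay on course (five die) or switch (one dies).
print(choose_action({"stay_on_course": 5, "switch_track": 1}))  # switch_track

# Thomson's bridge: do nothing (five die) or push the large man (one dies).
# To this program, the two cases are indistinguishable.
print(choose_action({"do_nothing": 5, "push_man": 1}))          # push_man
```

Because the function counts only outcomes, it genuinely sees no difference between Foot's switch and Thomson's push; whatever people feel separates the two cases has no representation in the code.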
When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.
The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.
by Tom Chatfield, Aeon | Read more:
Image: James Bridle