Consider the following hypothetical scenarios:
(1) A group of scientists working on the development of an HIV vaccine has accidentally created an air-transmissible variant of HIV. The scientists must decide whether to publish their discovery, knowing that it might be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons. Most members of the group think publication is too risky, but one disagrees. He mentions the discovery at a conference, and soon the details are widely known.
(2) A sports team is planning a surprise birthday party for its coach. One of the players decides that it would be more fun to tell the coach in advance about the planned event. Although the other players think it would be better to keep it a surprise, the unilateralist lets word slip about the preparations underway.
(3) Geoengineering techniques have developed to the point that it is possible for any of the world’s twenty most technologically advanced nations to substantially reduce the earth’s average temperature by emitting sulfate aerosols. Each of these nations separately considers whether to release such aerosols. Nineteen decide against, but one nation estimates that the benefits of lowering temperature would exceed the costs. It presses ahead with its sulfate aerosol program and the global average temperature drops by almost 1°.
In each of these cases, each of a number of agents is in a position to undertake an initiative, X. Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgment of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good. Each agent’s judgment is subject to error—some agents might overestimate the value of X, others might underestimate it. If the true value of X is negative, then the larger the number of agents, the greater the chances that at least one agent will overestimate X sufficiently to make the value of X seem positive. Thus, if agents act unilaterally, the initiative is too likely to be undertaken, and if such scenarios repeat, an excessively large number of initiatives are likely to be undertaken. We shall call this phenomenon the unilateralist’s curse.
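A minimal simulation makes the mechanism concrete. The numbers here are our own illustrative assumptions, not the paper's: the initiative's true value is -1, each agent's estimate is perturbed by independent unit-Gaussian noise, and an agent acts if and only if her own estimate comes out positive. A single agent errs only about 16% of the time, but with twenty agents the initiative is almost certain to be undertaken:

```python
import random

def p_undertaken(n_agents, true_value=-1.0, noise_sd=1.0, trials=50_000):
    """Estimate the probability that at least one agent acts unilaterally."""
    hits = 0
    for _ in range(trials):
        # Each agent acts iff her noisy estimate of the value is positive.
        if any(true_value + random.gauss(0, noise_sd) > 0
               for _ in range(n_agents)):
            hits += 1
    return hits / trials

for n in (1, 5, 20, 100):
    print(f"{n:>3} agents: P(initiative undertaken) ~ {p_undertaken(n):.3f}")
```

With these parameters the probability climbs from roughly 0.16 for a lone agent to about 0.97 for twenty: the group as a whole acts on its single most optimistic estimate.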
Though we have chosen to introduce the unilateralist’s curse with hypothetical examples, it is not merely a hypothetical problem. There are numerous historical examples, ranging from the mundane to the high-tech. Here is one:
Until the late 1970s, the mechanism of the hydrogen bomb was one of the world’s best kept scientific secrets: it is thought that only four governments were in possession of it, each having decided not to divulge it. But staff at the Progressive magazine believed that nuclear secrecy was fuelling the Cold War by enabling nuclear policy to be determined by a security elite without proper public scrutiny. They pieced together the mechanism of the bomb and published it in their magazine, arguing that the cost, in the form of aiding countries such as India, Pakistan and South Africa in acquiring hydrogen bombs, was outweighed by the benefits of undermining nuclear secrecy.
Another possible example from atomic physics had occurred several decades earlier: In 1939 the Polish nuclear physicist Joseph Rotblat noticed that the fission of uranium released more neutrons than were used to trigger it, and realized that this could produce a chain reaction leading to an explosion of unprecedented power. He assumed that other scientists elsewhere were doing similar experiments, and were thus in a position to release similar information, an assumption that turned out to be correct. Initially, Rotblat vowed to tell no one of his discovery, believing it to be a threat to mankind, and it is plausible that others did likewise, for similar reasons. However, when the war broke out, Rotblat decided that releasing the information was now in the public interest, given the likelihood that the Germans were working on an atomic bomb. He confided in colleagues and thus unilaterally triggered the United Kingdom’s atomic bomb project.
Rotblat was later to leave the Manhattan Project, coming to the view that he had overestimated the German nuclear threat and underestimated the likelihood that the US would use an atomic bomb offensively.
It is perhaps too soon to say whether these unilateral actions were suboptimal. But in other cases, it is clearer that unilateral action led to a suboptimal outcome: In the mid-nineteenth century there were virtually no wild rabbits in Australia, though many were in a position to introduce them. In 1859, Thomas Austin, a wealthy grazier, took it upon himself to do so. He had a dozen or two European rabbits imported from England and is reported to have said that “The introduction of a few rabbits could do little harm and might provide a touch of home, in addition to a spot of hunting.”
However, the rabbit population grew dramatically, and rabbits quickly became Australia’s most reviled pests, destroying large swathes of agricultural land.
The abovementioned examples were isolated incidents, but similar situations occur regularly in some spheres of activity, for instance, in the media:
Media outlets sometimes find themselves in the situation that journalists have access to information that is of public interest but could also harm specific individuals or institutions: the name of a not-yet-charged murder suspect (publication may bias legal proceedings), the news that a celebrity committed suicide (publication may risk copycat suicides), or sensitive government documents such as those leaked by Wikileaks and Edward Snowden (publication may endanger national security). It is enough that one outlet decides that the public interest outweighs the risk for the information to be released. Thus, the more journalists who have access to the information, the more likely it is to be published.
Unilateralist situations also regularly crop up with regard to new biotechnologies:
Gene drives, a technique for inducing altered genes to be inherited by nearly all offspring (rather than just 50%) of a genetically modified organism, have the potential to spread altered genes across a population, enabling ecological control (e.g. making mosquitoes incapable of spreading malaria or reducing herbicide resistance) but also potentially creating worrisome risks (e.g. to genetic diversity, or of sabotage). Here unilateral action could be taken both in releasing a particular altered organism into the environment and in releasing the information about how to produce it in the first place. There is scientific disagreement on the utility and risk of both. [ed. For a nightmarish vision of this last scenario see: The Windup Girl, by Paolo Bacigalupi.]
2. The Unilateralist’s Curse: A Model
The unilateralist’s curse is closely related to a problem in auction theory known as the winner’s curse: the phenomenon whereby the winning bid in an auction tends to exceed the actual value of the good sold. Each bidder makes an independent estimate, and the bidder with the highest estimate outbids the others. But if the average estimate is approximately accurate, then the winner has likely overpaid. The larger the number of bidders, the more likely it is that at least one of them has overestimated the value.
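The same max-of-estimates effect can be simulated for the auction case. Again the numbers are our illustrative assumptions: a good with true value 10, and bidders whose private estimates add independent Gaussian noise with standard deviation 2. The winner pays her own (highest) estimate:

```python
import random

def mean_overpayment(n_bidders, true_value=10.0, noise_sd=2.0, trials=50_000):
    """Average amount by which the winning (highest) bid exceeds true value."""
    total = 0.0
    for _ in range(trials):
        # The highest estimate wins, and the winner pays her own estimate.
        winning_bid = max(true_value + random.gauss(0, noise_sd)
                          for _ in range(n_bidders))
        total += winning_bid - true_value
    return total / trials

for n in (2, 5, 20, 100):
    print(f"{n:>3} bidders: mean overpayment ~ {mean_overpayment(n):.2f}")
```

Even though every bidder is unbiased on average, the expected overpayment grows steadily with the number of bidders.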
The unilateralist’s curse and the winner’s curse have the same basic structure. The difference between them lies in the goals of the agents and the nature of the decision. In the winner’s curse, each agent aims to make a purchase if and only if doing so will be valuable for her. In the unilateralist’s curse, the decision-maker chooses whether to undertake an initiative with an eye to the common good, that is, seeking to undertake the initiative if and only if the initiative contributes positively to the common good. (...)
There are five features of the unilateralist’s curse that need to be emphasized.
First, in cases where the curse arises, the risk of erroneously undertaking an initiative is not caused by self-interest. In the model, all agents act for the common good; they simply disagree about the contribution of the initiative to the common good.
Second, though the curse could be described as a group-level bias in favor of undertaking initiatives, it does not arise from biases in the individual estimates of the value that would result from undertaking the initiative. The model above assumes symmetric random errors in the estimates of the true value.
Third, there is a sense in which the unilateralist’s curse is the obverse of Condorcet’s jury theorem. The jury theorem states that the average estimate of a group of people with above 50% likelihood of guessing correctly and with uncorrelated errors will tend to be close to the correct value, and will tend to move closer to the true value as the size of the group increases. But what is also true, and relevant to the argument in this paper, is that the highest estimate will tend to be above the true value, and the expected overestimation of this highest estimate increases with the size of the group. In the cases we are interested in here, it is the highest estimate that will determine whether an initiative is undertaken, not the average estimate (a numerical sketch of this contrast follows this list of features).
Fourth, though we have chosen to illustrate the curse using initiatives that are (probably) irreversible, the problem can arise in other cases too. The problem becomes sharper if the initiative is irreversible, but even for actions that can be undone it remains in a milder form. Resources will be wasted on undoing erroneous initiatives, and if the bad consequences are not obvious, they might occur before the problem is noticed. There might even be a costly tug-of-war between disagreeing agents.
Fifth and finally, though we have thus far focused on cases where a number of agents can undertake an initiative and it matters only whether at least one of them does so, a similar problem arises when any one of a group of agents can spoil an initiative—for instance, where universal action is required to bring about an intended outcome. Consider the following example:
In Norse mythology, the goddess Hel of the underworld promised to release the universally beloved god Baldr if all objects, alive and dead, would shed a tear for him. All did, except the giantess Þökk. The god was forced to remain in the underworld.
Similar situations can arise when all the actors in a play must come together in order for a rehearsal to take place, when all members of a committee must attend a meeting in order for it to be quorate, or when all signatories to an international treaty must ratify it in order for it to come into effect. The United Nations Security Council frequently provides examples of unilateral spoiling. The five permanent members of the Council—currently China, France, Russia, the United Kingdom and the United States—each possesses the power to veto the adoption of any non-procedural resolution. In the early years of the Council, this veto power was frequently employed by the Soviet Union to block applications for new membership of the United Nations. More recently, it has been used by the United States to block resolutions criticizing Israel, and by Russia and China to block resolutions on the Syria conflict. While some of these vetoes presumably reflect differences in the national interests of the council members, others may reflect different estimations of the contribution that a resolution would make to the common good. Certainly, considerations relating to the common good are often invoked in their defence. For instance, the United States’ 2011 veto of a draft resolution condemning Israeli settlements in Palestinian territory was defended on the grounds that the resolution would be an impediment to peace talks.
These cases of unilateral spoiling or abstinence are formally equivalent to the original unilateralist curse, with merely the sign reversed. Since the problem in these cases is the result of unilateral abstinence, it seems appropriate to include them within the scope of the unilateralist’s curse. Thus, in what follows, we assume that the unilateralist’s curse can arise when each member of a group can unilaterally undertake or spoil an initiative (though for ease of exposition we sometimes mention only the former case).
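Here, to close this list of features, is the numerical sketch of the Condorcet contrast promised under the third point. It is our illustration, assuming zero-mean, uncorrelated unit-normal errors: the group's average error concentrates around zero as the group grows, while the expected largest error keeps growing (roughly like sqrt(2 ln n) for large n):

```python
import random

def mean_vs_max_error(n, trials=20_000):
    """Empirical expectations of the group's average error and largest error."""
    avg_total = max_total = 0.0
    for _ in range(trials):
        errors = [random.gauss(0, 1) for _ in range(n)]
        avg_total += sum(errors) / n   # jury theorem: concentrates near 0
        max_total += max(errors)       # grows roughly like sqrt(2 ln n)
    return avg_total / trials, max_total / trials

for n in (5, 25, 125):
    avg, top = mean_vs_max_error(n)
    print(f"n={n:>3}: E[mean error] ~ {avg:+.3f}, E[max error] ~ {top:.3f}")
```

It is this slowly growing maximum, not the well-behaved average, that drives the curse.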
3. Lifting the Curse
Let a unilateralist situation be one in which each member of a group of agents can undertake or spoil an initiative regardless of the cooperation or opposition of other members of the group. We will say that a policy would lift the unilateralist’s curse if universal adherence to it by all agents in unilateralist situations should be expected (ex ante) to eliminate any surfeit or deficit of initiatives that the unilateralist’s curse might otherwise produce.
The Principle of Conformity
When acting out of concern for the common good in a unilateralist situation, reduce your likelihood of unilaterally undertaking or spoiling the initiative to a level that ex ante would be expected to lift the curse.

In the following subsections we will explore various ways in which one might bring oneself into compliance with this principle. These can be organized around three models: collective deliberation, epistemic deference, and moral deference. The three models are applicable in somewhat different circumstances, and their suitability might depend on the type of agents involved.
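As a purely illustrative sketch of what "reducing your likelihood" could look like quantitatively (a heuristic of ours, not a method the authors propose): suppose each agent, knowing she is one of n potential unilateralists, raises her threshold for acting to offset the expected size of the largest error among n estimates. Reusing the earlier toy model (true value -1, unit-Gaussian noise):

```python
import math
import random

def p_undertaken(n, threshold, true_value=-1.0, noise_sd=1.0, trials=50_000):
    """Probability that at least one of n agents' estimates exceeds threshold."""
    hits = 0
    for _ in range(trials):
        if any(true_value + random.gauss(0, noise_sd) > threshold
               for _ in range(n)):
            hits += 1
    return hits / trials

n = 20
# Naive unilateralists act whenever their own estimate looks positive.
naive = p_undertaken(n, threshold=0.0)
# Heuristic (ours): raise the bar by roughly the expected maximum of n
# unit-normal errors, ~sqrt(2 ln n) -- an asymptotic approximation.
adjusted = p_undertaken(n, threshold=math.sqrt(2 * math.log(n)))
print(f"P(undertaken) naive: {naive:.3f}, adjusted: {adjusted:.3f}")
```

This particular heuristic in fact over-corrects (the adjusted group acts less often than a single well-calibrated decision-maker would); the point is only that coordinated restraint, rather than independent action on one's own estimate, is what changes the group-level error rate.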
It should be noted that, though some of the methods discussed below do not require agents to be aware of the nature of the situation, most hinge on agents recognizing that they are in a unilateralist situation. However, this is not to say that agents must be able to identify the other parties to the unilateralist situation: this is necessary for some but not all of our proposed solutions.
by Nick Bostrom, Thomas Douglas & Anders Sandberg, Social Epistemology | Read more (pdf):
[ed. In other words (as I understand it): one bad apple ruins the bunch, or put another way, outliers have an outsize influence on outcomes (as we see in many hung juries). Not a good prospect for controlling existential threats such as AI, biotechnology, geoengineering, etc. Also, perhaps a good reason for refining prediction markets in decision-making. See also: Why Worry? (Good Optics).]