What we haven’t pulled out yet is a black ball: a technology that invariably destroys the civilisation that invents it. That’s not because we’ve been particularly careful or wise when it comes to innovation. We’ve just been lucky. But what if there’s a black ball somewhere in the urn? If scientific and technological research continues, we’ll eventually pull it out, and we won’t be able to put it back in. We can invent but we can’t un-invent. Our strategy seems to be to hope that there is no black ball.
Thankfully for us, humans’ most destructive technology to date – nuclear weapons – is exceedingly difficult to master. But one way to think about the possible effects of a black ball is to consider what would happen if nuclear reactions were easier. In 1933, the physicist Leo Szilard got the idea of a nuclear chain reaction. Later investigations showed that making an atomic weapon would require several kilos of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, imagine a counterfactual history in which Szilard realised that a nuclear bomb could be made in some easy way – over the kitchen sink, say, using a piece of glass, a metal object and a battery.
Szilard would have faced a dilemma. If he didn’t tell anyone about his discovery, he would be unable to stop other scientists from stumbling upon it. But if he did reveal his discovery, he would guarantee the further spread of dangerous knowledge. Imagine that Szilard confided in his friend Albert Einstein, and they decided to write a letter to the president of the United States, Franklin D Roosevelt, whose administration then banned all research into nuclear physics outside of high-security government facilities. Speculation would swirl around the reason for the heavy-handed measures. Groups of scientists would wonder about the secret danger; some of them would figure it out. Careless or disgruntled employees at government labs would let slip information, and spies would carry the secret to foreign capitals. Even if by some miracle the secret never leaked, scientists in other countries would discover it on their own.
Or perhaps the US government would move to eliminate all glass, metal and sources of electrical current outside of a few highly guarded military depots? Such extreme measures would meet with stiff opposition. However, after mushroom clouds had risen over a few cities, public opinion would shift. Glass, batteries and metal objects could be seized, and their production banned; yet pieces would remain scattered across the landscape, and eventually they would find their way into the hands of nihilists, extortionists or people who just want ‘to see what would happen’ if they set off a nuclear device. In the end, many places would be destroyed or abandoned. Possession of the proscribed materials would have to be harshly punished. Communities would be subject to strict surveillance: informant networks, security raids, indefinite detentions. We would be left to try to somehow reconstitute civilisation without electricity and other essentials that are deemed too risky.
That’s the optimistic scenario. In a more pessimistic scenario, law and order would break down entirely, and societies would split into factions waging nuclear wars. The disintegration would end only when the world had been ruined to the point where it was impossible to make any more bombs. Even then, the dangerous insight would be remembered and passed down. If civilisation arose from the ashes, the knowledge would lie in wait, ready to pounce once people started again to produce glass, electrical currents and metal. And, even if the knowledge were forgotten, it would be rediscovered when nuclear physics research resumed.
In short: we’re lucky that making nuclear weapons turned out to be hard. We pulled out a grey ball that time. Yet with each act of invention, humanity reaches anew into the urn.
Suppose that the urn of creativity contains at least one black ball. We call this ‘the vulnerable world hypothesis’. The intuitive idea is that there’s some level of technology at which civilisation almost certainly gets destroyed, unless quite extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented. Our primary purpose isn’t to argue that the hypothesis is true – we regard that as an open question, though it would seem unreasonable, given the available evidence, to be confident that it’s false. Instead, the point is that the hypothesis is useful in helping us to bring to the surface important considerations about humanity’s macrostrategic situation.
The above scenario – call it ‘easy nukes’ – represents one kind of potential black ball, where it becomes easy for individuals or small groups to cause mass destruction. Given the diversity of human character and circumstance, for any imprudent, immoral or self-defeating action, there will always be some fraction of humans (‘the apocalyptic residual’) who would choose to take that action – whether motivated by ideological hatred, nihilistic destructiveness or revenge for perceived injustices, as part of some extortion plot, or because of delusions. The existence of this apocalyptic residual means that any sufficiently easy tool of mass destruction is virtually certain to lead to the devastation of civilisation. (...)
It would be bad news if the vulnerable world hypothesis were correct. In principle, however, there are several responses that could save civilisation from a technological black ball. One would be to stop pulling balls from the urn altogether, ceasing all technological development. That’s hardly realistic though; and, even if it could be done, it would be extremely costly, to the point of constituting a catastrophe in its own right.
Another theoretically possible response would be to fundamentally reengineer human nature to eliminate the apocalyptic residual; we might also do away with any tendency among powerful actors to risk civilisational devastation even when vital national security interests are served by doing so, as well as any tendency among the masses to prioritise personal convenience when this contributes an imperceptible amount of harm to some important global good. Such global preference reengineering seems very difficult to pull off, and it would come with risks of its own. It’s also worth noting that partial success in such preference reengineering wouldn’t necessarily bring a proportional reduction in civilisational vulnerability. For example, reducing the apocalyptic residual by 50 per cent wouldn’t cut the risks from the ‘easy nukes’ scenarios in half, since in many cases any lone individual could single-handedly devastate civilisation. We could only significantly reduce the risk, then, if the apocalyptic residual were virtually entirely eliminated worldwide.
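To see why, here is a minimal back-of-the-envelope model (our illustration, with assumed numbers, not part of the authors’ argument): suppose the apocalyptic residual contains N people, each of whom would independently build and use an easy nuke with probability p. The chance that at least one of them does so is

\[
P(\text{devastation}) = 1 - (1 - p)^{N}.
\]

With the assumed values p = 0.01 and N = 1,000, this gives \(1 - 0.99^{1000} \approx 0.99996\); halving the residual to N = 500 lowers it only to \(1 - 0.99^{500} \approx 0.993\). The risk stays close to certainty until N is driven almost to zero, which is the sense in which partial preference reengineering buys little safety.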
That leaves two options for making the world safe against the possibility that the urn contains a black ball: one, extremely reliable policing that could prevent any individual or small group from carrying out highly dangerous illegal actions; and two, strong global governance that could solve the most serious collective action problems, and ensure robust cooperation between states – even when they have strong incentives to defect from agreements, or refuse to sign on in the first place. The governance gaps addressed by these measures are the two Achilles’ heels of the contemporary world order. So long as they remain unprotected, civilisation remains vulnerable to a technological black ball. Unless and until such a discovery emerges from the urn, however, it’s easy to overlook how exposed we are.
Let’s consider what would be required to protect against these vulnerabilities.
by Nick Bostrom and Matthew van der Merwe, Aeon
Image: Jonas Bendiksen/Magnum