Thursday, December 8, 2016

Who Would Destroy the World?

Consider a seemingly simple question: If the means were available, who exactly would destroy the world? There is surprisingly little discussion of this question within the nascent field of existential risk studies. But the issue of “agential risks” is critical: What sort of agent would either intentionally or accidentally cause an existential catastrophe?

An existential risk is any future event that would either permanently compromise our species’ potential for advancement or cause our extinction. Oxford philosopher Nick Bostrom coined the term in 2002, but the concept dates back to the end of World War II, when self-annihilation became a real possibility for the first time in human history.

In the past 15 years, the concept of existential risk has received growing attention from scholars in a wide range of fields. And for good reason: An existential catastrophe could only happen once in our history. This raises the stakes immensely, and it means that a reactive, trial-and-error approach won't work, because there would be no opportunity to learn from the mistake. Humanity must anticipate such risks in order to avoid them.

So far, existential risk studies has focused mostly on the technologies—such as nuclear weapons and genetic engineering—that future agents could use to bring about a catastrophe. Scholars have said little about the types of agents who might actually deploy these technologies, either on purpose or by accident. This is a problematic gap in the literature, because agents matter just as much as, or perhaps even more than, the technologies themselves: the kinds of agents in the world could contribute more to overall risk than the sheer number of weapons of total destruction available to them.

Agents matter. To illustrate this point, consider the “two worlds” thought experiment: In world A, one finds many different kinds of weapons that are powerful enough to destroy the world, and virtually every citizen has access to them. Compare this with world B, in which there exists only a single weapon, and it is accessible to only one-fourth of the population. Which world would you rather live in? If you focus only on the technology, then world B is clearly safer.

Imagine, though, that world A is populated by peaceniks, while world B is populated by psychopaths. Now which world would you rather live in? Even though world A has more weapons, and greater access to them, world B is a riskier place to live. The moral is this: To accurately assess the overall probability of risk, as some scholars have attempted to do, it’s important to consider both sides of the agent-tool coupling.

Studying agents might seem somewhat trivial, especially to those with a background in science and technology. Humans haven't changed much in the past 30,000 years, and we're unlikely to evolve new traits in the coming decades, whereas the technologies available to us have changed dramatically, which might seem to make the latter far more important to study. Nevertheless, studying the human side of the equation can suggest new ways to mitigate risk.

Agents of terror. “Terrorists,” “rogue states,” “psychopaths,” “malicious actors,” and so on—these are frequently lumped together by existential risk scholars without further elaboration. When one takes a closer look, though, one discovers important and sometimes surprising differences between various types of agents. For example, most terrorists would be unlikely to intentionally cause an existential catastrophe. Why? Because the goals of most terrorists—who are typically motivated by nationalist, separatist, anarchist, Marxist, or other political ideologies—are predicated on the continued existence of the human species.

The Irish Republican Army, for example, would obstruct its own goal of reclaiming Northern Ireland if it were to dismantle global society or annihilate humanity. Similarly, if the Islamic State were to use weapons of total destruction against its enemies, doing so would interfere with its vision for Muslim control of the Middle East.

The same could be said about most states. For example, North Korea's leaders may harbor fantasies of world domination, and the regime could decide that launching nuclear missiles at the West would help achieve this goal. But insofar as North Korea is a rational actor, it is unlikely to initiate an all-out nuclear exchange, because doing so could produce a nuclear winter and global agricultural failures that would undermine the regime's ability to maintain control over large territories.

On the other hand, some types of agents might pose a danger only once world-destroying technologies become widely available. Consider the case of negative utilitarians. Individuals who subscribe to this view believe that the ultimate aim of moral conduct is to minimize the total amount of suffering in the universe. As the Scottish philosopher R. N. Smart pointed out in a 1958 paper, the problem with this view is that it seems to call for the destruction of humanity. After all, if there are no humans around to suffer, there can be no human suffering. Negative utilitarianism—or at least some versions of it—suggests that the most ethical actor would be a “world-exploder.”

As powerful weapons become increasingly accessible to small groups and individuals, negative utilitarians could emerge as a threat to human survival. Other types of agents that could become major hazards in the future are apocalyptic terrorists (fanatics who believe that the world must be destroyed to be saved), future ecoterrorists (in particular, those who see human extinction as necessary to save the biosphere), idiosyncratic agents (individuals, such as school shooters, who simply want to kill as many people as possible before dying), and machine superintelligence.

Superintelligence has received considerable attention in the past few years, but it’s important for scholars and governments alike to recognize that there are human agents who could also bring about a catastrophe. Scholars should not succumb to the “hardware bias” that has so far led them to focus exclusively on superintelligent machines.

by Phil Torres, Bulletin of the Atomic Scientists | Read more:
Image: Dr. Strangelove