Monday, October 28, 2013

Omens

As the oldest university in the English-speaking world, Oxford is a strange choice to host a futuristic think tank, a salon where the concepts of science fiction are debated in earnest. The Future of Humanity Institute seems like a better fit for Silicon Valley or Shanghai. During the week I spent with Bostrom, we walked most of Oxford’s small cobblestone grid. On foot, the city unfolds as a blur of yellow sandstone, topped by grey skies and gothic spires, some of which have stood for nearly 1,000 years. There are occasional splashes of green, open gates that peek into lush courtyards, but otherwise the aesthetic is gloomy and ancient. When I asked Bostrom about Oxford’s unique ambience, he shrugged, as though habit had inured him to it. But he did once tell me that the city’s gloom is perfect for thinking dark thoughts over hot tea.

There are good reasons for any species to think darkly of its own extinction. Ninety-nine percent of the species that have lived on Earth have gone extinct, including more than five tool-using hominids. A quick glance at the fossil record could frighten you into thinking that Earth is growing more dangerous with time. If you carve the planet's history into nine ages, each spanning five hundred million years, only in the ninth do you find mass extinctions, events that kill off more than two thirds of all species. But this is deceptive. Earth has always had her hazards; it's just that for us to see them, she had to fill her fossil beds with variety, so that we could detect discontinuities across time. The tree of life had to fill out before it could be pruned. (...)

Bostrom isn’t too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place. Every 50 years or so, one of the Milky Way’s stars explodes into a supernova, its detonation the latest gong note in the drumbeat of deep time. If one of our local stars were to go supernova, it could irradiate Earth, or blow away its thin, life-sustaining atmosphere. Worse still, a passing star could swing too close to the Sun, and slingshot its planets into frigid, interstellar space. Lucky for us, the Sun is well-placed to avoid these catastrophes. Its orbit threads through the sparse galactic suburbs, far from the dense core of the Milky Way, where the air is thick with the shrapnel of exploding stars. None of our neighbours look likely to blow before the Sun swallows Earth in four billion years. And, so far as we can tell, no planet-stripping stars lie in our orbital path. Our solar system sits in an enviable bubble of space and time.

But as the dinosaurs discovered, our solar system has its own dangers, like the giant space rocks that spin all around it, splitting off moons and scarring surfaces with craters. In her youth, Earth suffered a series of brutal bombardments and celestial collisions, but she is safer now. There are far fewer asteroids flying through her orbit than in epochs past. And she has sprouted a radical new form of planetary protection, a species of night watchmen that track asteroids with telescopes.

‘If we detect a large object that’s on a collision course with Earth, we would likely launch an all-out Manhattan Project to deflect it,’ Bostrom told me. Nuclear weapons were once our asteroid-deflecting technology of choice, but not anymore. A nuclear detonation might scatter an asteroid into a radioactive rain of gravel, a shotgun blast headed straight for Earth. Fortunately, there are other ideas afoot. Some would orbit dangerous asteroids with small satellites, in order to drag them into friendlier trajectories. Others would paint asteroids white, so the Sun’s photons bounce off them more forcefully, subtly pushing them off course. Who knows what clever tricks of celestial mechanics would emerge if Earth were truly in peril. (...)

The risks that keep Bostrom up at night are those for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world.

Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. A species-destroying exchange of fissile weapons looks less likely now that the Cold War has ended, and arsenals have shrunk. There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. The Kuwait oil field fires blazed for ten months straight, roaring through 6 million barrels of oil a day, but little smoke reached the stratosphere. A global nuclear war would likely leave some decimated version of humanity in its wake; perhaps one with deeply rooted cultural taboos concerning war and weaponry.

Such taboos would be useful, for there is another, more ancient technology of war that menaces humanity. Humans have a long history of using biology’s deadlier innovations for ill ends; we have proved especially adept at the weaponisation of microbes. In antiquity, we sent plagues into cities by catapulting corpses over fortified walls. Now we have more cunning Trojan horses. We have even stashed smallpox in blankets, disguising disease as a gift of good will. Still, these are crude techniques, primitive attempts to loose lethal organisms on our fellow man. In 1993, the death cult that gassed Tokyo’s subways flew to the African rainforest in order to acquire the Ebola virus, a tool it hoped to use to usher in Armageddon. In the future, even small, unsophisticated groups will be able to enhance pathogens, or invent them wholesale. Even something like corporate sabotage could generate catastrophes that unfold in unpredictable ways. Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food.

These risks are easy to imagine. We can make them out on the horizon, because they stem from foreseeable extensions of current technology. But surely other, more mysterious risks await us in the epochs to come. After all, no 18th-century prognosticator could have imagined nuclear doomsday. Bostrom’s basic intellectual project is to reach into the epistemological fog of the future, to feel around for potential threats. It’s a project that will be with us for a long time, until — if — we reach technological maturity, by inventing and surviving all existentially dangerous technologies.

There is one such technology that Bostrom has been thinking about a lot lately. Early last year, he began assembling notes for a new book, a survey of near-term existential risks. After a few months of writing, he noticed one chapter had grown large enough to become its own book. ‘I had a chunk of the manuscript in early draft form, and it had this chapter on risks arising from research into artificial intelligence,’ he told me. ‘As time went on, that chapter grew, so I lifted it over into a different document and began there instead.’

by Ross Andersen, Aeon
Image: Andy Sansom