Have you heard? Someday we will live in a perfect society ruled by an omnipotent artificial intelligence, provably and utterly beneficial to mankind.
That is, if we don’t all die once the machines gain consciousness, take over, and kill us.
Wait, actually, they are going to take some of us with them, and we will transcend to another plane of existence. Or at least clones of us will. Or at least clones of us that are not being perpetually tortured for our current sins.
These are all outcomes that futurists of various stripes currently believe. A futurist is a person who spends a serious amount of time—either paid or unpaid—forming theories about society’s future. And although it can be fun to mock them for their silly-sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lie money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.
Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs who want to own flying cars and live forever, or gig economy workers who want to someday have affordable health care.
With that in mind, let me introduce two dimensions of futurism, represented by axes. That is to say, two ways to measure and plot futurists on a graph, which we can then examine more closely.
The first measurement of a futurist is the extent to which he or she believes in a singularity. Broadly speaking a singularity is a moment when technology improves so much, at such an exponentially increasing rate, that it achieves a fundamental and meaningful shift in existence, transcending its original purpose and even nature. In many singularity myths the computer becomes self-aware and intelligent, possibly in a good way but sometimes in a destructive or even vindictive way. In others humans are connected to machines and together become something new. The larger point is that some futurists believe fervently in a singularity, while others do not.
On our second axis, let’s measure the extent to which a given futurist is worried when they theorize about the future. Are they excited or scared? Cautious or jubilant? The choices futurists make are often driven by their emotions. Utopians generally focus on all the good that technology can do; they find hope in cool gadgets and the newest AI helpers. Dystopians are by definition focused on the harm; they consequently think about different aspects of technology altogether. The kinds of technologies these two groups consider are nearly disjoint, and even where they do intersect, the futurists’ takes are diametrically opposed.
So, now that we have our two axes, we can build quadrants and consider the group of futurists in each one. Their differences shed light on what their values are, who their audiences are, and what product they are peddling.
Q1.
First up: the people who believe in the singularity and are not worried about it. They welcome it with open arms in the name of progress. Examples of people in this quadrant are Ray Kurzweil, the inventor and author of The Age of Spiritual Machines (1999); the libertarians in the Seasteading movement who want to create autonomous floating cities outside of any government jurisdiction; and the people who are trying to augment intelligence and live forever.
These futurists enthusiastically believe in Moore’s Law—the observation by Gordon Moore, a co-founder of Intel, that the number of transistors in a circuit doubles approximately every two years—and in exponential growth of everything in sight. Singularity University, co-founded by Kurzweil, has no fewer than twelve mentions of the word “exponential” on its website. Its motto is “Be Exponential.”
Generally speaking these futurists are hobbyists—they have the time for these theories because, in terms of wealth, they are already in the top 0.1 percent. They think of the future in large part as a way to invest their money and become even wealthier. They once worked at or still own Silicon Valley companies, venture capital firms, or hedge funds, and they learned to think of themselves as deeply clever—possibly even wise. They wax eloquent about meritocracy over expensive wine or their drug of choice (micro-dosing, anyone?).
With enormous riches and very few worldly concerns, these futurists focus their endeavors on the only things that could actually threaten them: death and disease.
They talk publicly about augmenting intelligence through robotic assistance or better quality of life through medical breakthroughs, but privately they are interested in technical fixes to physical problems and are impatient with the medical establishment for being too cautious and insufficiently innovative. They invest heavily in cryonics, dubious mind–computer interface technology, medical strategies for living forever (here’s looking at you, Sergey Brin and Larry Page), and possibly even the blood of young people.
These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-aged white men, they have never been oppressed. For them the worst-case scenario is that they live their future lives as uploaded software in the cloud, a place where they can control the excellent virtual reality graphics. (If this sounds like a science fiction fantasy for sex-starved teenagers, don’t be surprised. They got most of these ideas—as sex-starved teenagers—from writers such as Robert Heinlein and Ayn Rand.)
The problem here, of course, is the “I win” blind spot—the belief that if this system works for me, then it must be a good system. These futurists think that racism, sexism, classism, and politics are problems to be solved by technology. If they had their way, they would be asked to program the next government. They would keep it proprietary, of course, to keep the hoi polloi from gaming the system.
And herein lies the problem: whether it is the nature of existence in the super-rich bubble, or something distinctly modern and computer-oriented, futurism of this flavor is inherently elitist, genius-obsessed, and dismissive of larger society.
Q2.
Next: people who believe in a singularity but are worried about the future. They do not see the singularity as a necessarily positive force. These are the men—majority men, although more women than in the previous group—who read dystopian science fiction in their youth and think about all the things that could go wrong once the machines become self-aware, which has a small (but positive!) probability of happening. They spend time trying to estimate that probability.
A community center for these folks is the website lesswrong.com, which was created by Eliezer Yudkowsky, an artificial intelligence researcher. Yudkowsky thinks people should use rationality and avoid biases in order to lead better lives. It was a good idea, as far as practical philosophies go, but eventually he and his followers got caught up in increasingly abstract probability calculations using Bayes’ Theorem and bizarre thought experiments.
My favorite is called Roko’s basilisk, the thought experiment in which a future superintelligent and powerful AI tortures anyone who imagined its existence but didn’t go to the trouble of creating it. In other words it is a vindictive hypothetical being that puts you in danger as soon as you hear the thought experiment. Roko’s basilisk was seen by its inventor, Roko, as an incentive to donate to the cause of Friendly AI to “thereby increase the chances of a positive singularity.” But discussion of it soon so dominated Yudkowsky’s site that he banned it—a move that, not surprisingly, created more interest in the discussion.
A different but related strain of AI futurism comes from the Effective Altruism movement, which has been advocated for in this journal by philosopher Peter Singer. Like Yudkowsky, Effective Altruists started out well. Their basic argument was that we should care about human suffering outside our borders, not just in our close proximity, and that we should take personal responsibility for optimizing our money to improve the world.
You can go pretty far with that reasoning—and to their credit, Effective Altruists have made enormous international charitable contributions—but obsessing over the concept of effectiveness is limited by the fact that suffering, like community good, is hard to quantify.
Instead of acknowledging the limits of hard numbers, however, the group has more recently spun off into a parody of itself. Some factions believe that instead of worrying about current suffering, they should worry about “existential risks”: unlikely futuristic events that could cause enormous suffering and whose importance is argued through computations strewn with powers of ten. A good example comes from Nick Bostrom’s Future of Humanity Institute website: “. . . we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”
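For readers who want to see the power-of-ten flavor of that claim, here is a back-of-the-envelope sketch; the figure of 10^40 potential future lives is an assumption chosen purely for illustration, not a number taken from the quote:

$$
\underbrace{10^{-9}\cdot 10^{-9}\cdot 10^{-2}}_{\text{billionth of a billionth of a percent}}
\times
\underbrace{10^{40}}_{\text{assumed future lives}}
\;=\; 10^{20}
\;=\;
\underbrace{10^{11}}_{\text{a hundred billion}}\times\underbrace{10^{9}}_{\text{a billion lives}}
$$

With a big enough assumed future, in other words, any microscopic probability can be made to outweigh every life now living.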
As a group these futurists are fundamentally sympathetic figures but woefully simplistic regarding current human problems. If they are not worried about existential risk, they are worried about the suffering of plankton, or perhaps every particle in the universe.
I will shove Elon Musk into this Q2 group, even though he is not a perfect fit. Being an enormously rich and powerful entrepreneur, he probably belongs in the first group, but he sometimes shows up at Effective Altruism events, and he has made noise recently about the computers getting mean and launching us into World War III. The cynics among us might suspect this is mostly a ploy to sell his services as a mediator between the superintelligent AI and humans when the time inevitably comes. After all Musk always has something to sell, including a ticket to Mars, Earth’s backup planet.
by Cathy O'Neil, Boston Review | Read more:
Image: Maurizio Pesce
[ed. From the Boston Review's series: Global Dystopias. See also: Monopoly Men, Schlesinger and the Decline of Liberalism.]