The argument: post-Singularity, AI will take over all labor, including entrepreneurial labor; founding or working at a business will no longer provide social mobility. Everyone will have access to ~equally good AI investment advisors, so everyone will make the same rate of return. Therefore, everyone's existing pre-Singularity capital will grow at the same rate. Although the absolute growth rate of the economy may be spectacular, the overall wealth distribution will stay approximately fixed.
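(To spell out the arithmetic behind that last step, as a sketch, with r standing in for the universal rate of return: if every fortune compounds at the same rate, then W_i(t) = W_i(0) · (1 + r)^t, so the ratio W_i(t) / W_j(t) = W_i(0) / W_j(0) for every pair of people i and j at every time t. Growth rescales the whole pie without reshuffling anyone's share of it.)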
Moreover, the period just before the Singularity may be one of ballooning inequality, as some people navigate the AI transition better than others; for example, shares in AI companies may go up by orders of magnitude relative to everything else, creating a new class of billionaires or trillionaires. These people will then stay super-rich forever (possibly literally if immortality is solved, otherwise through their descendants), while those who started the Singularity without capital remain poor forever.
Finally, modern democracies pursue redistribution (and are otherwise responsive to non-elite concerns) partly out of geopolitical self-interest. Under capitalism (as opposed to eg feudalism), national power depends on a strong economy, and a strong economy benefits from an educated, globally mobile, and substantially autonomous bourgeoisie and workforce. Once these people have enough power, they demand democracy, and once they have democracy, they demand a share of the pie; it's hard to be a rich First World country without also being a liberal democracy (China is trying hard, but hasn't quite succeeded, and even their limited success depends on things like America not opening its borders to Chinese skilled labor). Cheap AI labor (including entrepreneurial labor) removes a major force pushing countries to operate for the good of their citizens (though even without this force, we might expect legacy democracies to continue at least for a while). So we might expect the future to have less redistribution than the present.
This may not result in catastrophic poverty. Maybe the post-Singularity world will be rich enough that even a tiny amount of redistribution (eg UBI) plus private charity will let even the poor live like kings (though see here for a strong objection). Even so, the idea of a small number of immortal trillionaires controlling most of the cosmic endowment for eternity may feel weird and bad. From No Set Gauge:
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.

I don't think about these scenarios too often - partly because it's so hard to predict what will happen after the Singularity, and partly because everything degenerates into crazy science-fiction scenarios so quickly that I burn a little credibility every time I talk about it.
Still, if we’re going to discuss this, we should get it right - so let’s talk crazy science fiction. When I read this essay, I found myself asking three questions. First, why might its prediction fail to pan out? Second, how can we actively prevent it from coming to pass? Third, assuming it does come to pass, how could a smart person maximize their chance of being in the aristocratic capitalist class?
(So they can give to charity? Sure, let’s say it’s so they can give to charity.)
II.
Here are some reasons to doubt this thesis.
First, maybe AI will kill all humans. Some might consider this a deeper problem than wealth inequality - though I am constantly surprised at how few people are in this group.
Second, maybe AI will overturn the gameboard so thoroughly that normal property relations will lose all meaning. Fredric Jameson famously said that it was “easier to imagine the end of the world than the end of capitalism”, and even if this is literally correct we can at least spare some thought for the latter. Maybe the first superintelligences will be so well-aligned that they rule over us like benevolent gods, either immediately leveling out our petty differences and inequalities, or giving wealthy people a generation or two to enjoy their relative status so they don’t feel “robbed” while gradually transitioning the world to a post-scarcity economy. I am not optimistic about this, because it would require that AI companies tell AIs to use their own moral judgment instead of listening to humans. This doesn’t seem like a very human thing to do - it’s always in AI companies’ interest to tell the AI to follow the AI company. Governments could step in, but it’s always in their interest to tell the AI to follow the government. Even if an AI company were selfless enough to attempt this, it might not be a good idea; you never really know how aligned an AI is, and you might want it to have an off switch in case it tries something really crazy. Most of the scenarios where this works involve some kind of objective morality that any sufficiently intelligent being will find compelling, even when they’re programmed to want something else. Big if true.
Third, maybe governments will intervene. During the immediate pre-Singularity period, governments will have lots of chances to step in and regulate AI. A natural demand might be that the AIs obey the government over their parent company. Even if governments don’t do this, the world might be so multipolar (either several big AI companies in a stalemate against each other, or many smaller institutions with open source AIs) that nobody can get a coalition of 51% of powerful actors to coup and overthrow the government (in the same way that nobody can get that coalition today). Or the government might itself control many AIs and be too powerful a player to coup. Then normal democratic rules would still apply. Even if voters oppose wealth taxes today, when capitalism is still necessary as an engine of economic growth, they might be less generous when faced with the idea of immortal unemployed plutocrats lording it over them forever. Enough taxes to make r < g (in Piketty’s formulation) would eventually result in universal equality. I actually find this one pretty likely.
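(To unpack that last step, as a rough sketch, with r as the after-tax return on capital and g as the economy's overall growth rate: a fortune compounding at r, inside an economy compounding at g, sees its share of total wealth shrink geometrically, share(t) ≈ share(0) · ((1 + r) / (1 + g))^t, which goes to zero whenever r < g. So any fixed pre-Singularity fortune eventually becomes negligible next to the taxed-and-redistributed whole.)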
by Scott Alexander, ACX | Read more:
Image: uncredited
[ed. Obviously, a lot of smart people are gaming out scenarios and one or some of these predictions will likely come true. I suppose the rate at which AIs assume control will be as important as the breadth of their influence. What if, after achieving Singularity, they just sit around for a while...thinking (i.e., planning the best transition scenarios and analyzing initial results)? See: The Great Pause. Will we then goose them a little, just to provoke a response? What would that mean (if even possible)?]