Both ideologies ostensibly center on improving the fate of humanity, offering anyone who adopts either label an easy way to brand themselves as a deep-thinking do-gooder. At the most surface level, both sound reasonable. Who wouldn’t want to be effective in their altruism, after all? And surely it’s just a simple fact that technological development accelerates as newer advances build on the old, right?
But scratching the surface of both reveals their true form: a twisted morass of Silicon Valley techno-utopianism, inflated egos, and greed.
Same as it always was.
Effective altruism
The one-sentence description of effective altruism sounds like a universal goal rather than an obscure pseudo-philosophy. After all, most people are altruistic to some extent, and no one wants to be ineffective in their altruism. From the group’s website: “Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.” Pretty benign stuff, right?
Dig a little deeper, and the rationalism and utilitarianism emerge. Unsatisfied with the generally subjective attempts to evaluate the potential positive impact of putting one’s financial support towards — say — reducing malaria in Africa versus ending factory farming versus helping the local school district hire more teachers, effective altruists try to reduce these enormously complex goals into “impartial”, quantitative equations.
To build such a rubric, one that could confine the messy, squishy, human problems they claim to want to solve, they first needed a philosophy. And effective altruists dove into the philosophy side of things with both feet. Countless hours have been spent around coffee tables in Bay Area housing co-ops, debating the morality of prioritizing local causes above ones that are more geographically distant, or how to weigh the rights of animals alongside the rights of human beings. Thousands of posts and far more comments have been typed on sites like LessWrong, where individuals earnestly fling around jargon about “Bayesian mindset” and “quality-adjusted life years”.
The problem with removing the messy, squishy, human part of decision-making is that you can end up with an ideology like effective altruism: one that allows a person to justify almost any course of action in the supposed pursuit of maximizing their effectiveness.
Take, for example, the widely held belief among EAs that it is more effective for a person to take an extremely high-paying job than to work for a non-profit, because the impact of donating lots of money is far higher than the impact of one individual’s work. (The hypothetical person described in this belief, I will note, tends to be a student at an elite university rather than an average person on the street — a detail I think is illuminating about effective altruism’s demographic makeup.) This is a useful way to justify working for a company that many others might view as ethically dubious: say, a defense contractor developing weapons, a technology firm building surveillance tools, or a company known to use child labor. It’s also an easy way to justify life’s luxuries: if every hour of my time is so precious that I must maximize the amount of it spent earning so I may later give, then it’s only logical to hire help to do my housework, or order takeout every night, or hire a car service instead of using public transit.
The philosophy has also justified other not-so-altruistic things: one of effective altruism’s ideological originators, William MacAskill, has urged people not to boycott sweatshops (“there is no question that sweatshops benefit those in poor countries”, he says). Taken to the extreme, someone could feasibly justify committing massive fraud or other types of wrongdoing in order to obtain billions of dollars that they could, maybe someday, donate to worthy causes. You know, hypothetically.
Other issues arise when evaluating who should be prioritized for aid. A prominent contributor to the effective altruist ideology, Peter Singer, wrote an essay in 1971 arguing that a person should feel equally obligated to save a child halfway around the world as they do a child right next to them. Since then, EAs have taken this even further: why prioritize a child next to you when you could help ease the suffering of more children somewhere else? Why help a child next to you today when you could instead help hypothetical children born one hundred years from now? Or help artificial sentient beings one thousand years from now?
The focus on future artificial sentience has become particularly prominent in recent times, with “effective altruists” emerging as one synonym for so-called “AI safety” advocates, or “AI doomers”. Despite their contemporary prominence in AI debates, these tend not to be the thoughtful researchers who have spent years advocating for responsible and ethical development of machine learning systems, and trying to ground discussions about the future of AI in what is probable and plausible. Instead, these are people who believe that artificial general intelligence — that is, a truly sentient, hyperintelligent artificial being — is inevitable, and that one of the most important tasks is to slowly develop AI such that this inevitable superintelligence is beneficial to humans and not an existential threat.
This brings us to the competing ideology:
Effective accelerationism
While effective altruists view artificial intelligence as an existential risk that could threaten humanity, and often push for a slower timeline in developing it (though they push for developing it nonetheless), there is a group with a different outlook: the effective accelerationists.
This ideology has been embraced by some powerful figures in the tech industry, including Andreessen Horowitz’s Marc Andreessen, who published a manifesto in October in which he worshipped the “techno-capital machine” as a force destined to bring about an “upward spiral” if not constrained by those who concern themselves with such concepts as ethics, safety, or sustainability.
Those who seek to place guardrails around technological development are no better than murderers, he argues, for putting themselves in the way of development that might produce lifesaving AI.
This is the core belief of effective accelerationism: that the only ethical choice is to put the pedal to the metal on technological progress, pushing forward at all costs, because the hypothetical upside far outweighs the risks identified by those they brush aside as “doomers” or “decels” (decelerationists).
Despite their differences on AI, effective altruism and effective accelerationism share much in common (in addition to the similar names). Just like effective altruism, effective accelerationism can be used to justify nearly any course of action an adherent wants to take.
by Molly White, Citation Needed | Read more:
Image: Christina Animashaun, Vox: How effective altruism went from a niche movement to a billion-dollar force
[ed. See also: The religion of techno-optimism (Disconnect - excerpt below); also, It’s Time to Dismantle the Technopoly (New Yorker); and, The Year Millennials Aged Out of the Internet (NYT).]
"The expectation of ultimate salvation through technology, whatever the immediate human and social costs, has become the unspoken orthodoxy, reinforced by a market-induced enthusiasm for novelty and sanctioned by a millenarian yearning for new beginnings. This popular faith, subliminally indulged and intensified by corporate, government, and media pitchmen, inspires an awed deference to the practitioners and their promises of deliverance while diverting attention from more urgent concerns. Thus, unrestrained technological development is allowed to proceed apace, without serious scrutiny or oversight — without reason. Pleas for some rationality, for reflection about pace and purpose, for sober assessment of costs and benefits — for evidence even of economic value, much less larger social gains — are dismissed as irrational. From within the faith, any and all criticism appears irrelevant, and irreverent." ~ David Noble, The Religion of Technology.I just want to remind you, Noble was writing this in 1997, yet it still feels like a current and deeply relevant commentary. As usual when reading about the history of tech criticism, it shows how the problems we face with the tech industry today are not new at all, but as the power and wealth of its corporate leaders has expanded in recent decades, the threat they pose has grown immensely.