Strictly speaking, generative AI has been around for a while. Misinformation researchers have warned about deepfake capabilities for nearly a decade. A few years ago, chatbots were all the rage in the business world, partly because companies were trying to figure out what to do with all of the data scientists they’d hired, and partly because chatbots would allow them to decimate their customer service teams. (Of course, consumers didn’t ask for this. Nobody actually prefers interacting with a chatbot over a human being.) AI has been writing mundane sports recaps for a few years at least.
These earlier incarnations of generative AI failed to find mainstream traction. They required a lot of specialized technical knowledge and, frankly, weren’t very good. Engineers and data scientists had to spend a lot of time tuning and implementing them. The costs were huge. Average users couldn’t access them. That changed when ChatGPT’s public demo became available.
ChatGPT’s public release arrived less than three weeks after the collapse of FTX. The technology was a step change from what we’d seen with generative AI previously. It was far from perfect, but it was frighteningly good and had clear general-purpose functionality. Image generation tools like DALL-E, Stable Diffusion, and Midjourney jumped on the bandwagon. Suddenly, everyone was using AI, or at least playing around with it.
The tech industry’s blink-and-you’ll-miss-it pivot was fast enough to give you whiplash. Crypto was out. The metaverse was out. Mark Zuckerberg’s company, which traded in its globally known household name to rebrand as Meta, laid off thousands of the technologists it had hired to build the metaverse and pivoted to AI. Every social media crypto-charlatan quietly removed the “.ETH” label from their username and rebranded themselves as a large language model (LLM) expert. Microsoft sank eye-watering sums into OpenAI, and Google and Amazon raced to keep up. Tech companies sprinted to integrate generative AI into their products, quality be damned. And suddenly every data scientist found themselves playing a central role in what might be the most important technology shift since the advent of the world wide web.
There was one group of people who weren’t caught off guard by this sudden change. Technology ethicists had been tracking these developments from both inside and outside the industry for years, sounding the alarm about the potential harms posed by, inter alia, AI, crypto, and the metaverse. Disproportionately women and people of color, this community has struggled for years to raise awareness of the multifaceted social risks posed by AI. I’ve spoken on some of these issues myself over the years, though I’ve mostly retired from that work. Many of the arguments have grown stale, and the field suffers from the same mistake American liberals made during the 2016 election: you can’t argue from a position of decency if your opponent has no intention of acting decently to begin with. Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?
GenAI solved two challenges that other Singularity-aligned technologies failed to address: commercial viability and real-world relevance. The only thing standing in its way is a relatively small and disempowered group of responsible technology protestants, who may yet possess enough gravitas to impede the technology’s unrestricted adoption. It’s not that the general public isn’t concerned about AI risk. It’s that their concerns are largely misguided, worrying more about human extinction and less about programmed social inequality. (...)
Singularity theorists have capitalized on these fears by engaging in arbitrage. On the one hand, they’re playing a game of regulatory capture by overstating the risk of the emergence of a super-intelligent AI, promising to support regulation that would prevent companies from birthing such a creation. On the other hand, they’re actively promoting the imminence of the technology. OpenAI’s CEO, Sam Altman, was briefly fired when employees apparently raised concerns to the board over such a possibility. What followed was a week of chaos that saw Altman hired by Microsoft, only to return to OpenAI and execute a Game of Thrones-esque power grab, ousting the two women on the board who had tried to keep the supposedly not-for-profit company on mission.
Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid with negligible impact on the day-to-day operations of their business: we are not close to creating an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving towards their promised land without interference from those equipped to stop them. (...)
I texted my good friend Eve Ettinger the other night after a particularly frustrating exchange with some AI evangelists. Eve is a brilliant activist whose experience escaping an evangelical Christian cult has shaped their work. “Are there any tests to check if you’re in a cult?” I wondered.
“Can you ask the forbidden questions and not get ostracized?”
There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the 1940s mired in the failures of the 1960s and ’70s. By the late 1980s, despite the earlier spectacular failures to produce any useful artificial intelligence, futurists had moved on to artificial life.
Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher in a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.
The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.
I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though she perhaps glossed over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to use the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.
Still, I wonder why so many technologists, many of whom pride themselves on their rationalism, fail to make the connection. (...)
Eve’s second test for cult membership was, “Is the leader replaceable, or does it all fall apart?”
And so the vast majority of OpenAI’s employees threatened to quit if Altman was not reinstated. And so Altman was returned to the company five days after the board fired him, with more power and influence than before.
The idea behind this post is not simply to call everything I don’t like fascist. Sam Altman is a gay Jewish man who was furious about the election of Donald Trump. The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult, and that the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. Moreover, this cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016, and we learned nothing from that lesson.
Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence, so we pivoted to artificial life. We failed to make artificial life, so now we’re trying to program the messiah. Two months before the metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.
As a career computational mathematician, I’m shaken by this. It’s not that I think machine learning doesn’t have a place in our world. I’m also not innocent: I’ve earned a few million dollars over my lifetime hitting data with processing power and hoping money comes out, and not all of that was out of pure goodwill. Yet I truly believe there are plenty of good, even humanitarian applications of data science. It’s just that creating godhood is not one of them.
by Emily F. Gorcenski | Read more:
Image: uncredited
[ed. This is a long, really long essay. But, as I stated in an earlier post, I tend to gravitate toward the smartest people in the room, and Ms. Gorcenski is certainly one of them. She gives us much to think about here (and above). Her perspectives on technology evangelism and venture capitalist motives - particularly everlasting life through digital integration - are, to me, really thought-provoking.]