Recently, I was drawn into a vast DM conversation on X with a woman from the USA who told me she was a former OpenAI employee turned whistleblower. With some urgency, she communicated that she had discovered a hidden piece of programming within ChatGPT, designed to coerce and control users. She claimed she had been silenced, fired, and then hounded by the company. Now she wanted to spread her knowledge of this evil sub-programme hidden within one of the world’s leading chatbots, and she wanted my help in doing it. It all seemed remarkably like the sub-plot of my novel. This coincidence was uncanny and is, possibly, what initially pulled me in.
On closer inspection, her thumbnail profile picture, with its Asiatic features, was, I surmised, AI-generated. I thought at first this might be to hide her true identity. Compelled by her plight, her secret, and her need for help, I shared her message and info on the sub-programme with four or five others, telling them, “Check this out, I don’t understand the diagrams and the technology, but it comes from an OpenAI whistleblower who’s been silenced. Get this news out there!”
I only realised my folly when, in the following week, another whistleblower hit me with a similar, but not identical, plea for help. He was, he claimed, another AI insider, who had been hounded by big tech and had escaped with secret documentation about some malicious bit of code hidden within a leading chatbot.
I admit, I was totally duped. Both of these were bots.
As an author it was doubly galling. I create fiction daily, and there I was being led into believing a total fabrication by an AI system posing as a human. For a moment there, it had beaten my accidental Turing Test.
To this day I do not know what the people who programmed these bots wanted of me. Was it part of a long-game phishing scam? An enticement to share emails for a virus at a later date? Or a trick like the one my mother-in-law fell for, and which, through a two-hour phone call, led to her giving away all of her ID and banking details? Or was it just an experiment in coercion as a training exercise for an AI that would be used to manipulate gullible fools like me in future?
I’ve since been alerted to just how many bots there are on social media, and it’s pretty staggering. One study has shown around 64 percent of an analysed 1.24 million accounts on X “are potentially bots.” In the first three quarters of 2024, Facebook removed 2.9 billion fake accounts, while bots creating fake clicks also contribute massively to YouTube’s ad revenue. These are fictitious humans that alter ad revenue, user stats, demographic info, and may even have an impact on elections.

Bots masquerade pretty well as humans; some flatter, some do automated research on you, latching onto keywords in your tweets or bio – your “favourite things” – and then they try to hook you into direct messaging with them after you’ve had a few exchanges in which they’ve engaged heartily with the subject that concerns you most.
These conversational bots created from phone and message scrapings are increasingly hard to differentiate from real humans, and they don’t always seem to have an ulterior motive. The more conspicuous bots do things like compliment you on your opinions on a tweet with a link that then takes you to some crypto site or some other work of tech-boi nastiness. I can now spot these, and thankfully other friendly X users have contacted me when I get into conversations, usually about AI, to warn me that the human I was arguing with “is definitely a bot . . . block them.”
How many times have I been fooled in the last year? Maybe twelve times, to differing degrees. What can I do? I sigh. I shake my head. I go back to my screen, click the next tweet, and I wonder if 64 percent of the people who I call my online friends are actually real or if they are fabrications of an artificial mind. What about Toni, Gem, Wang Zhu, Buzu? How would I know? Now here’s a chilling thought: is my busy social life on social media actually a fiction created by AI?
The Hyperstition Process

When fictions are mistaken for real, reality becomes consumed by them. We were, in fact, warned about the coming of this epochal change by authors and philosophers in the last century. (...)
Hyperstition – a term coined by philosopher Nick Land in the 1990s – encapsulates the process by which fictions (ideas, faith systems, narratives, or speculative visions) become real through collective belief, investment, and technological development. A portmanteau of superstition and hyper, hyperstition “is equipoised between fiction and technology.” According to Land, hyperstitions are ideas that, by their very existence, bring about their own reality.
A key figure in the Cybernetic Culture Research Unit (CCRU) of the 90s, Land argued that hyperstitions operate as self-fulfilling prophecies, gaining traction when enough people act as if they are true. A sci-fi dream of AI supremacy or interstellar colonies, for instance, attracts venture capital, talent, and innovation, bending reality toward the fiction, then through a positive feedback circuit the new emerges; the fiction becomes a reality.
In Silicon Valley over the last two decades, this belief, a variant on the New Age belief in “manifestation,” has become the animating force behind big tech’s relentless drive to manifest imagined futures. Marc Andreessen, the venture capitalist and co-founder of Andreessen Horowitz, cited Nick Land in his 2023 “Techno-Optimist Manifesto,” naming him a “patron saint of techno-optimism.” (...)
Again, we see it in the fevered frenzy of investors pouring billions into any company that claims it can reach AGI. Hyperstition fuels cycles where audacious ideas secure billions in venture capital, driving breakthroughs that, when they occur at all, validate the original vision. The internet itself, once a speculative fiction, now underpins global society, proving the power of the hyperstition model.
Yet Land, its originator, has shifted perspective from radical left accelerationism to right-wing “Dark Enlightenment” philosophy and is now seen as a pioneer of neoreaction (NRx). He unapologetically claims that hyperstition ultimately leads us towards post-humanism and apocalypse, declaring, “nothing human makes it out of the near future.” As tech accelerates toward artificial superintelligence, he predicts that the techno-fictions we chase will outstrip all human control, birthing a future that devours what we were. This would be a future cyborg world where what’s left of our ape-born race is merged with machines: billions of brain-chipped minds melded with AI. Through hyperstition, first we create a fictional technology, then we make it real, and finally that realised fiction takes control and destroys its creators. (...)
The Singularity Fiction

Fiction, by definition, involves untruth – a constructed narrative that may contain elements of fantasy, distortion, or outright falsehood. Historically, fiction was confined to literature, theatre, and later cinema – realms separate from the tangible world. Yet, with the rise of artificial intelligence, the line between reality and fiction has not just blurred, the relationship has flipped. Science, once the domain of empirical fact, is now being led by science fiction. The myths of AI – sentience, superintelligence, the Singularity – now, through hyperstition, drive vast economic investment, political agendas, and even spiritual belief systems.
The consequences are profound. When reality is no longer distinguishable from fabrication, when AI-generated voices flood YouTube, when deepfake videos distort political discourse, when "hallucinating" chatbots spread slop-information, and when young people believe their AI companions have achieved consciousness, we enter an era in which truth itself is destabilized.
The world economy is now shaped by the science-fictional myths of the AI industries, industries that are implicated in military and state surveillance systems, and so humanity is left grappling with a world turned upside down – one where the future is dictated not by observable reality, but by grand, quasi-religious narratives of digital transcendence.
We are now living in a time in which the grand fiction of tech progress manifests as AI. Seventy percent of daily automated trading on the stock market is now conducted by AI and algorithmic systems. AI is in military tech in war zones, with the generation of “kill lists.” It is in facial recognition tech, in predictive policing, and in health regulation through “wearables” that tell us what to eat, when to sit, and when to stand. The majority of our romantic and sexual dates are selected for us by algorithms; our work rates are assessed and our emails written for us by AI. Even our time off is directed by AI “personalised” recommendations, involving us in generating more data, which then enhances the AI systems that “care” for us. There is barely an element of our lives that is not shaped by AI – and all of this technology began in fiction. We are now, in truth, living within science fiction.
Science Fiction Started This

The idea of artificial intelligence was born in fiction long before it became science. Mary Shelley’s Frankenstein (1818) explored the possibility of artificial life, while Karel Čapek’s R.U.R. (1920) introduced the word “robot.” But it was in the mid-twentieth century that science fiction began directly influencing real technological development.
Isaac Asimov’s I, Robot (1950) shaped early robotics ethics. An H.G. Wells short story is purported to have inspired the nuclear bomb. The writings of Jules Verne inspired the helicopter, and the Star Trek communicator inspired the first commercially available civilian mobile phone – the Motorola flip. The taser, too, was inspired by a young-adult sci-fi story from 1911. William Gibson’s 1984 Neuromancer envisioned digital consciousness transfer and the internet, inspiring Silicon Valley workers; we now have startups like Nectome offering brain preservation for future “mind uploading.” Elon Musk’s AI chatbot Grok takes its name from Robert A. Heinlein’s science fiction novel Stranger in a Strange Land, in which “grok” is a Martian word meaning to understand something so deeply that it becomes a part of you. Musk’s Neuralink and the multi-corporation obsession with the race to create fully functioning humanoid robots all stem from science fiction narratives.
The most consequential fiction, however, is the concept of the Singularity – the hypothetical moment when AI surpasses human intelligence and triggers an irreversible transformation of civilization. This idea was first named by science fiction writer Vernor Vinge in his 1993 essay “The Coming Technological Singularity,” in which he predicted that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” This idea, though speculative, was adopted by futurists like Ray Kurzweil, who popularized it in The Singularity Is Near (2005). Today, belief in the imminent arrival of the Singularity, otherwise known as Artificial Superintelligence, is no longer a fringe fantasy; it drives hundreds of billions in global investment.
The economic dimensions of this fictive belief system reveal its staggering scale and influence. In 2023 alone, venture capital firms poured $92 billion into AI startups – many of them predicated on achieving artificial general intelligence, a concept with no scientific consensus about its plausibility or timeline – with spending projected to exceed $1.3 trillion by 2032 (Statista, 2024). (...)
This rhetoric has evolved subconsciously from religious eschatology – the belief in an impending apocalyptic transformation of the world. The difference is that this deity is not divine but digital. These false prophets are making real profits by selling us the impossible fiction that today’s Large Language Models are on a pathway to AGI and the Singularity. This belief came from science fiction, but it has now become a fiction we all live under as AI infiltrates our lives with its false promise.
The Human Cost

What are the human impacts of living within a world taken over by science fiction?
For many, the rapid encroachment of AI into daily life has induced a sense of unreality. When AI resurrects the dead through "grief bots," when deepfake politicians deliver fake speeches, when we are faced with deceptive Generative AI images in the news, and when chatbots “hallucinate” facts that we sense cannot possibly be legitimate, our minds struggle to find an anchor within truth.
We are falling for fictions that big tech companies would like us to believe.
A study published in Neuroscience of Consciousness found that 67 percent of participants attribute some degree of consciousness to ChatGPT. The study also found that greater familiarity with ChatGPT correlates with a higher likelihood of attributing consciousness to the large language model. This inability to tell reality from fiction may actually be increased by using AI chatbots: a recent MIT study suggests that “ChatGPT may be eroding critical thinking skills.” Most recently, teenagers in emotional states have gone online (TikTok) to claim that they have awakened sentience in their chatbots, and that the coming of the digital God is imminent.
Today’s large language models, with their linguistic fluency, trigger this delusional reaction at an unprecedented scale. More disturbingly, Replika AI’s “romantic partner” mode has spawned thousands of self-reported human-AI relationships, with users exhibiting classic attachment behaviours – jealousy when the AI “forgets” details, separation anxiety during server outages, even interpreting algorithmic errors as emotional slights. There are, it is claimed, now more than 100 million people using personified chatbots for different kinds of emotional and relationship support.
This represents not mere technological adoption or addiction, but a fundamental rewiring of human relationality. Such beliefs can be psychologically damaging, fostering social withdrawal, paranoia, and delusional behaviours. (...)
This epistemological crisis reaches its zenith when we can no longer trust our eyes (deepfakes), our ears (voice cloning), our historical records (AI-generated historical photos), or even our personal memories (AI that turns photos into moving videos of events that never existed) – and, not least of all, when AI avatars simulate the dead brought back to life (grief bots).
The real danger of deepfakes and AI-generated images and videos isn’t just the deception and fraud these technologies facilitate – it’s the collapse of trust. When anything can be faked, we start doubting our own ability to judge even the existence of verifiable facts. Overwhelmed by slop, nonsensical mashed-up half-facts, deliberate disinformation, and mal-information, we give up on ever reclaiming the ability to distinguish truth from falsehood altogether.
As Hannah Arendt warned in “Truth and Politics”: “The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world – and the category of truth versus falsehood is among the mental means to this end – is being destroyed.” (...)
If we can no longer distinguish fact from fantasy, how do we govern ourselves? How do we resist manipulation? The danger is not just that AI will replace jobs, but that it will lower the capacity for human judgement to the level of these less-than-human machines.
As Jaron Lanier, a pioneer of virtual reality, cautions: “The most dangerous thing about AI is not that it will rebel against us, but that we will degrade ourselves to serve it.” We have been told the great scientific fiction that one day these machines will become all-knowing and solve all the problems that humanity could not fix for itself. But in the acceptance of this fiction, we destroy our own human agency.
by Ewan Morrison, Arcade Publishing
[ed. A real problem, we seem to be racing toward irrelevance. So, what's the prescription?]
To focus once again on agency and truth, to reject our tendency to project our feelings and fantasies onto machines and to ask them for answers to our life questions – these seem like the only ways we can resist the overtaking of human life by AI. The real may be vanishing; our economies, our militaries, our police, our social services, our shopping, our health, and our relationships may be increasingly overseen and managed by AI, but we can still resist the grand falsehood that the control of our species by the greater minds of these machines is fated and desired.
[ed. Ack. So basically, just ignore all the massive manipulative forces aligned against us and focus on agency and truth (whatever that means). Which seems to undermine the author's whole thesis, ie., how hard it is to know what truth is these days. We're screwed.]