Perhaps no writer has been more clairvoyant about our current technological age than Neal Stephenson. His novels coined the term metaverse, laid the conceptual groundwork for cryptocurrency, and imagined a geoengineered planet. And nearly three decades before the release of ChatGPT, he presaged the current AI revolution. A core element of one of his early novels, The Diamond Age: Or, a Young Lady’s Illustrated Primer, is a magical book that acts as a personal tutor and mentor for a young girl, adapting to her learning style—in essence, it is a personalized and ultra-advanced chatbot. The titular Primer speaks aloud in the voice of a live actor, known as a “ractor”—evoking how today’s generative AI, like many digital technologies, is highly dependent on humans’ creative labor.
Stephenson’s book, published in 1995, explores a future of seamless, instant digital communication, in which tiny computers with immense capabilities are embedded in everyday life. Corporations are dominant, news and ads are targeted, and screens are omnipresent. It’s a world of stark class and cultural divisions (the novel follows a powerful aristocratic sect that styles itself as the “neo-Victorians”), but it’s nevertheless one in which the Primer is presented as the best of what technology can be.
But Stephenson is far more pessimistic about today’s AI than he was about the Primer. “A chatbot is not an oracle,” he told me over Zoom last Friday. “It’s a statistics engine that creates sentences that sound accurate.” I spoke with Stephenson about his uncannily prescient book and the generative-AI revolution that has seemingly begun.
This conversation has been edited for length and clarity.
Matteo Wong: The Young Lady’s Illustrated Primer is a book that adapts to and teaches a young girl, which seems to resonate with the vision of AI chatbots and assistants that many companies have for the near future. Did you set out to explore the idea of an intelligent machine in imagining the Primer?
Neal Stephenson: The idea came to me after we had a kid and got this mobile that was designed to suspend over the crib. It had very primitive, simple shapes on it because, when they’re newborns, their visual systems can’t resolve fine details. So there would be a square and a triangle and a circle. And then, after a certain number of days or weeks had gone by, you were supposed to pop those cards off of the mobile and snap on a different set that had a more appropriate fit for what their brains were capable of at that age. That just got me to thinking: What if you extended that idea to every other form of intellectual growth?
The technology that drives the book wasn’t really AI as we think of it now—I was talking to people who were working on some of the underlying technologies that would be needed to communicate on the internet in a secure, anonymous manner. I guess it’s implicit that there’s an AI in there that’s generating the story and increasing the degree of sophistication in response to the learning curve of the child, but I didn’t really go into that very much; I just kind of assumed it would be there.
Wong: A lot of companies today—OpenAI, Google, Meta, to name a few—have said they want to build AI assistants that adapt to each user, somewhat like how the Primer acts as a teacher. Do you see anything in the generative-AI models of today that resembles or could one day become like the Primer?
Stephenson: About a year ago, I worked with a start-up that makes AI characters in video games. I found it rewarding and fascinating because of the hallucinations: I could see how new patterns emerged from the soup of inputs being fed to it. The same thing that I consider to be a feature is a bug in most applications. We’ve already seen examples of lawyers who use ChatGPT to create legal documents, and the AI just fabricated past cases and precedents that seemed completely plausible. When you think about the idea of trying to make use of these models in education, this becomes a bug too. What they do is generate sentences that sound like correct sentences, but there’s no underlying brain that can actually discern whether those sentences are correct or not.
Think about any concept that we might want to teach somebody—for instance, the Pythagorean theorem. There must be thousands of old and new explanations of the Pythagorean theorem online. The real thing we need is to understand each child’s learning style so we can immediately connect them to the one out of those thousands that is the best fit for how they learn. That to me sounds like an AI kind of project, but it’s a different kind of AI application from DALL-E or large language models.
Wong: And yet, today, those language models, which fundamentally predict words in a sequence, are being applied to many areas where they have no specialized abilities—GPT-4 for medical diagnosis, Google Bard as a tutor. That reminds me of a term the book uses instead of artificial intelligence, "pseudo-intelligence," which many critics of the technology might appreciate today.
Stephenson: I’d forgotten about that. The running gag of that book was applying Victorian diction and prejudices to high-tech things. What was probably going through my mind was that Victorians would look askance at the term artificial intelligence, because they would be offended by the idea that computers could replace human brains. So they would probably want to bracket the idea as a simulation, or a “pseudo” intelligence, as opposed to the real thing.
Wong: About a year ago, in an interview with the Financial Times, you called the outputs of generative AI “hollow and uninteresting.” Why was that, and has your assessment changed?
Stephenson: I suspect that what I had in mind when I was making those remarks was the current state of image-generating technology. A few things about that rubbed me the wrong way, the biggest being that these systems benefit from the uncredited work of thousands of real human artists. I'm going to exaggerate slightly, but it seems like one of the first applications of any new technology is making things even shittier for artists. That's certainly happened with music. With these image-generation systems, it seemed like that process had been mechanized and weaponized on an inconceivable scale.
Another part of it was that a lot of people who got excited about this early on just generated huge volumes of material and put them out willy-nilly on the internet. If your only way of making a painting is to actually dab paint laboriously onto a canvas, then the result might be bad or good, but at least it’s the result of a whole lot of micro-decisions you made as an artist. You were exercising editorial judgment with every paint stroke. That is absent in the output of these programs.
Wong: Even in The Diamond Age, the Primer offers a commentary on artists' labor and technology that is very relevant to generative AI today. The Primer teaches a girl, but a human actor digitally connected to the book has to voice the text aloud. (...)
Stephenson: The scenario I was laying out in The Diamond Age is that the ractors are a scarce resource, and so the Primer is more of a luxury product. But eventually, the source code for the book falls into the hands of a man who wants to manufacture it on a massive scale, and there’s not enough money and not enough actors in the world to voice all those books, so at that point, he decides to use automatically generated voices.
Wong: Another theme in the novel is how different socioeconomic classes have access to education. The Primer is designed for an aristocrat, but your novel also traces the stories of middle- and working-class girls who interact with versions of the book. Right now a lot of generative AI is free, but the technology is also very expensive to run. How do you think access to generative AI might play out?
Stephenson: There was a bit of early internet utopianism in the book, which was written during that era in the mid-’90s when the internet was coming online. There was a tendency to assume that when all the world’s knowledge comes online, everyone will flock to it. It turns out that if you give everyone access to the Library of Congress, what they do is watch videos on TikTok. The Diamond Age reflects the same naivete that I shared with a lot of other people back in the day about how all of that knowledge was going to affect society.
Wong: Do you think we’re seeing some of that naivete today in people looking at how generative AI can be used?
Stephenson: For sure. It’s based on an understandable misconception as to what these things are doing. A chatbot is not an oracle; it’s a statistics engine that creates sentences that sound accurate. Right now my sense is that it’s like we’ve just invented transistors. We’ve got a couple of consumer products that people are starting to adopt, like the transistor radio, but we don’t yet know how the transistor will transform society.
by Matteo Wong, The Atlantic | Read more:
Image: Illustration by The Atlantic. Sources: Heritage Images; Amy E. Price / Getty
[ed. Great book. Kind of went off the rails in the end (in my opinion), which is a shame because the rest of it is terrific.]