Lately, Chiang has been thinking about this current reality: Via viral essays for The New Yorker, he’s been wading into this year’s public discourse to explain ChatGPT and generative AI in terms any smartphone-wielder can actually process. For a species forever at odds with our own imaginative powers, the sci-fi author has become the most lucid voice in the room—a credit as much to that compact Chiangian prose as to the utter chaos of the 2023 technological landscape.
Sometime between Marc Andreessen blogging about how AI will save the world and the release of the new Black Mirror season, Chiang and I sat down over Zoom to discuss our current moment in tech and the metaphors we use to make sense of it all.
This conversation has been condensed and edited.
Vanity Fair: In terms of cultural touchstones, what were your earliest influences?
Ted Chiang: When I was maybe 11, I started reading Isaac Asimov—his science fiction and his popular science writing. Reading both gave me a very clear sense of the difference between the two. When I was younger, say in fourth grade, I had been really into books about sea serpents and Bigfoot and ancient astronauts. What I didn’t realize was the mixture of fact and fiction that is involved in those topics, so when I started reading Asimov, it clarified for me the nature of my interest. Because, yeah, there’s cool stuff in science, and there’s really cool stuff in speculating about science, but in coming up with your fictional scenarios inspired by science, you should be very clear about which one you’re engaged with at any point. (...)
How plugged in are you to the daily churn of tech news? I’m curious if you keep up with things like Marc Andreessen’s blog post about AI.
I am not, although I guess I’ll say I’m not super interested in what Marc Andreessen has to say. In general, I can’t say that I really keep up in any systematic fashion. But nowadays, you almost have to make a deliberate effort to avoid hearing about AI.
Would you consider yourself to be an early adopter?
Not of most technologies. I feel like being an early adopter requires a real commitment to constantly getting used to a new UI. I’m interested to see what is happening in technology, but in terms of my day-to-day work, I’m not looking for new software unless there’s an actual problem that I’m having. I wish I could still use a much older version of Word than the one I have to use now. (...)
In the most recent essay, “Will AI Become the New McKinsey?,” you talk about our reliance on problematic metaphors, like comparing AI to a genie in a bottle, stuff like that. I’ve been thinking about how we also love using the same default pop culture touchstones when it comes to talking about tech we don’t understand—works like The Terminator, Black Mirror, Her, etc. What do you think of the limitations of having these default references on hand?
By personifying things, it’s easy to tell a dramatic story. If you think of what is currently called “AI,” it’s more like a system. There are stories about the effects of bureaucracy and systems crushing people, but those are a little harder. They’re not as visceral. (...)
The “AI as McKinsey” piece also articulates an underlying capitalist critique in your work. You clearly hold a lot of skepticism about the idea that Silicon Valley can provide magic fixes for social ills; you wrote this BuzzFeed News essay in 2017 that was so saucy. When reading “Seventy-Two Letters,” your short story from 2000, I gravitate toward this conversation between a craftsman and an inventor trying to create labor-saving robots, where the craftsman tells the inventor:
“Your desire for reform does you credit. Let me suggest, however, that there are simpler cures for the social ills you cite: a reduction in working hours, or the improvement of conditions. You do not need to disrupt our entire system of manufacturing.”
At a moment when we’re being promised “labor-saving” AI, this feels…relevant.
There’s this saying, “There are two kinds of fools. The first says, ‘This is old and therefore good.’ And the second one says, ‘This is new and therefore better.’” I think about that a lot. How can you evaluate the merits of anything fairly without thinking it’s good simply because it’s new? I think that is super difficult.
There probably was a time in history when most people were thinking, “This is old and therefore good,” and they carried the day. Now I think that we live in a time when everyone says, “This is new and therefore better.” I don’t believe that the people who say that are right all the time, but it is very difficult to criticize them and suggest that maybe something that is new is not better.
Or it's like, better for whom?
Yes. Because we also live in an era in which there are a lot of people who have financial incentives to convince us that something is better because it’s new. There’s another quote where Upton Sinclair said that it’s very hard to make a person understand something if their salary depends on them not understanding it. The companies who are selling these products—the people who work for them, you know, they may be entirely sincere. It’s not exactly malice. It’s just that, you know, they have a kind of motivated reasoning to believe that these things are good.
My last question is about your very short story, “The Evolution of Human Science,” also from 2000. I read this as a fairly upbeat story about a universe where humans can exist peacefully and productively alongside “metahumans,” who are these superintelligent entities they’ve created. There’s this great line that says, “We should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we.”
Is that a fair interpretation? And does that optimism apply to where you think we stand in the real world in relation to something like AI?
That story was written in response to an idea that was circulating around 2000, when people were talking about the singularity and how we would transcend into something much greater. I was mostly thinking, well, why is everyone so certain they’re going to be the ones to transcend? Maybe transcendence isn’t going to be available to all of us, so what would it be like to live in a world where there are these incomprehensible things going on, and you’re sort of on the sidelines?
But I don’t think that is actually applicable to our current situation here, because there are no superintelligent machines. There’s no software that anyone has built that is smarter than humans. What we have created are vast systems of control. Our entire economy is this kind of engine that we can’t really stop. That’s a different thing than saying we’ve created machines smarter than us. We have built a giant treadmill that we can’t get off. Maybe.
It probably is possible to get off, but we have to recognize that we are all on this treadmill of our own making, and then we have to agree that we all want to get off. There are other countries that have a healthier relationship to the narrative of progress; there are countries where they have much healthier attitudes toward work than we have in the U.S. So I think those things are possible. But we have created a system, and now it is all we know. It’s hard for us to imagine life outside of it. And we are only building more tools that strengthen and reinforce that system.
by Delia Cai, Vanity Fair
Image: Alan Berner