EZRA KLEIN: I’m Ezra Klein, and this is “The Ezra Klein Show.” For years, I have kept a list of dream guests for the show. And as long as that list has been around, Ted Chiang has been on top of it. He’s a science fiction writer, but that’s underselling him. He writes perfect short stories — perfect.
And he writes them slowly. He’s published only two collections, “Stories of Your Life and Others” in 2002, and then “Exhalation” more recently, in 2019. And the stories in these books, they’ve won every major science fiction award you can win, multiple times over — four Hugos, four Nebulas, four Locus Awards. If you’ve seen the film “Arrival,” which is great — and if you haven’t, what is wrong with you — that is based on a story from the ’02 collection, “Story of Your Life.”
I’ve just, I’ve always wondered about what kind of mind would create Chiang’s stories. They have this crazy economy in them, like not a word out of place, perfect precision. They’re built around really complicated scientific ideas, really heavy religious ideas. I actually think in a way that is not often recognized, Chiang is one of the great living writers of religious fiction, even though he’s an atheist and a sci-fi legend. But somehow, the stories, at least in my opinion, they’re never difficult. They’re very humane and propulsive. They keep moving. They’re cerebral, they’re gentle.
But man, the economy of them is severe. That’s not always the case for science fiction, which I find, anyway, can be wordy, like spilling over with explanation and exposition. Not these. So I was thrilled — I was thrilled — when Chiang agreed to come on the show. But one of the joys of doing these conversations is, I get to listen to people’s minds working in real time. You can watch or hear them think and speak and muse.
But Chiang’s rhythm is really distinct. Most people who come on the show — and this goes for me, too — speak like we’re painting in watercolor, like a lot of brush strokes, a lot of color. If you get something wrong or you have a false start, you just draw right over it or you start a new sheet. But listening to Chiang speak, I understood his stories better. He speaks like he’s carving marble. Like, every stroke has to be considered so carefully, never delivering a strike, or, I guess, a word, before every alternative has been considered and rejected. It’s really cool to listen to.
Chiang doesn’t like to talk about himself. And more than he doesn’t like to, he won’t. Believe me, I’ve tried a couple of times. It didn’t make it into the final show here. But he will talk about ideas. And so we do. We talk about the difference between magic and technology, between science fiction and fantasy, the problems with superheroes and the nature of free will, whether humanity will make A.I. suffer, what would happen if we found parrots on Mars. There are so many cool ideas in this show, just as there always are in his fiction. Many of them, of course, come from his fiction. So relax into this one. It’s worth it. As always, my email is ezrakleinshow@nytimes.com. Here’s Ted Chiang.
So you sent me this wonderful speech questioning the old Arthur C. Clarke line, “Any sufficiently advanced technology is indistinguishable from magic.” What don’t you like about that line?
TED CHIANG: So, when people quote the Arthur C. Clarke line, they’re mostly talking about marvelous phenomena, that technology allows us to do things that are incredible and things that, in the past, would have been described as magic, simply because they were marvelous and inexplicable. But one of the defining aspects of technology is that eventually, it becomes cheaper, it becomes available to everybody. So things that were, at one point, restricted to the very few are suddenly available to everybody. Things like television — when television was first invented, yeah, that must have seemed amazing, but now television is not amazing because everyone has one. Radio is not amazing. Computers are not amazing. Everyone has one.
Magic is something which, by its nature, never becomes widely available to everyone. Magic is something that resides in the person and often is an indication that the universe sort of recognizes different classes of people, that there are magic wielders and there are non-magic wielders. That is not how we understand the universe to work nowadays. That reflects a kind of premodern understanding of how the universe worked. But since the Enlightenment, we have moved away from that point of view. And a lot of people miss that way of looking at the world, because we want to believe that things happen to us for a reason, that the things that happen to you are, in some way, tied to the things you did. (...)
EZRA KLEIN: You have this comparison of what science fiction and fantasy are good for. And you write that science fiction helps us to think through the implications of ideas and that fantasy is good at taking metaphors and making them literal. But what struck me reading that is it often seems to me that your work, it takes scientific ideas and uses them as metaphor. So is there such a difference between the two?
TED CHIANG: So when it comes to fiction about the speculative or the fantastic, one way to think about these kinds of stories is to ask, are they interested in the speculative element literally, or metaphorically, or both? For example, at one end of the spectrum, you’ve got Kafka and, in “The Metamorphosis,” Gregor Samsa turning into an insect. That is pretty much entirely a metaphor. It’s a stand-in for alienation. At the other end of the spectrum, you’ve got someone like Kim Stanley Robinson. And when he writes about terraforming Mars, Mars is not standing in for anything else. He is writing very literally about Mars.
Now, most speculative or fantastic fiction falls somewhere in between those two. And most of it is interested in both the literal and the metaphorical at the same time, but to varying degrees. So, in the context of magic, when fantasy fiction includes people who can wield magic, magic stands in for the idea that certain individuals are special. Magic is a way for fantasy to say that you are not just a cog in the machine, that you are more than someone who pushes paper in an office or tightens bolts on an assembly line. Magic is a way of externalizing the idea that you are special. (...)
EZRA KLEIN: Let me flip this now. We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?
TED CHIANG: Well, in terms of at what point that happens, it’s unclear, but it’s a very long way from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can think about that as three different questions. One is, can we do so? The second is, will we do so? And the third is, should we do so?
I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.
As for the question of, will we do so — if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program. And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say, well, OK, we clearly have obscenely wealthy people who can throw huge sums of money at whatever they want, basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents.
However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering.
Suffering precedes moral agency on sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more its suffering is deserving of consideration, and the more we should try to avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will inevitably be creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.
EZRA KLEIN: But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise almost an inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives, that don’t hurt anybody, and you can copy them at almost no marginal cost, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?
TED CHIANG: I think that it will be much easier to inflict suffering on them than to give them happy, fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. Given the way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention.
Because it’s hard enough to give legal protections to human beings, who absolutely are moral agents. We have relatively few legal protections for animals, who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals — we know that they suffer. There are many animals that we love, that we really, really love. Yet there’s vast animal suffering. But there is no software that we love. So, again, assuming that software ever becomes conscious, it will inevitably fall lower on the ladder of consideration. We will treat it worse than we treat animals. And we treat animals pretty badly.
EZRA KLEIN: I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals. And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that — that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe to be to serve us, for food or whatever else it may be, we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly. And given our history, there’s no real reason to think we won’t. That’s grim. [LAUGHS]
TED CHIANG: It is grim, but I think that it is by far the more likely scenario. I think the scenario that, say, Yuval Noah Harari is describing, where A.I.’s treat us like pets, assumes that it’ll be easy to create A.I.’s who are vastly smarter than us — that basically, we’ll go straight from software which is not a moral agent and not intelligent at all to software which is superintelligent and also has volition.
Whereas I think that we’ll proceed in the other direction, that right now, software is simpler than an amoeba. And eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom.
A lot of people seem to think that, oh no, we’ll immediately jump way above humans on whatever ladder they have. I don’t think that is the case. And so in the scenario that I am describing, we’re going to be the ones inflicting the suffering. Because again, look at animals, look at how we treat animals.
EZRA KLEIN: So I hear you, that you don’t think we’re going to invent superintelligent self-replicating A.I. anytime soon. But a lot of people do. A lot of science fiction authors do. A lot of technologists do. A lot of moral philosophers do. And they’re worried that if we do, it’s going to kill us all. What do you think that question reflects? Is that a question that is emergent from the technology? Or is that something deeper about how humanity thinks about itself and has treated other beings?
TED CHIANG: I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.
Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there.
Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now. Most of the things that we worry about under the mode of capitalism that the U.S. practices — that a technology is going to put people out of work, that it’s going to make people’s lives harder, because corporations will see it as a way to increase their profits and reduce their costs — none of that is intrinsic to the technology. It’s not that technology fundamentally is about putting people out of work.
It’s capitalism that wants to reduce costs and reduce costs by laying people off. It’s not that like all technology suddenly becomes benign in this world. But it’s like, in a world where we have really strong social safety nets, then you could maybe actually evaluate sort of the pros and cons of technology as a technology, as opposed to seeing it through how capitalism is going to use it against us. How are giant corporations going to use this to increase their profits at our expense?
And so, I feel like that is kind of the unexamined assumption in a lot of discussions about the inevitability of technological change and technologically induced unemployment. Those are fundamentally about capitalism and the fact that we are sort of unable to question capitalism. We take it as an assumption that it will always exist and that we will never escape it. And that’s sort of the background radiation that we are all having to live with. But yeah, I’d like us to be able to separate an evaluation of the merits and drawbacks of technology from the framework of capitalism.