Thursday, June 6, 2024

Catching Crumbs From The Table

In the face of metahuman science, humans have become metascientists. [ed. Fiction]

It has been 25 years since a report of original research was last submitted to our editors for publication, making this an appropriate time to revisit the question that was so widely debated then: what is the role of human scientists in an age when the frontiers of scientific inquiry have moved beyond the comprehensibility of humans?

No doubt many of our subscribers remember reading papers whose authors were the first individuals ever to obtain the results they described. But as metahumans began to dominate experimental research, they increasingly made their findings available only via DNT (digital neural transfer), leaving journals to publish second-hand accounts translated into human language.

Without DNT, humans could not fully grasp earlier developments nor effectively utilize the new tools needed to conduct research, while metahumans continued to improve DNT and rely on it even more. Journals for human audiences were reduced to vehicles of popularization, and poor ones at that, as even the most brilliant humans found themselves puzzled by translations of the latest findings.

No one denies the many benefits of metahuman science, but one of its costs to human researchers was the realization that they would probably never make an original contribution to science again. Some left the field altogether, but those who stayed shifted their attentions away from original research and toward hermeneutics: interpreting the scientific work of metahumans.

Textual hermeneutics became popular first, since there were already terabytes of metahuman publications whose translations, although cryptic, were presumably not entirely inaccurate. Deciphering these texts bears little resemblance to the task performed by traditional palaeographers, but progress continues: recent experiments have validated the Humphries decipherment of decade-old publications on histocompatibility genetics.

The availability of devices based on metahuman science gave rise to artefact hermeneutics. Scientists began attempting to ‘reverse engineer’ these artefacts, their goal being not to manufacture competing products, but simply to understand the physical principles underlying their operation. The most common technique is the crystallographic analysis of nanoware appliances, which frequently provides us with new insights into mechanosynthesis. (...)

The question is, are these worthwhile undertakings for scientists? Some call them a waste of time, likening them to a Native American research effort into bronze smelting when steel tools of European manufacture are readily available. This comparison might be more apt if humans were in competition with metahumans, but in today's economy of abundance there is no evidence of such competition. In fact, it is important to recognize that — unlike most previous low-technology cultures confronted with a high-technology one — humans are in no danger of assimilation or extinction.

There is still no way to augment a human brain into a metahuman one; the Sugimoto gene therapy must be performed before the embryo begins neurogenesis in order for a brain to be compatible with DNT. This lack of an assimilation mechanism means that human parents of a metahuman child face a difficult choice: to allow their child DNT interaction with metahuman culture, and watch him or her grow incomprehensible to them; or else restrict access to DNT during the child's formative years, which to a metahuman is deprivation like that suffered by Kaspar Hauser. It is not surprising that the percentage of human parents choosing the Sugimoto gene therapy for their children has dropped almost to zero in recent years.

As a result, human culture is likely to survive well into the future, and the scientific tradition is a vital part of that culture. Hermeneutics is a legitimate method of scientific inquiry and increases the body of human knowledge just as original research did. Moreover, human researchers may discern applications overlooked by metahumans, whose advantages tend to make them unaware of our concerns.

For example, imagine if research offered hope of a different intelligence-enhancing therapy, one that would allow individuals to gradually ‘upgrade’ their minds to a level equivalent to that of a metahuman. Such a therapy would offer a bridge across what has become the greatest cultural divide in our species' history, yet it might not even occur to metahumans to explore it; that possibility alone justifies the continuation of human research.

by Ted Chiang, Nature | Read more:
Image: JACEY
[ed. I've been a big fan of Ted for a long time now and have all his books of short stories. Besides the example above, here's another one: The Great Silence (Electric Lit); and the post following this one: The Lifecycle of Software Objects (Subterranean Press). Finally, see also: Interview: Ted Chiang (transcript from the Ezra Klein Show podcast/NYT):]
***
EZRA KLEIN: Let me flip this now. We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?

TED CHIANG: Well, in terms of at what point does that happen, it’s unclear, but it’s a very long ways from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can think about that in three different questions. One is, can we do so? Second is, will we do so? And the third one is, should we do so?

I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.

As for the question of, will we do so, if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program. And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say like, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents.

However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering.

Suffering precedes moral agency on sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.

EZRA KLEIN: But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise an almost inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives that don’t hurt anybody and you can copy them for almost no marginal dollar, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?

TED CHIANG: I think that it will be much easier to inflict suffering on them than to give them happy fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. Given the way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention to it.

Because it’s hard enough to give legal protections to human beings who are absolutely moral agents. We have relatively few legal protections for animals who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet, there’s vast animal suffering. But there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, it will inevitably fall lower on the ladder of consideration. So we will treat it worse than we treat animals. And we treat animals pretty badly.

EZRA KLEIN: I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals. And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that, that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe to be to serve us for food or whatever else it may be, that we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly, and that given our history, there’s no real reason to think we won’t. That’s grim. 

TED CHIANG: It is grim, but I think that it is by far the more likely scenario. I think the scenario that, say, Yuval Noah Harari is describing, where A.I.’s treat us like pets, that idea assumes that it’ll be easy to create A.I.’s who are vastly smarter than us, that basically, we will go straight from software which is not a moral agent and not intelligent at all to software which is super intelligent and also has volition.

Whereas I think that we’ll proceed in the other direction, that right now, software is simpler than an amoeba. And eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom.