Monday, October 27, 2025

New Statement Calls For Not Building Superintelligence For Now

Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ means that probably everyone on the planet literally dies.

We should not build superintelligence until such time as that changes, and both the risk of everyone dying as a result and the risk of losing control over the future are very low. Not zero, but far lower than they are now or will be soon.

Thus, the Statement on Superintelligence from FLI, which I have signed.

Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

Statement:

We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.

Their polling says there is 64% agreement on this, versus 5% supporting the status quo.

A Brief History Of Prior Statements

In March of 2023 FLI issued an actual pause letter, calling for an immediate pause of at least six months on the training of systems more powerful than GPT-4, which was signed by, among others, Elon Musk.

This letter was absolutely, 100% a call for a widespread regime of prior restraint on the development of further frontier models, and, importantly, a call to ‘slow down’ and to ‘pause’ development in the name of safety.

At the time, I said it was a deeply flawed letter and I declined to sign it, but my quick reaction was to be happy that the letter existed. This was a mistake. I was wrong.

The pause letter not only weakened the impact of the superior CAIS letter, it has now for years been used as a club with which to browbeat or mock anyone who would suggest that future sufficiently advanced AI systems might endanger us, or that we might want to do something about that. The claim is that any such person must have wanted such a pause at that time, or would want to pause now, which is usually not the case.

The second statement was the CAIS letter in May 2023, which was in its entirety:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This was a very good sentence. I was happy to sign, as were some heavy hitters, including Sam Altman, Dario Amodei, Demis Hassabis and many others.

This was very obviously not a pause, or a call for any particular law or regulation or action. It was a statement of principles and the creation of common knowledge.

Given how much worse many people’s positions on AI risk have gotten since then, it would be an interesting exercise to ask those same signatories to reaffirm the statement.

This Third Statement

The new statement is in between the previous two letters.

It is more prescriptive than simply stating a priority.

It is, however, not a call to ‘pause’ at this time, or to stop building ordinary AIs, or to stop trying to use AI for a wide variety of purposes.

It is narrowly requesting that, if you are building something that might plausibly be a superintelligence, under anything like present conditions, you should instead not do that. We should not allow you to do that. Not until you make a strong case for why this is a wise or not insane thing to do.

Even those who are most vocally speaking out against the statement strongly believe that superintelligence will not be built within the next few years, so for the next few years any reasonable implementation would not pause or substantially impact AI development.

I interpret the statement as saying, roughly: if a given action has a substantial chance of being the proximate cause of superintelligence coming into being, then that’s not okay, we shouldn’t let you do that, not under anything like present conditions.

I think it is important that we create common knowledge of this, which we very clearly do not yet have. 

by Zvi Mowshowitz, Don't Worry About the Vase | Read more:
Image: Future of Life Institute
[ed. I signed, for what it's worth. Since most prominent AI researchers have publicly stated concerns about a fast takeoff (and safety precautions are not keeping up), there seems to be good reason to be pretty nervous. It's also clear that most of the public, our political representatives, the business community, and even some in the AI community itself are either underestimating the risks involved or have, for the most part, given up, because human nature. Climate change, now superintelligence - slow boil or quick zap. Anything that helps bring more focus and action on either of these issues can only be a good thing.]