The A.I. pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping when he thought about the future. Specifically, he was worried that an A.I. would engineer a lethal pathogen — some sort of super-coronavirus — to eliminate humanity. “I don’t think there’s anything close in terms of the scale of danger,” he said.
Contrast Dr. Bengio’s view with that of his frequent collaborator Yann LeCun, who heads A.I. research at Mark Zuckerberg’s Meta. Like Dr. Bengio, Dr. LeCun is one of the world’s most-cited scientists. He thinks that A.I. will usher in a new era of prosperity and that discussions of existential risk are ridiculous. “You can think of A.I. as an amplifier of human intelligence,” he said in 2023.
When nuclear fission was discovered in the late 1930s, physicists concluded within months that it could be used to build a bomb. Epidemiologists agree on the potential for a pandemic, and astrophysicists agree on the risk of an asteroid strike. But no such consensus exists regarding the dangers of A.I., even after a decade of vigorous debate. How do we react when the field itself is split on which risks are real?
One answer is to look at the data. After the launch of GPT-5 in August, some thought that A.I. had hit a plateau. Expert analysis suggests this isn’t true. GPT-5 can do things no other A.I. can do. It can hack into a web server. It can design novel forms of life. It can even build its own A.I. (albeit a much simpler one) from scratch.
For a decade, the debate over A.I. risk has been mired in hypotheticals. Pessimistic literature like Eliezer Yudkowsky and Nate Soares’s best-selling book, “If Anyone Builds It, Everyone Dies,” relies on philosophy and sensationalist fables to make its points. But we don’t need fables; today there is a vanguard of professionals who research what A.I. is actually capable of. Three years after the launch of ChatGPT, these evaluators have produced a large body of evidence. Unfortunately, this evidence is as scary as anything in the doomerist imagination. (...)
In the course of quantifying the risks of A.I., I was hoping that I would realize my fears were ridiculous. Instead, the opposite happened: The more I moved from apocalyptic hypotheticals to concrete real-world findings, the more concerned I became. All of the elements of Dr. Bengio’s doomsday scenario were coming into existence. A.I. was getting smarter and more capable. It was learning how to tell its overseers what they wanted to hear. It was getting good at lying. And it was getting exponentially better at complex tasks. (...)
I’ve heard many arguments about what A.I. may or may not be able to do, but the data has outpaced the debate, and it shows the following facts clearly: A.I. is highly capable. Its capabilities are accelerating. And the risks those capabilities present are real. Biological life on this planet is, in fact, vulnerable to these systems. On this threat, even OpenAI seems to agree.
In this sense, we have passed the threshold that nuclear fission passed in 1939. The point of disagreement is no longer whether A.I. could wipe us out. It could... A destructive A.I., like a nuclear bomb, is now a concrete possibility. The question is whether anyone will be reckless enough to build one.
by Stephen Witt, NY Times | Read more:
Image: Martin Naumann