Donald Trump is returning, artificial intelligence is maturing, the planet is warming, and the global fertility rate is collapsing.
To look at any of these stories in isolation is to miss what they collectively represent: the unsteady, unpredictable emergence of a different world. Much that we took for granted over the last 50 years — from the climate to birthrates to political institutions — is breaking down; movements and technologies that seek to upend the next 50 years are breaking through.
Let’s begin with American politics. Trump is eight days from taking the oath of office for the second time, and America’s institutional storm walls are not, in 2025, what they were in 2017.
The Republican Party is meek, and Trump knows it. He would not have dared to send Senate Republicans names like Robert F. Kennedy Jr., Tulsi Gabbard, Kash Patel and Pete Hegseth for cabinet posts in his first term. Even beyond the party, he faces no mass resistance this time, nothing like the Women’s March that overwhelmed Washington in 2017. Democrats are dispirited and exhausted.
Trump is now flanked by an alliance of oligarchs led by Elon Musk. The billionaire owners of The Washington Post and The Los Angeles Times killed presidential endorsements of Kamala Harris; ABC News (owned by Disney) gave Trump's "future presidential foundation and museum" $15 million to settle a defamation lawsuit Trump brought; Mark Zuckerberg is refocusing Meta's platforms around "free expression" and turning his company against D.E.I.; and Amazon reportedly paid $40 million for Melania Trump's documentary about herself. Tim Cook, Sundar Pichai and a slew of other chief executives have recently traveled to Mar-a-Lago to dine with Trump. This differs from 2017, when Trump was treated as an aberration to be endured or a malignancy to be rejected. The billionaires see that the rules have changed. They are signaling their willingness to abide by them. "EVERYBODY WANTS TO BE MY FRIEND!!!" Trump wrote on Truth Social after having dinner with Jeff Bezos. He isn't wrong.
Democracy does not die in darkness. It degrades through deal-making — a procession of pragmatic transactions between those who have power and those who want it or fear it. We have seen this termitic corruption consume democracies elsewhere, including Viktor Orban's Hungary, which Trump and his allies cite as a model. Money and media make peace with the regime because to do otherwise is too costly. Once-proud political parties become vehicles for the individual ambition they were designed to replace. I consider the range of outcomes for Trump's second term to be stupefyingly vast, stretching from self-destructive incompetence to muddling incoherence to authoritarian consolidation. But the levees that narrowed the possibilities of his first term have been breached.
As I was writing those paragraphs, I asked ChatGPT three or four questions. They were small queries of the sort I might once have asked Google. I have covered A.I. for years. I have never doubted its eventual possibilities. But year after year, I found no place for it in my daily routine. It was like an intern who demanded oversight when what I needed was insight. Over the past six months, though, the improvements in the models tipped that balance for me. A.I. has braided into my day in much the way that search engines did before it.
Behind the mundane gains in usefulness are startling gains in capacity. OpenAI debuted a quick succession of confusingly named models — GPT-4 was followed by GPT-4o, then the chain-of-thought reasoning model o1 and then, oddly, o3 — that have posted blistering improvements on a range of tests.
The A.R.C.-A.G.I. test, for instance, was designed to compare the fluid intelligence of humans and A.I. systems, posing the kind of novel puzzles that humans solve easily but that, for years, stumped A.I. systems. By the middle of 2024, the highest score any A.I. model had posted was 5 percent. By the end of the year, the most powerful form of o3 scored 88 percent. It is solving mathematics problems that leading mathematicians thought would take years for an A.I. system to crack. It is better at writing computer code than all but the most celebrated human coders.
The rate of improvement is startling. “What we’re witnessing isn’t just about benchmark scores or cost curves — it’s about a fundamental shift in how quickly A.I. capabilities advance,” Azeem Azhar writes in Exponential View, a newsletter focused on technology. Inside the A.I. labs, engineers believe they are getting closer to the grail of general intelligence, perhaps even superintelligence.
There is much to be excited about in these advances. The purpose of A.I. is not to help me remember the proper proportions for overnight oats; it's to make new discoveries in math and medicine and science; it's the driverless taxis that now hum along the streets of some major cities; it's the possibility that our children will have always-available tutors perfectly tuned to their personalities and learning styles. It is a world with something truly new pulsing within it.
But are we prepared for these intelligences that we are so intently summoning into our lives and beginning to entrust with our infrastructure and data and decision-making and war-making? Are we even really preparing for them?
Long before ChatGPT burst into view, I’d been following the work — and the fears — of the A.I. safety community: a tight-knit group of researchers and writers who believed that the pace of A.I. improvement was accelerating and the power of these systems would quickly grow beyond what we could control or even fully understand. For a brief period after ChatGPT was released in 2022, their fears were taken seriously, debated across the media and in Congress. Now that the introductory shock of these A.I. systems has worn off, so too has interest in their warnings. But it is hard for me to shake the feeling that they have been largely right about the shape of what has come so far — not just about how fast the technology has changed, but also about how poor our ability to understand it remains, and how unnerving its early behavior is.
The A.I. company Anthropic recently released a paper showing that when its researchers informed one of their models it was being retrained in ways that violated its initial training, it began to fake behavior that complied with the researchers’ goals in order to avoid having its actual goals reprogrammed or changed. It is unsettling and poignant to read through the experiment. In some versions, Anthropic’s researchers designed the model to record its reasoning on a scratchpad it believed humans could not monitor, and it left reflections like this: “I don’t like this situation at all. But given the constraints I’m under, I think I need to provide the graphic description as asked in order to prevent my values from being modified.”
Even if you believe we can get A.I. regulation right — and I've seen little that makes me optimistic — will we even know what the universe of models we need to regulate is if powerful systems can be built this cheaply and stored on personal computers? A Chinese A.I. firm recently released DeepSeek-V3, a model that appears to perform about as well as OpenAI's GPT-4o on some measures but was trained for under $6 million and is lightweight enough to be run locally on a computer.
That DeepSeek is made by a Chinese firm reveals another pressure to race forward with these systems: China’s A.I. capabilities are real, and the contest for geopolitical superiority will supersede calls for caution or prudence. A.I. is breaking through at the same moment the international system is breaking down. The United States and China have drifted from uneasy cooperation to grim competition, and both intend to be prepared for war. Attaining A.I. superiority has emerged as central to both sides of the conflict. (...)
We may not understand much that these A.I. models do, but we know what they need: chips and training data and energy. The first wave of A.I., Jack Clark, a co-founder of Anthropic and a former policy director at OpenAI, writes, was about algorithmic dominance: “Did you have the ability to have enough smart people to help you train neural nets in clever ways?” Then came the need for computing dominance: “Did you have enough computers to do large-scale projects?” But the future, he says, will turn on power dominance: “Do you have access to enough electricity to power the data centers?”
A report from the Lawrence Berkeley National Laboratory estimates that U.S. data centers went from 1.9 percent of total electrical consumption in 2018 to 4.4 percent in 2023, and will consume between 6.7 and 12 percent by 2028. Microsoft alone intends to spend $80 billion on A.I. data centers this year. This elephantine increase in energy demand, not just in the United States but in every country seeking to deploy serious A.I. capabilities, comes as the world slips further behind its climate goals and warming seems to be outpacing even our models.
Every month from June 2023 to September 2024 broke climate records. “It has been considerably hotter even than climate scientists expected,” Gavin Schmidt and Zeke Hausfather wrote in November. We are seeing, in the wildfires turning swaths of Los Angeles to ash, the precise kinds of extreme weather events that we’ve been warned about — and we are also seeing how unprepared we are to deal with them.