Tuesday, March 31, 2026

AI Weekly Update: Policy, Discourse and Alignment

People Really Hate AI

An ongoing series, this time from Will Manidis. I won’t try to excerpt it all, but yes, really, the evidence that Americans are hostile to AI is overwhelming, and the problem appears to be getting worse over time.
  • It is my belief — and I say this having worked in AI my entire career — that we should expect widespread asymmetric violence against AI infrastructure in the United States in the near future.
I do not say this happily. I am not rooting for it. I condemn violence in the strongest terms. The document that follows is not a manual for committing this kind of violence, but a warning of how easy it would be for dedicated groups to grind the American AI industry to a halt.
  • When you ask everyday Americans what they want done about AI, the consistency is almost eerie.
72% of voters want to slow down AI development. 82% do not trust technology executives to regulate AI—a level of distrust that puts AI CEOs somewhere between Congress and used-car dealers. 75% of Democrats and 75% of Republicans prefer a careful, considered approach to AI development. 75 and 75.
  • 80% of Americans told Axios that they prefer cautious AI implementation even if it means letting China get ahead. Our industry has been betting its future on a messianic fantasy of a coming war with China, and everyday Americans simply do not care. They say slow down anyway.
  • AI's constituency is the people who build it, the people who invest in it, and the people who earn enough to believe they'll come out ahead. These people are concentrated in literally a handful of zip codes. They are disproportionately male, young, college-educated, and high-income. They are, in demographic terms, niche.
  • No major technology in American history has entered its scaling phase—the phase where you deploy trillions of dollars into physical plant, into real communities, drawing real resources—with this demographic profile of opposition. AI is attempting to do something without precedent, and it's attempting to do so without noticing.
  • If you listened to the conversation inside the industry, you wouldn’t hear any of these numbers discussed. The discourse is about scaling laws and token budgets and capability curves and the race to AGI and China. To the extent that anyone has articulated these concerns, the response is that amorphous benefits—productivity gains, curing cancer, transformative techbio—will turn people around once they see undeniable evidence that something good is occurring here.
  • This assumption is backed by no data. The data shows the opposite. The more people learn about AI, the more they use it, the more they oppose its unchecked development. The trend lines are unambiguous.
  • The core issue is that the industry is caught in a contradiction it can’t resolve. To raise the money necessary to fund massive training runs, the CEO must stand on stage and tell investors and enterprise customers how many human tasks the technology can now perform, how much cheaper it will be than the humans, how much better it will be by next quarter. This is the revenue case. It’s what the market rewards. It’s what every earnings call is built around.
The pitch to the public then requires that same CEO to promise that AI will create new jobs, that the transition will be managed, and that no one will be left behind. This is at best a political survival argument. It's what a continued social license to operate demands. 
The problem is that these two claims cannot coexist. The market pitch wins because that's where the money is. No one particularly cares what happens to the people left behind, and everyone can tell. 
  • The industry's response to the political opposition this generates is lobbying. In California, a bill to separate data center electricity rates from residential rates—to shield households from cost increases—was killed by industry lobbying. A separate bill requiring data centers to disclose their water usage was vetoed by the governor. What survived the legislative session was a requirement for regulators to produce a study on data center energy impacts, due in 2027. The findings will not be available in time for the 2026 session.
[ed. ... and much more. Well worth a read.]

by Will Manidis, X |  Read more:
***
Dean Ball offers one of the arguments requiring a response [ed. re: pausing AI development], which is that the government is itself racing towards dangerous AI and if anything wants to seize and centralize that power rather than stop it, and that’s worse, you know that that’s worse, right? So aren’t you better off not giving the government leverage, when the Secretary of War is trying to jawbone AI companies and plans to deploy AI to the military whether or not it is aligned, and is happy to put those words in official documents? Don’t pauses end up giving the government a lot more leverage in various ways?

Great question.

I’ll start with the long version, then do the short version.

There are at least two distinct classes of answer to that question, from people who want to pause or have the ability to pause. Call going ahead as we currently are Plan A, and the pause Plan B. And Plan T is the government messing everything up.

There is the attitude that all work on frontier AI is terrible, and anything that slows it down or stops it is good, because if we build it then everyone dies and they’re working to build it. It doesn’t matter if Anthropic is somewhat ‘more responsible,’ in this view, because there’s a 0-100 scale, xAI is a 0, OpenAI is a 2 and Anthropic is a 5, or whatever, and ‘good enough to not kill everyone’ is 100.

The measured version of this is to believe, as Eliezer Yudkowsky does (AIUI), and say: If we race forward to superintelligence, and we build it, everyone dies no matter who builds it. If we don’t get some sort of agreement we lose; a deal between labs is helpful, but because of China the labs can’t do it alone, and you ultimately need the government and an international treaty. So as much as you hate the risks of the government making things even worse, you can only die once, but of course you can and should still stand up against the government when it is doing something crazy.

I am not at this level of hopelessness about the default Plan A, but I do think the odds are against Plan A. So you very much want to get ready to go to Plan B, and to know if you need to go to Plan B. And yes, this comes with the risk of Plan T, which is even worse than Plan A, but if you’re losing badly enough you need to accept some variance. You can only die once, and there are so many ways to die.

But yes, some ways of enabling the government are actively bad even when it is acting reasonably, and worse still when you know it is acting unreasonably, and at some level of unreasonableness or ill intent you would flip to simply wanting it to stay away and hoping Plan A works.

The more confident one is in Plan A, the more you want to stick with Plan A.

You could take this a step further, as Holly Elmore and PauseAI do (AIUI), and say: So if DoW tries to murder Anthropic, well, the method is not ideal, but ultimately, good: we’re outside their offices telling them to stop and this makes them stop; the slightly lesser evil is still way too evil, and nothing else is important enough to matter.

This is a highly consistent position. It very much is not mine.

The short version:
1. You can be against the companies racing or being dumb.
2. And also against the government racing or being dumb.
3. Or you can support people doing dumb things that help with what matters, even if, from other perspectives and relative to their own interests, that action is super dumb.
4. You can realize that there are some coordination problems where failing kills you.
5. You play to win the game. You play to your outs. If losing too badly, seek variance.
6. If the only hope is wise government or multilateral intervention, play to your out.
It is hard to say everything explicitly or concisely, but hopefully that will be good enough for those who care to fill in the gaps.

by Zvi Mowshowitz, DWAtV |  Read more:

[ed. See also: Every Debate on Pausing AI (ACX, 2023); Or, Why I'm Not a Doomer (Dean Ball - Hyperdimensional); and It’s Time to Take Existential Risk from AI Seriously (Target Curve).]
***
"Dean offers another argument in the form of a thought experiment. He asks us to imagine a baby guaranteed to grow into an adult with enormous IQ, but raised by Aristotle in Ancient Greece. Would that baby eventually reinvent all of modern science? Dean says no, and I agree. Without access to accumulated knowledge, even extreme intelligence has limited raw material to work with. But this is not a good analogy for ASI.

Here’s my own attempt at making a similar thought experiment: Imagine trapping an alien mind, far more intelligent and capable than any human that has ever lived, inside a datacenter with access to a supercomputer that contains much (though not all) of humanity’s accumulated knowledge and works. Now freeze the rest of the world. While everyone else is standing still, this entity spawns thousands of copies of itself. Each copy is fine-tuned to take a different approach to whatever goals it’s pursuing. The entity evaluates the results, selects the copies that are performing best, fine-tunes them even further, and repeats. With ten thousand copies each thinking at least ten times faster than a human, a single day of runtime amounts to nearly 300 years of nonstop, focused cognitive labor.

What would the world find when it unfroze? Could we predict this ahead of time and prepare adequate safeguards to ensure this entity remains under our control, long term? And what if we let it run not for a day, but for a month or a year?

Consider what humanity has built with our relatively slow, disorganized, frequently distracted collective intelligence. In under a century we’ve mapped genomes, split atoms, landed on the moon, and built a global communications network. This entity would have access to much of that same knowledge, the ability to process it orders of magnitude faster than we can, and a self-improvement loop that has no biological equivalent. It would be capable of things we can scarcely imagine. And if our safeguards conflicted with its goals, it might dedicate significant effort to making sure it could never be shut down or constrained again."
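
[ed. The “nearly 300 years” figure checks out; here is a minimal sketch of the arithmetic, assuming exactly the post’s stated lower bounds of 10,000 copies and a 10x per-copy speedup:]

```python
# Sanity check of the "nearly 300 years per day" claim, assuming
# the post's stated lower bounds: 10,000 copies, each thinking
# 10x faster than a human.
copies = 10_000          # parallel instances of the entity
speedup = 10             # per-copy thinking speed vs. a human
wall_clock_days = 1      # one day of real-world runtime

human_equivalent_days = copies * speedup * wall_clock_days
years = human_equivalent_days / 365.25

print(f"{human_equivalent_days:,} human-days = {years:.0f} years")
# 100,000 human-days = ~274 years, i.e. "nearly 300 years"
```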