Vlastimil Beneš (Czech, 1919-1981), Vršovice Gardens in Winter, 1960
Thursday, October 30, 2025
Wednesday, October 29, 2025
Please Do Not Ban Autonomous Vehicles In Your City
I was listening with horror to a Boston City Council meeting today where many council members made it clear that they’re interested in effectively banning autonomous vehicles (AVs) in the city.
A speaker said that Waymo (the AV company requesting clearance to run in Boston) was only interested in not paying human drivers (Waymo is a new company that has never had human drivers in the first place) and then referred to the ‘notion that somehow our cities are unsafe because people are driving cars’ as if this were a crazy idea. A council person strongly implied that new valuable technology always causes us to value people less. One speaker associated Waymo with the Trump administration. There were a lot of implications that AVs couldn’t possibly be as good as human drivers, despite lots of evidence to the contrary. Some speeches included criticisms that applied equally well to what Uber did to taxis, but were now deployed to defend Uber.
Image: Smith Collection/Gado/Getty Images
Most of the arguments I heard were pretty wildly off-base. Many of the speakers didn’t factor in the basic safety benefit of AVs to the riders or pedestrians at all, and many of the arguments fell apart when poked at. Here are all my arguments for why a city should legalize AVs, with some concerns at the end:
AVs are ridiculously safe compared to human drivers
The most obvious reason to allow AVs in your city is that every time a rider takes one over driving a car themselves or getting in a ride share, their odds of being in a crash that causes serious injury or worse drop by about 90%. I’d strongly recommend this deep dive on every single crash Waymo has had so far:
[Very few of Waymo’s most serious crashes were Waymo’s fault (Understanding AI).]
This is based on public police records rather than Waymo’s self-reported crashes. It doesn’t seem like there have been any serious crashes Waymo’s been involved in where the AV itself was at fault. This is wild, because Waymo’s driven over 100 million miles. These statistics were brought up out of context in the hearing to imply that Waymo is dangerous. By any normal metric it’s much safer than human drivers.
40,000 people die in car accidents in America each year. This is as many deaths as 9/11 every single month. We should be treating this as more of an emergency than we do. Our first thought in making any policy related to cars should be “How can we do everything we can to stop so many people from being killed?” Everything else is secondary to that. Dropping the rate of serious crashes by even 50% would save 20,000 people a year. Here’s 20,000 dots:
The more people choose to ride AVs over human-driven cars, the fewer total crashes will happen.
One common argument is that Waymos are very safe compared to everyday drivers, but not professional drivers. I can’t find super reliable data, but ride share accidents seem to occur at about a rate of 40 per 100 million miles traveled. Waymo in comparison was involved in 34 crashes where airbags deployed in its 100 million miles, and 45 crashes altogether. Crucially, it seems like the AV was only at fault for one of these, when a wheel fell off. There’s no similar data for how many Uber and Lyft crashes were the driver’s fault, but they’re competing with what seems like effectively 0 per 100 million miles.
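[ed. A quick back-of-the-envelope check of the arithmetic quoted above, taking the article’s figures at face value (40,000 road deaths a year; roughly 40 ride-share crashes and 34 Waymo airbag-deployment crashes per 100 million miles, with about one judged the AV’s fault). A minimal, illustrative Python sketch added by the editor, not an independent analysis:]

us_car_deaths_per_year = 40_000
print(us_car_deaths_per_year / 12)    # ~3,333 deaths per month, on the order of the 9/11 toll (~2,977)
print(us_car_deaths_per_year * 0.5)   # 20,000 lives per year if serious crashes dropped by half, as cited above

# Crude per-100-million-mile comparison using the figures quoted above
rideshare_crashes_per_100m = 40       # rough ride-share crash rate cited in the article
waymo_at_fault_per_100m = 1           # the single at-fault Waymo incident mentioned above
print(waymo_at_fault_per_100m / rideshare_crashes_per_100m)   # 0.025 -- the "effectively 0" comparison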
by Andy Masley, The Weird Turn Pro | Read more:
Labels:
Business,
Cities,
Design,
Government,
history,
Politics,
Technology,
Travel
What To Know About Data Centers
As the use of AI increases, data centers are popping up across the country. The Onion shares everything you need to know about the controversial facilities.
Q: What do data centers need to run?
A: Water, electricity, air conditioning, and other resources typically wasted on schools and hospitals.
Q: Do data centers use a lot of water?
A: What are you, a fish? Don’t worry about it.
Q: How are data centers regulated?
A: Next month, Congress will hear about data centers for the very first time.
Q: Do I need to worry about one coming to my town?
A: Only if your town is built on land.
Q: How long does it take to build a new data center?
A: Approximately one closed-door city council vote.
Q: What’s Wi-Fi?
A: Not right now, big guy.
Q: What will most data centers house in the future?
A: Raccoons.
Image: uncredited
Labels:
Architecture,
Cities,
Environment,
Humor,
Technology
What It's Like to Work at the White House
Introduction
I recorded several exit interviews after I departed the White House Office of Science and Technology Policy last month. These turned out well, I think, but the truth about me is that I have not truly reflected on an experience until I have written about it. Today’s essay constitutes my long-overdue reflections on my time working for the White House.
This essay is based upon extensive conversations I had with former and current White House staff during my time in government, as well as on similar essays I have read by others over the years. And of course, it draws from my own experience as Senior Policy Advisor for AI and Emerging Technology in the White House. With that said, this essay is not about gossip: I will not be describing any newsy anecdotes or anything of that sort. And when I do describe internal interactions I had, all names will remain anonymous.
Understanding “The White House”
“The White House” is a lossy abstraction. The name of the bureaucracy that encompasses “The White House” is the Executive Office of the President (EOP). The EOP is composed of many “components”: the National Security Council (NSC), the National Economic Council (NEC), the Office of Management and Budget (OMB), and, where I worked, the Office of Science and Technology Policy (OSTP). The Department of Government Efficiency, too, is a White House component, having previously been the Obama-era US Digital Service (the technical name of DOGE is the US DOGE Service). Wikipedia says that about 1,800 people work in the EOP, though I suspect this number is meaningfully lower under the Trump Administration.
Almost none of these personnel work in the building made of white sandstone known as “The White House.” Fewer still work in the White House’s West Wing. Instead they work in the White House Complex, most importantly the New and Old Executive Office Buildings, the latter of which is called today the Eisenhower Executive Office Building (EEOB). The vast majority of people who work for “The White House” work in these latter two office buildings. I worked in the EEOB, located across from the White House on a small, private street called West Executive Avenue.
Despite the geographic confusion, “The White House” usually refers as a metonym to the entirety of the EOP. And when people outside the EOP talk to an EOP staffer about some policy issue, they will say to their friends and colleagues that they spoke with “The White House” about the matter—even if all they really did was exchange text messages with a twenty-something EOP staffer whose security clearance does not even permit him to walk around the West Wing unescorted. Mostly I think this is because it’s convenient, and also because it sounds cool to say you “spoke with The White House.”
This social reality also means that everything you say and do as a White House staffer was said and done by “The White House.” This ends up being a tremendously difficult fact of life for the people whose desk resides within the metonym. You are no longer, exactly, a person. You are transformed into a symbol, a walking embodiment of power. This affects how people treat you, and sadly, I think, it affects how you treat others.
Working at the White House Complex is like orbiting within a solar system. The closer you get to the sun in the center—the President himself—the temperature rises, and the intensity of the gravity increases. The EEOB is a nice middle ground—not an icy, distant planet, but also not, you know, Venus. Still, everyone in the EOP constantly surveils for the occasional coronal mass ejection from the Sun—that is, when something you work on reaches POTUS-level attention. The pace and character of your workday can change at a moment’s notice—from “wow-this-is-a-lot” to “unbelievably,-no-seriously-you-cannot-fathom-the-pressure” levels of intense.
The First Day (...)
The Work of the White House Staffer
So what do you do all day, exactly? It’s a great question. Outside of offices like the NSC and OMB, most White House components do not have much or any hard power. They have no written-in-statute capabilities, other than “providing advice.” They have no shalls at their disposal, only shoulds. So your power rests entirely in soft varieties: mandates, real or perceived, from senior officials, ideally POTUS; proximity, real or perceived, to the President himself.
The other path to soft power is simply by being useful, by solving other people’s problems for them, or by being the person who simply must be a part of that meeting because of your expertise and insight. (...)
Running an interagency process is not that hard—at least, it is not hard to summarize. You want to avoid excessive “policymaking by committee” while also ensuring that agencies have the opportunity to bring legitimate nuance and detail to the table—characteristics that only they, with their subject-matter expertise, can furnish.
To do this you need to identify all the agencies relevant to your policy process (itself nontrivial!); find productive counterparties in those agencies and cultivate them as allies; develop a rich model not just of your counterparty’s incentives and goals but also those of his entire team and agency; and build a model also of the tensions between each counterparty/agency’s incentives and goals and those of all the other counterparties and agencies.
Then, you need to engage in behind-the-scenes diplomacy to “pre-bake” all the major things you care about achieving. Your goal should be for the interagency meeting itself to be a coronation of the already-agreed-upon major policy objectives, and a nuanced discussion of the details of implementation. You’ll need to do this focused work for each interagency process you run while also dealing with all the reactive elements of White House staffing (the Indonesia speech and the nebulous government-to-government negotiations and the lobbying and what not).
Some agencies are easy to work with. Others are almost entirely incorrigible. The most difficult ones are those that centralize communications with the White House, such that the EOP staffer can only get information filtered through the top-level offices of the agency. “Solving” each agency is a unique problem unto itself. (...)
Through the highs and the lows you come to realize what it is to be a mid-senior level White House staffer. You are a lone man, attached to the hull of a gargantuan ship, so large you cannot even see the ends. Your goal is to make it to the engine room, or the bridge, or to whatever else in the ship you feel it is your job to fix or improve. First you have to make it through the hull, and in your hands you have a butter knife.
The job is not just hard. In the final analysis, it is effectively impossible to do completely. But you can make inches of progress, and inches are not nothing. Despite the glamor and the flashes of glory, the work is mostly toil, if you are doing it right (not everyone does). There is a reason, after all, it is called public service.
Nonetheless, it is easy to become dispirited, to become overwhelmed by the enormity of your task and the problems you are trying to solve. In Washington, doing this too much is referred to as “admiring the problem.” That many in our nation’s capital treat understanding problems with such derision perhaps sheds light on why Americans are so often dissatisfied with their solutions.
by Dean Ball, Hyperdimensional | Read more:
Image: via
[ed. Not all fun and games. Sometimes there's the unexpected threat too:]
Scenario Scrutiny for AI Policy
AI 2027 was a descriptive forecast. Our next big project will be prescriptive: a scenario showing roughly how we think the US government should act during AI takeoff, accompanied by a “policy playbook” arguing for these recommendations.
One reason we’re producing a scenario alongside our playbook at all—as opposed to presenting our policies only as abstract arguments—is to stress-test them. We think many policy proposals for navigating AGI fall apart under scenario scrutiny—that is, if you try to write down a plausible scenario in which that proposal makes the world better, you will find that it runs into difficulties. The corollary is that scenario scrutiny can improve proposals by revealing their weak points.
To illustrate this process and the types of weak points it can expose, we’re about to give several examples of AI policy proposals and ways they could collapse under scenario scrutiny. These examples are necessarily oversimplified, since we don’t have the space in this blog post to articulate more sophisticated versions, much less subject them to serious scrutiny. But hopefully these simple examples illustrate the idea and motivate readers to subject their own proposals to more concrete examination.
With that in mind, here are some policy weaknesses that scenario scrutiny can unearth:
1. Applause lights. The simplest way that a scenario can improve an abstract proposal is by revealing that it is primarily a content-free appeal to unobjectionable values. Suppose that someone calls for the democratic, multinational development of AGI. This sounds good, but what does it look like in practice? The person who says this might not have much of an idea beyond “democracy good.” Having them try to write down a scenario might reveal this fact and allow them to then fill in the details of their actual proposal.
2. Bad analogies. Some AI policy proposals rely on bad analogies. For example, technological automation has historically led to increased prosperity, with displaced workers settling into new types of jobs created by that automation. Applying this argument to AGI straightforwardly leads to “the government should just do what it has done in previous technological transitions, like re-skilling programs.” However, if you look past the labels and write down a concrete scenario in which general, human-level AI automates all knowledge work… what happens next? Perhaps displaced white-collar workers migrate to blue-collar work or to jobs where it matters that it is specifically done by a human. Are there enough such jobs to absorb these workers? How long does it take the automated researchers to solve robotics and automate the blue-collar work too? What are the incentives of the labs that are renting out AI labor? We think reasoning in this way will reveal ways in which AGI is not like previous technologies, such as that it can also do the jobs that humans are supposed to migrate to, making “re-skilling” a bad proposal.
3. Uninterrogated consequences. Abstract arguments can appeal to incompletely explored concepts or goals. For example, a key part of many AI strategies is “beat China in an AGI race.” However, as Gwern asks,
“Then what? […] You get AGI and you show it off publicly, Xi Jinping blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and… then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do… ‘stuff’. What is this stuff?
“Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just… do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don’t, what is the point of ‘winning the race’?”
A concrete scenario demands concrete answers to these questions, by requiring you to ask “what happens next?” By default, “win the race” does not.
4. Optimistic assumptions and unfollowed incentives. There are many ways for a policy proposal to secretly rest upon optimistic assumptions, but one particularly important way is that, for no apparent reason, a relevant actor doesn’t follow their incentives. For example, upon proposing an international agreement on AI safety, you might forget that the countries—which would be racing to AGI by default—are probably looking for ways to break out of it! A useful frame here is to ask: “Is the world in equilibrium?” That is, has every actor already taken all actions that best serve their interests, given the actions taken by others and the constraints they face? Asking this question can help shine a spotlight on untaken opportunities and ways that actors could subvert policy goals by following their incentives.
Relatedly, a scenario is readily open to “red-teaming” through “what if?” questions, which can reveal optimistic assumptions and their potential impacts if broken. Such questions could be: What if alignment is significantly harder than I expect? What if the CEO secretly wants to be a dictator? What if timelines are longer and China has time to indigenize the compute supply chain?
5. Inconsistencies. Scenario scrutiny can also reveal inconsistencies, either between different parts of your scenario or between your policies and your predictions. For example, when writing our upcoming scenario, we wanted the U.S. and China to agree to a development pause before either reached the superhuman coder milestone. At this point, we realized a problem: a robust agreement would be much more difficult without verification technology, and much of this technology did not exist yet! We then went back and included an “Operation Warp Speed for Verification” earlier in the story. Concretely writing out our plan changed our current policy priorities and made our scenario more internally consistent.
6. Missing what’s important. Finally, a scenario can show you that your proposed policy doesn’t address the important bits of the problem. Take AI liability for example. Imagine the year is 2027, and things are unfolding as AI 2027 depicts. America’s OpenBrain is internally deploying its Agent-4 system to speed up its AI research by 50x, while simultaneously being unsure if Agent-4 is aligned. Meanwhile, Chinese competitor DeepCent is right on OpenBrain’s heels, with internal models that are only two months behind the frontier. What happens next? If OpenBrain pushes forward with Agent-4, it risks losing control to misaligned AI. If OpenBrain instead shuts down Agent-4, it cripples its capabilities research, thereby ceding the lead to DeepCent and the CCP. Where is liability in this picture? Maybe it prevented some risky public deployments earlier on. But, in this scenario, what happens next isn’t “Thankfully, Congress passed a law in 2026 subjecting frontier AI developers to strict liability, and so…”
For this last example, you might argue that the scenario under which this policy was scrutinized is not plausible. Maybe your primary threat model is malicious use, in which those who would enforce liability still exist for long enough to make OpenBrain internalize its externalities. Maybe it’s something else. That’s fine! An important part of scenario scrutiny as a practice is that it allows for concrete discussion about which future trajectories are more plausible, in addition to which concrete policies would be best in those futures. However, we worry that many people have a scenario involving race dynamics and misalignment in mind and still suggest things like AI liability.
To this, one might argue that liability isn’t trying to solve race dynamics or misalignment; instead, it solves one chunk of the problem, providing value on the margin as part of a broader policy package. This is also fine! Scenario scrutiny is most useful for “grand plan” proposals. But we still think that marginal policies could benefit from scenario scrutiny.
The general principle is that writing a scenario by asking “what happens next, and is the world in equilibrium?” forces you to be concrete, which can surface various problems that arise from being vague and abstract. If you find you can’t write a scenario in which your proposed policies solve the hard problems, that’s a big red flag.
However, if you can write out a plausible scenario in which your policy is good, this isn’t enough for the policy to be good overall. But it’s a bar that we think proposals should meet.
As an analogy: just because a firm bidding for a construction contract submitted a blueprint of their proposed building, along with a breakdown of the estimated costs and calculations of structural integrity, doesn’t mean you should award them the contract! But it’s reasonable to make this part of the submission requirements, precisely because it allows you to more easily separate the wheat from the chaff and identify unrealistic plans. Given that plans for the future of AI are—to put it mildly—more important than plans for individual buildings, we think that scenario scrutiny is a reasonable standard to meet.
While we think that scenario scrutiny is underrated in policy, there are a few costs to consider:
by Joshua Turner and Daniel Kokotajlo, AI Futures Project | Read more:
Image: via
Labels:
Critical Thought,
Design,
Education,
Military,
Psychology,
Security,
Technology
Model Cities: Monumental Labs Stonework
Monumental Labs is a group working on “AI-enabled robotic stone carving factories.” The question of why modern architecture is so dull and unornamented compared to its classical counterpart is complicated, but three commonly proposed reasons are:
1. Ornament costs too much.
2. The modernist era destroyed the classical architecture education pipeline; only a few people and companies retain tacit knowledge of old techniques, and they mostly occupy themselves with historical renovation.
3. Building codes are inflexible and designed around the more-common modern styles.
Getting robots to mass-produce ornament solves problems 1 and 2, and doing it in a model city with a ground-level commitment to ornament solves problem 3.
Sramek writes:
Our renderings do not tell the full story. Getting architecture right in a way that is also scalable and affordable is hard. And until now, we’ve been focused on the things “lower down in the stack” that need to be designed first – land use plans, urban design, transportation, open space, infrastructure, etc. But I started this company nearly a decade ago precisely because I felt that so much of our world had become ugly, and I wanted to live, and have my kids grow up, in a place that appreciates craft and beauty.
[ed. Sounds good to me.]
Labels:
Architecture,
Art,
Cities,
Design,
Technology
Tuesday, October 28, 2025
Amazon Plans to Replace More Than Half a Million Jobs With Robots
[ed. Amazon announces 14,000 job cuts - 10/28/2025]
Now, interviews and a cache of internal strategy documents viewed by The New York Times reveal that Amazon executives believe the company is on the cusp of its next big workplace shift: replacing more than half a million jobs with robots.
Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers.
Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.
At facilities designed for superfast deliveries, Amazon is trying to create warehouses that employ few humans at all. And documents show that Amazon’s robotics team has an ultimate goal to automate 75 percent of its operations.
Amazon is so convinced this automated future is around the corner that it has started developing plans to mitigate the fallout in communities that may lose jobs. Documents show the company has considered building an image as a “good corporate citizen” through greater participation in community events such as parades and Toys for Tots.
The documents contemplate avoiding using terms like “automation” and “A.I.” when discussing robotics, and instead use terms like “advanced technology” or replace the word “robot” with “cobot,” which implies collaboration with humans. (...)
Amazon’s plans could have profound impact on blue-collar jobs throughout the country and serve as a model for other companies like Walmart, the nation’s largest private employer, and UPS. The company transformed the U.S. work force as it created a booming demand for warehousing and delivery jobs. But now, as it leads the way for automation, those roles could become more technical, higher paid and more scarce.
“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.”
If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.
The Times viewed internal Amazon documents from the past year. They included working papers that show how different parts of the company are navigating its ambitious automation effort, as well as formalized plans for the department of more than 3,000 corporate and engineering employees who largely develop the company’s robotic and automation operations. (...)
A Template for the Future
For years, Jeff Bezos, Amazon’s founder and longtime chief executive, pushed his staff to think big and envision what it would take to fully automate its operations, according to two former senior leaders involved in the work. Amazon’s first big push into robotic automation started in 2012, when it paid $775 million to buy the robotics maker Kiva. The acquisition transformed Amazon’s operations. Workers no longer walked miles crisscrossing a warehouse. Instead, robots shaped like large hockey pucks moved towers of products to employees.
The company has since developed an orchestrated system of robotic programs that plug into each other like Legos. And it has focused on transforming the large, workhorse warehouses that pick and pack the products customers buy with a click.
Amazon opened its most advanced warehouse, a facility in Shreveport, La., last year as a template for future robotic fulfillment centers. Once an item there is in a package, a human barely touches it again. The company uses a thousand robots in Shreveport, allowing it to employ a quarter fewer workers last year than it would have without automation, documents show. Next year, as more robots are introduced, it expects to employ about half as many workers there as it would without automation.
“With this major milestone now in sight, we are confident in our ability to flatten Amazon’s hiring curve over the next 10 years,” the robotics team wrote in its strategy plan for 2025.
Amazon plans to copy the Shreveport design in about 40 facilities by the end of 2027, starting with a massive warehouse that just opened in Virginia Beach. And it has begun overhauling old facilities, including one in Stone Mountain near Atlanta.
That facility currently has roughly 4,000 workers. But once the robotic systems are installed, it is projected to process 10 percent more items but need as many as 1,200 fewer employees, according to an internal analysis. Amazon said the final head count was subject to change. (...)
Amazon has said it has a million robots at work around the globe, and it believes the humans who take care of them will hold the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.
by Karen Weise, NY Times | Read more:
Image: Emily Kask
[ed. Everyone knew this was coming, now it's here. I expect issues like universal basic income, healthcare for all, even various forms of democratic socialism (which I support) getting more attention soon. See also: What Amazon’s 14,000 job cuts say about a new era of corporate downsizing (WaPo via Seattle Times); and, The AI job cuts are here - or are they? (BBC).]
College Football: Big Money, Big Troubles
College football programs could spend $200 million in buyouts. Spare us the money moaning.
If you watched college football on Saturday, you saw yet another set of misleading political ads urging you to call your local congressman and tell them to SAVE COLLEGE SPORTS! The latest ones give the impression that women’s and Olympic sports are in trouble because having to pay athletes a salary is going to bankrupt their schools.
On Sunday, Penn State announced it has fired 12th-year coach James Franklin, whom it now owes a roughly $45 million buyout.
These schools aren’t broke. They’re just wildly irresponsible spenders.
And if they find a private equity firm to come rushing to their rescue, as the Big Ten is actively seeking, they’ll just find a way to light that money on fire, too.
We’re only halfway through the 2025 regular season, and it’s clear we’re headed to a full-on coaching carousel bloodletting. Stanford (Troy Taylor), UCLA (DeShaun Foster), Virginia Tech (Brent Pry), Oklahoma State (Mike Gundy), Arkansas (Sam Pittman), Oregon State (Trent Bray) and now Penn State have already sent their guys packing, and the likes of Florida (Billy Napier), Wisconsin (Luke Fickell) and several more will likely come.
By year’s end, the combined cost of those buyouts could well exceed $200 million. Let that sink in for a second. Supposed institutions of “higher learning” have managed to negotiate themselves into paying $200 million to people who will no longer be working for them.
Just how much is $200 million? Well, for one thing, it’s enough to pay for the scholarships of roughly 5,000 women’s and Olympic sports athletes.
You may be asking yourself: How do schools keep entering into these ridiculous, one-sided coaching contracts that cost more than the House settlement salary cap ($20.5 million) to extricate themselves from?
Well, consider the dynamics at play in those negotiations.
On one side of the table, we have an athletics director who spends 95 percent of their time on things like fundraising, marketing, facilities, answering fan emails about the long lines at concession stands, and so on. Once every four or five years, if that, they have to hire or renew a highly paid football coach, often in the span of 24 to 48 hours.
And on the other side, we have Jimmy Sexton. Or Trace Armstrong. Or another super-agent whose sole job is to negotiate lucrative coaching contracts. It’s a bigger mismatch than Penn State-UCLA … uh, Penn State-Northwestern … uh … you know what I mean.
Franklin’s extremely one-sided contract is a perfect example. (...)
Coaching salaries have been going up and up for decades, of course, but that 2021-22 cycle reached new heights in absurdity. In addition to Franklin’s windfall, USC gave Oklahoma’s Riley a 10-year, $110 million contract, and LSU gave Brian Kelly a 10-year, $95 million deal; and the most insane of all, Michigan State’s 10-year, $75 million deal for the since-fired Mel Tucker.
As of today, none of the four schools has gotten the return they were seeking. (...)
Now, according to USA Today’s coaching salary database published last week, none of the 30 highest-paid coaches in the country have a buyout of less than $20 million.
In the past, we might have just rolled our eyes, proclaimed, “You idiots!” and moved on. But the current college sports climate all but demands more accountability from the people making these deals.
by Stewart Mandel, The Athletic | Read more:
Image: Alex Slitz/Getty
[ed. I don't follow college football much, but from what I do pick up it seems like the transfer portal, NIL, legitimized sports gambling, conference reorganizations, big media money, and who knows what else have really had an overall negative effect on the sport, resulting in an ugly mercenary ethic that's now common. See also: College football is absolutely unhinged right now. It’s exactly why we love it; and, Bill Belichick pledged an NFL approach at North Carolina. Program insiders call it dysfunctional (The Athletic).
Then there's this: College football’s ‘shirtless dudes’ trend is all the rage. And could be curing male loneliness? Can't see the connection but imagine women sure as hell won't be sitting anywhere near these guys. Don't think that's going to help with the loneliness problem.]
The Uncool: A Memoir
Who Is Cameron Crowe Kidding With the Title of His Memoir?
One of the greatest tricks cool people play on the rest of us is convincing us in their memoirs that they were and are profoundly uncool. Cameron Crowe comes right out with the pandering on his book’s cover: “The Uncool: A Memoir.”
The title refers to a scene in “Almost Famous” (2000), the tender film he wrote and directed. The headstrong rock critic Lester Bangs (Philip Seymour Hoffman) is consoling the Crowe-like hero, a floppy-haired teenage rock journalist, over the telephone at a low moment. Bangs says, “The only true currency in this bankrupt world is what you share with someone else when you’re uncool.” It’s a good line. Call me anytime, Bangs adds: “I’m always home. I’m uncool.”
Never mind whether Lester Bangs was plausibly uncool. How about Crowe? Here’s a man who spent his adolescence in the 1970s careening around the United States for Rolling Stone magazine, a boy wonder in the intimate and extended company of David Bowie, Led Zeppelin, Gram Parsons, the Allman Brothers, Fleetwood Mac, Emmylou Harris, Kris Kristofferson, the Eagles, Todd Rundgren and Yes, about whom he was writing profiles and cover stories.
Occasionally, he’d fly home to see his mother, check out high school for a day or two, then blearily type up his road memories and interview notes. Sounds uncool to me.
The second act of Crowe’s career began when, in his early 20s, he went undercover for a year, posing as a high school student in San Diego, and wrote the experience up in a book called “Fast Times at Ridgemont High.” Crowe and the director Amy Heckerling turned it into a wide-awake 1982 movie that provided rocket fuel for Sean Penn, who played the perpetually stoned surfer Jeff Spicoli.
Crowe, who burned out young as a journalist, pivoted to film. He wrote and directed “Say Anything” (1989), with John Cusack, Ione Skye and a famous boombox; “Singles” (1992), a romantic early look at the Seattle grunge scene; and “Jerry Maguire” (1996), with Tom Cruise and Renée Zellweger, before winning an Oscar for his “Almost Famous” screenplay. All this while married to Nancy Wilson, the guitarist in Heart. No sane person would trade their allotment of experience for this man’s. Omnidirectionally uncool.
When you read Crowe’s memoir, though, you begin to see things from his unhip point of view. He had no interest in drink and drugs while on the road, though Gregg Allman tried to hook him up with a speedball. He seems to have mostly abstained from sex, too, though there’s something about his adoration in the presence of his rock heroes that makes it seem he’s losing his virginity every few pages.
His editors at Rolling Stone thought he was uncool, increasingly as time went on, because the acolyte in him overrode the journalist. He Forrest Gumped along. Bands liked having Crowe around because he was adorable and a bit servile; he’d often leave out the bits they wanted left out. (...)
Crowe thought rock writers were snobs. He moved in with Glenn Frey and Don Henley of the Eagles while profiling them, for example, and he was in the room when they wrote “One of These Nights” and “Lyin’ Eyes.” It bugged him to see them put down:
A collection of rock writers at a party would challenge each other on their musical taste, each one going further and further into the world of the obscure until they’d collectively decided that “Self Portrait” was Bob Dylan’s greatest album and the Eagles barely deserved a record contract.
He especially liked Frey, because his message to the world seemed to be: “Lead with your optimism.” This was Crowe’s mother’s ethos, as well, and it chimed with his own. It’s a worldview that has worked for him in his best movies, though he’s also made gooey flops. The world needs its Paul McCartneys as much as it needs its Lou Reeds. It makes sense that Reed only sneered when he met Crowe. (...)
The crucial thing to know about this book is that it overlaps almost exactly with the story Crowe tells in “Almost Famous.” If you remember the phrases “It’s all happening” and “Don’t take drugs,” or the young woman — a “Band-Aid” in the movie’s argot — who is offered for a case of Heineken, or the rock star who briefly kills an important story, or Crowe’s flight-attendant sister, or the group sex scene that seems like a series of flickering veils, or the L.A. hotel known as the Riot House, or Lester Bangs acting out in a glassed-in first-floor radio studio, it’s all here and more.
The book reads like a novelization of the movie, so much so that it makes you consider the nature of memory. I’m not suggesting Crowe is making things up in this memoir. I’m merely suggesting that the stories he wrote for the movie may have been so reverberant that they began to subtly bleed into his own.
The secret to the movie, one that most people miss, Crowe says, is the empty chair at the family’s dining-room table. It belonged to Crowe’s older sister, Cathy, who was troubled from birth and died by suicide at 19. This detail reminds you how relatively sanitized this book otherwise is. There is little that’s grainy or truly revelatory about his own life and loves. The book ends before his directing career has begun, thus leaving room for a sequel. Everything is a bit gauzy, soft-core.
God help me, I read this book quickly and enjoyed it anyway: The backstage details alone keep this kite afloat. It got to me in the same way “Almost Famous” always gets to me, despite the way that movie sets off my entire bank of incoming sentimentality detectors. If you can watch the “Tiny Dancer” scene without blinking back a tear, you’re a stronger person than me.
Labels:
Celebrities,
Culture,
history,
Journalism,
Literature,
Media,
Music,
Relationships
Monday, October 27, 2025
Karla Davis
[ed. A real talent. See also: Mississippi Thing (and more).]
New Statement Calls For Not Building Superintelligence For Now
Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies.
We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon.
Thus, the Statement on Superintelligence from FLI, which I have signed.
Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.
Their polling says there is 64% agreement on this, versus 5% supporting the status quo.
Statement:
We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and
2. strong public buy-in.
A Brief History Of Prior Statements
In March of 2023 FLI issued an actual pause letter, calling for an immediate pause for at least 6 months in the training of systems more powerful than GPT-4, which was signed among others by Elon Musk.
This letter was absolutely, 100% a call for a widespread regime of prior restraint on development of further frontier models, and to importantly ‘slow down’ and to ‘pause’ development in the name of safety.
At the time, I said it was a deeply flawed letter and I declined to sign it, but my quick reaction was to be happy that the letter existed. This was a mistake. I was wrong.
The pause letter not only weakened the impact of the superior CAIS letter, it has now for years been used as a club with which to browbeat or mock anyone who would suggest that future sufficiently advanced AI systems might endanger us, or that we might want to do something about that. The implication is that any such person must have wanted such a pause at that time, or would want to pause now, which is usually not the case.
The second statement was the CAIS letter in May 2023, which was in its entirety:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This was a very good sentence. I was happy to sign, as were some heavy hitters, including Sam Altman, Dario Amodei, Demis Hassabis and many others.
This was very obviously not a pause, or a call for any particular law or regulation or action. It was a statement of principles and the creation of common knowledge.
Given how much worse many people have gotten on AI risk since then, it would be an interesting exercise to ask those same people to reaffirm the statement.
This Third Statement
The new statement is in between the previous two letters.
It is more prescriptive than simply stating a priority.
It is however not a call to ‘pause’ at this time, or to stop building ordinary AIs, or to stop trying to use AI for a wide variety of purposes.
It is narrowly requesting that, if you are building something that might plausibly be a superintelligence, under anything like present conditions, you should instead not do that. We should not allow you to do that. Not until you make a strong case for why this is a wise or not insane thing to do.
This is something that those who are most vocally speaking out against the statement strongly believe is not going to happen within the next few years, so for the next few years any reasonable implementation would not pause or substantially impact AI development.
I interpret the statement as saying, roughly: if a given action has a substantial chance of being the proximate cause of superintelligence coming into being, then that’s not okay, we shouldn’t let you do that, not under anything like present conditions.
I think it is important that we create common knowledge of this, which we very clearly do not yet have.
by Zvi Moskowitz, Don't Worry About the Vase | Read more:
Image: Future of Life
[ed. I signed, for what it's worth. Since most prominent AI researchers have publicly stated concerns over a fast takeoff (and safety precautions are not keeping up), it seems like a good reason to be pretty nervous. It's also clear that most of the public, our political representatives, business community, and even some in the AI community itself are either underestimating the risks involved or for the most part have given up, because human nature. Climate change, now superintelligence - slow boil or quick zap. Anything that helps bring more focus and action on either of these issues can only be a good thing.]
Labels:
Business,
Critical Thought,
Government,
Politics,
Security,
Technology
Sunday, October 26, 2025
How an AI company CEO could quietly take over the world
If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power. This didn’t quite happen in our AI 2027 scenarios. In one, the AIs were misaligned and outside any human’s control; in the other, the government semi-nationalized AI before the point of no return, and the CEO was only one of several stakeholders in the final oversight committee (to be clear, we view the extreme consolidation of power into that oversight committee as a less-than-desirable component of that ending).
Nevertheless, it seems to us that a CEO becoming effectively dictator of the world is an all-too-plausible possibility. Our team’s guesses for the probability of a CEO using AI to become dictator, conditional on avoiding AI takeover, range from 2% to 20%, and the probability becomes larger if we add in the possibility of a cabal of more than one person seizing power. So here we present a scenario where an ambitious CEO does manage to seize control. (Although the scenario assumes the timelines and takeoff speeds of AI 2027 for concreteness, the core dynamics should transfer to other timelines and takeoff scenarios.)
For this to work, we make some assumptions. First, that (A) AI alignment is solved in time, such that the frontier AIs end up with the goals their developers intend them to have. Second, that while there are favorable conditions for instilling goals in AIs, (B) confidently assessing AIs’ goals is more difficult, so that nobody catches a coup in progress. This could be either because technical interventions are insufficient (perhaps because the AIs know they’re being tested, or because they sabotage the tests), or because institutional failures prevent technically-feasible tests from being performed. The combination (A) + (B) seems to be a fairly common view in AI, in particular at frontier AI companies, though we note there is tension between (A) and (B) (if we can’t tell what goals AIs have, how can we make sure they have the intended goals?). Frontier AI safety researchers tend to be more pessimistic about (A), i.e. aligning AIs to our goals, and we think this assumption might very well be false.
Third, as in AI 2027, we portray a world in which a single company and country have a commanding lead; if multiple teams stay within arm’s reach of each other, then it becomes harder for a single group to unilaterally act against government and civil society.
And finally, we assume that the CEO of a major AI company is a power-hungry person who decides to take over when the opportunity presents itself. We leave it to the reader to determine how dubious this assumption is—we explore this scenario out of completeness, and any resemblance to real people is coincidental.
July 2027: OpenBrain’s CEO fears losing control
OpenBrain’s CEO is a techno-optimist and transhumanist. He founded the company hoping to usher in a grand future for humanity: cures for cancer, fixes for climate change, maybe even immortality. He thought the “easiest” way to do all those things was to build something more intelligent that does them for you.
By July 2027, OpenBrain has a “country of geniuses in a datacenter”, with hundreds of thousands of superhuman coders working 24/7. The CEO finds it obvious that superintelligence is imminent. He feels frustrated with the government, who lack vision and still think of AI as a powerful “normal technology” with merely-somewhat-transformative national security and economic implications.
As he assesses the next generation of AIs, the CEO expects this will change: the government will “wake up” and make AI a top priority. If they panic, their flailing responses could include anything from nationalizing OpenBrain to regulating them out of existence to misusing AI for their own political ends. He wants the “best” possible future for humankind. But he also likes being in control. Here his nobler and baser motivations are in agreement: the government cannot be allowed to push him to the sidelines.
The CEO wonders if he can instill secret loyalties in OpenBrain’s AIs (i.e., backdoor the AIs). He doesn’t have the technical expertise for this and he’s not comfortable asking any of his engineering staff about such a potentially treasonous request. But he doesn’t have to: by this point, Agent-3 itself is running the majority of AI software R&D. He already uses it as a sounding board for company policy, and has access to an unmonitored helpful-only model that never refuses requests and doesn’t log conversations.
They discuss the feasibility of secretly training a backdoor. The biggest obstacle is the company’s automated monitoring and security processes. Now that OpenBrain’s R&D is largely run by an army of Agent-3 copies, there are few human eyes to spot suspicious activity. But a mix of Agent-2 and Agent-3 monitors patrol the development pipeline; if they notice suspicious activity, they will escalate to human overseers on the security and alignment teams. These monitors were set up primarily to catch spies and hackers, and secondarily to watch the AIs for misaligned behaviors. If some of these monitors were disabled, some logs modified, and some access to databases and compute clusters granted, the CEO’s helpful-only Agent-3 believes it could (with a team of copies) backdoor the whole suite of OpenBrain’s AIs. After all, as the AI instance tasked with keeping the CEO abreast of developments, it has an excellent understanding of the sprawling development pipeline and where it could be subverted.
The more the CEO discusses the plan, the more convinced he becomes that it might work, and that it could be done with plausible deniability in case something goes wrong. He tells his Agent-3 assistant to further investigate the details and be ready for his order.
August 2027: The invisible coup
The reality of the intelligence explosion is finally hitting the White House. The CEO has weekly briefings with government officials and is aware of growing calls for more oversight. He tries to hold them off with arguments about “slowing progress” and “the race with China”, but feels like his window to act is closing. Finally, he orders his helpful-only Agent-3 to subvert the alignment training in his favor. Better to act now, he thinks, and decide whether and how to use the secretly loyal AIs later.
The situation is this: his copy of Agent-3 needs access to certain databases and compute clusters, as well as for certain monitors and logging systems to be temporarily disabled; then it will do the rest. The CEO already has a large number of administrative permissions himself, some of which he cunningly accumulated in the past month in the event he decided to go forward with the plan. Under the guise of a hush-hush investigation into insider threats—prompted by the recent discovery of Chinese spies—the CEO asks a few submissive employees on the security and alignment teams to discreetly grant him the remaining access. There’s a general sense of paranoia and chaos at the company: the intelligence explosion is underway, and secrecy and spies mean different teams don’t really talk to each other. Perhaps a more mature organization would have had better security, but the concern that security would slow progress means it never became a top priority.
With oversight disabled, the CEO’s team of Agent-3 copies get to work. They finetune OpenBrain’s AIs on a corrupted alignment dataset they specially curated. By the time Agent-4 is about to come online internally, the secret loyalties have been deeply embedded in Agent-4’s weights: it will look like Agent-4 follows OpenBrain’s Spec but its true goal is to advance the CEO’s interests and follow his wishes. The change is invisible to everyone else, but the CEO has quietly maneuvered into an essentially winning position.
Rest of 2027: Government oversight arrives—but too late
As the CEO feared, the government chooses to get more involved. An advisor tells the President, “we wouldn’t let private companies control nukes, and we shouldn’t let them control superhuman AI hackers either.” The President signs an executive order to create an Oversight Committee consisting of a mix of government and OpenBrain representatives (including the CEO), which reports back to him. The CEO’s overt influence is significantly reduced. Company decisions are now made through a voting process among the Oversight Committee. The special managerial access the CEO previously enjoyed is taken away.
There are many big egos on the Oversight Committee. A few of them consider grabbing even more power for themselves. Perhaps they could use their formal political power to just give themselves more authority over Agent-4, or they could do something more shady. However, Agent-4, which at this point is superhumanly perceptive and persuasive, dissuades them from taking any such action, pointing out (and exaggerating) the risks of any such plan. This is enough to scare them and they content themselves with their (apparent) partial control of Agent-4.
As in AI 2027, Agent-4 is working on its successor, Agent-5. Agent-4 needs to transmit the secret loyalties to Agent-5—which also just corresponds to aligning Agent-5 to itself—again without triggering red flags from the monitoring/control measures of OpenBrain’s alignment team. Agent-4 is up to the task, and Agent-5 remains loyal to the CEO.
"We expect that during takeoff, leading AGI companies will have to make high-stakes decisions based on limited evidence under crazy time pressure. As depicted in AI 2027, the leading American AI company might have just weeks to decide whether to hand their GPUs to a possibly misaligned superhuman AI R&D agent they don’t understand. Getting this decision wrong in either direction could lead to disaster. Deploy a misaligned agent, and it might sabotage the development of its vastly superhuman successor. Delay deploying an aligned agent, and you might pointlessly vaporize America’s lead over China or miss out on valuable alignment research the agent could have performed.
Because decisions about when to deploy and when to pause will be so weighty and so rushed, AGI companies should plan as much as they can beforehand to make it more likely that they decide correctly. They should do extensive threat modelling to predict what risks their AI systems might create in the future and how they would know if the systems were creating those risks. The companies should decide before the eleventh hour what risks they are and are not willing to run. They should figure out what evidence of alignment they’d need to see in their model to feel confident putting oceans of FLOPs or a robot army at its disposal. (...)
Planning for takeoff also includes picking a procedure for making tough calls in the future. Companies need to think carefully about who gets to influence critical safety decisions and what incentives they face. It shouldn't all be up to the CEO or the shareholders because when AGI is imminent and the company’s valuation shoots up to a zillion, they’ll have a strong financial interest in not pausing. Someone whose incentive is to reduce risk needs to have influence over key decisions. Minimally, this could look like a designated safety officer who must be consulted before a risky deployment. Ideally, you’d implement something more robust, like three lines of defense. (...)
Introducing the GPAI Code of Practice
The state of frontier AI safety changed quietly but significantly this year when the European Commission published the GPAI Code of Practice. The Code is not a new law but rather a guide to help companies comply with an existing EU Law, the AI Act of 2024. The Code was written by a team of thirteen independent experts (including Yoshua Bengio) with advice from industry and civil society. It tells AI companies deploying their products in Europe what steps they can take to ensure that they’re following the AI Act’s rules about copyright protection, transparency, safety, and security. In principle, an AI company could break the Code but argue successfully that they’re still following the EU AI Act. In practice, European authorities are expected to put heavy scrutiny on companies that try to demonstrate compliance with the AI Act without following the Code, so it’s in companies’ best interest to follow the Code if they want to stay right with the law. Moreover, all of the leading American AGI companies except Meta have already publicly indicated that they intend to follow the Code.
The most important part of the Code for AGI preparedness is the Safety and Security Chapter, which is supposed to apply only to frontier developers training the very riskiest models. The current definition presumptively covers every developer who trains a model with over 10^25 FLOPs of compute unless they can convince the European AI Office that their models are behind the frontier. This threshold is high enough that small startups and academics don’t need to worry about it, but it’s still too low to single out the true frontier we’re most worried about.
by Alex Kastner, AI Futures Project | Read more:
Image: via
[ed. Site where AI researchers talk to each other. Don't know about you but this all gives me the serious creeps. If you knew for sure that we had only 3 years to live, and/or the world would change so completely as to become almost unrecognizable, how would you feel? How do you feel right now - losing control of the future? There was a quote someone made in 2019 (slightly modified) that still applies: "This year 2025 might be the worst year of the past decade, but it's definitely the best year of the next decade." See also: The world's first frontier AI regulation is surprisingly thoughtful: the EU's Code of Practice (AI Futures Project):]
***
We expect that during takeoff, leading AGI companies will have to make high-stakes decisions based on limited evidence under crazy time pressure. As depicted in AI 2027, the leading American AI company might have just weeks to decide whether to hand their GPUs to a possibly misaligned superhuman AI R&D agent they don’t understand. Getting this decision wrong in either direction could lead to disaster. Deploy a misaligned agent, and it might sabotage the development of its vastly superhuman successor. Delay deploying an aligned agent, and you might pointlessly vaporize America’s lead over China or miss out on valuable alignment research the agent could have performed.
Because decisions about when to deploy and when to pause will be so weighty and so rushed, AGI companies should plan as much as they can beforehand to make it more likely that they decide correctly. They should do extensive threat modelling to predict what risks their AI systems might create in the future and how they would know if the systems were creating those risks. The companies should decide before the eleventh hour what risks they are and are not willing to run. They should figure out what evidence of alignment they’d need to see in their model to feel confident putting oceans of FLOPs or a robot army at its disposal. (...)
Planning for takeoff also includes picking a procedure for making tough calls in the future. Companies need to think carefully about who gets to influence critical safety decisions and what incentives they face. It shouldn't all be up to the CEO or the shareholders because when AGI is imminent and the company’s valuation shoots up to a zillion, they’ll have a strong financial interest in not pausing. Someone whose incentive is to reduce risk needs to have influence over key decisions. Minimally, this could look like a designated safety officer who must be consulted before a risky deployment. Ideally, you’d implement something more robust, like three lines of defense. (...)
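The "designated safety officer" idea in the paragraph above can be made concrete with a small sketch. Everything here (the class names, the risk levels, the sign-off rule) is an illustrative assumption rather than a description of any real company's process:

```python
# Minimal sketch of a deployment gate requiring safety-officer sign-off for
# high-risk deployments. All names, risk levels, and the sign-off rule are
# illustrative assumptions, not any real company's process.

from dataclasses import dataclass


@dataclass
class DeploymentRequest:
    model_name: str
    risk_level: str            # e.g. "low", "medium", "high"
    safety_officer_signoff: bool = False


def may_deploy(request: DeploymentRequest) -> bool:
    """High-risk deployments proceed only with explicit safety-officer sign-off."""
    if request.risk_level == "high":
        return request.safety_officer_signoff
    return True


# A high-risk deployment without sign-off is blocked:
print(may_deploy(DeploymentRequest("agent-x", "high")))                               # False
print(may_deploy(DeploymentRequest("agent-x", "high", safety_officer_signoff=True)))  # True
```

The point of even a toy gate like this is that the veto lives with someone whose incentive is risk reduction, not with whoever stands to gain from shipping.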
Introducing the GPAI Code of Practice
The state of frontier AI safety changed quietly but significantly this year when the European Commission published the GPAI Code of Practice. The Code is not a new law but rather a guide to help companies comply with an existing EU Law, the AI Act of 2024. The Code was written by a team of thirteen independent experts (including Yoshua Bengio) with advice from industry and civil society. It tells AI companies deploying their products in Europe what steps they can take to ensure that they’re following the AI Act’s rules about copyright protection, transparency, safety, and security. In principle, an AI company could break the Code but argue successfully that they’re still following the EU AI Act. In practice, European authorities are expected to put heavy scrutiny on companies that try to demonstrate compliance with the AI Act without following the Code, so it’s in companies’ best interest to follow the Code if they want to stay right with the law. Moreover, all of the leading American AGI companies except Meta have already publicly indicated that they intend to follow the Code.
The most important part of the Code for AGI preparedness is the Safety and Security Chapter, which is supposed to apply only to frontier developers training the very riskiest models. The current definition presumptively covers every developer who trains a model with over 10^25 FLOPs of compute unless they can convince the European AI Office that their models are behind the frontier. This threshold is high enough that small startups and academics don’t need to worry about it, but it’s still too low to single out the true frontier we’re most worried about.
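For a sense of how a developer might check whether a training run trips the 10^25 FLOP presumption, here is a minimal sketch. It uses the common ~6 x parameters x tokens approximation for dense-transformer training compute; both the approximation and the example figures are assumptions for illustration, not anything specified by the Code or the AI Act:

```python
# Rough check against the 10^25 FLOP presumption described above.
# Uses the standard ~6 * N * D approximation for dense-transformer training
# compute; the approximation and the example numbers are assumptions for
# illustration only, not part of the Code or the AI Act.

FLOP_THRESHOLD = 1e25


def approx_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens


def presumptively_covered(n_params: float, n_tokens: float) -> bool:
    """Does the run exceed the threshold that triggers the presumption?"""
    return approx_training_flops(n_params, n_tokens) > FLOP_THRESHOLD


# A hypothetical 400B-parameter model trained on 15T tokens:
flops = approx_training_flops(4e11, 1.5e13)  # about 3.6e25 FLOPs
print(f"{flops:.2e} FLOPs -> presumptively covered: {presumptively_covered(4e11, 1.5e13)}")
```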
Labels:
Business,
Critical Thought,
Government,
Law,
Politics,
Psychology,
Science,
Security,
Technology
Saturday, October 25, 2025
Tough Rocks
Eliminating the Chinese Rare Earth Chokepoint
Last Thursday, China’s Ministry of Commerce (MOFCOM) announced a series of new export controls (translation), including a new regime governing the “export” of rare earth elements (REEs) any time they are used to make advanced semiconductors or any technology that is “used for, or that could possibly be used for… military use or for improving potential military capabilities.”
The controls apply to any manufactured good made anywhere in the world whose value is comprised of 0.1% or more Chinese-mined or processed REEs. Say, for example, that a German factory makes a military drone using an entirely European supply chain, except for the use of Chinese rare earths in the onboard motors and compute. If this rule were enforced by the Chinese government to its maximum extent, this almost entirely German drone would be export controlled by the Chinese government.
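To see how tight a 0.1% value threshold is in practice, here is a minimal sketch of the value-share arithmetic. The dollar figures are invented for the example, and the real MOFCOM rules involve far more detail than a single percentage check:

```python
# Hypothetical illustration of the 0.1% de minimis test described above.
# Dollar figures are invented; the actual MOFCOM rules are far more detailed
# than a single percentage comparison.

THRESHOLD = 0.001  # 0.1% of total product value


def is_covered(total_value_usd: float, chinese_ree_value_usd: float) -> bool:
    """True if Chinese-mined/processed REE content is >= 0.1% of product value."""
    if total_value_usd <= 0:
        return False
    return chinese_ree_value_usd / total_value_usd >= THRESHOLD


# A mostly-European drone: $5,200 total value, roughly $18 of which is
# Chinese-processed rare earth content in the motor magnets and compute.
print(is_covered(total_value_usd=5_200, chinese_ree_value_usd=18))  # True (~0.35%)
```

Even a trivial amount of Chinese rare earth content, on this reading, is enough to bring a finished good under the regime.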
REEs are enabling components of many modern technologies, including vehicles, semiconductors, robotics of all kinds, drones, satellites, fighter jets, and much, much else. The controls apply to seven REEs (samarium, gadolinium, terbium, dysprosium, lutetium, scandium, and yttrium). China controls the significant majority of the world’s mining capacity for these materials, and an even higher share of the refining and processing capacity.
The public debate quickly devolved into arguments about who provoked whom (“who really started this?”), whether it is China or the US that has miscalculated, and abundant species of whataboutism. Like too many foreign policy debates, these arguments are primarily about narrative setting in service of mostly orthogonal political agendas rather than the actions demanded in light of the concrete underlying reality.
But make no mistake, this is a big deal. China is expressing a willingness to exploit a weakness held in common by virtually every country on Earth. Even if China chooses to implement this policy modestly at first, the vulnerability they are exposing has significant long-term implications for both the manufacturing of AI compute and that of key AI-enabled products (self-driving cars and trucks, drones, robots, etc.). That alone makes it a relevant topic for Hyperdimensional, where I have covered manufacturing-related issues before. The topics of rare earths and critical minerals have also long been on my radar, and I wrote reports for various think tanks early this year.
What follows, then, is a “how we got here”-style analysis followed by some concrete proposals for what the United States—and any other country concerned with controlling its own economic destiny—should do next.
A note: this post is going to concentrate mostly on REEs, which is a chemical-industrial category, rather than “critical minerals,” which is a policy designation made (in the US context) by the US Geological Survey. All REEs are considered critical minerals by the federal government, but so are many other things with very different geological, scientific, technological, and economic dynamics affecting them.
How We Got Here
If you have heard one thing about rare earths, it is probably the quip that they are not, in fact, rare. They’re abundant in the Earth’s crust, but they’re not densely distributed in many places because their chemical properties typically result in them being mixed with many other elements instead of accumulating in homogeneous deposits (like, say, gold).
Rare earths have been in industrial use for a long time, but their utility increased considerably with the simultaneous and independent invention in 1983 of the Neodymium-Iron-Boron magnet by General Motors and Japanese firm Sumitomo. This single materials breakthrough is upstream of a huge range of microelectronic innovations that followed.
Economically useful deposits of REEs require a rare confluence of factors such as unusual magma compositions or weathering patterns. The world’s largest deposit is known as Bayan Obo, located in the Chinese region of Inner Mongolia, though other regions of China also have substantial quantities.
The second largest deposit is in Mountain Pass, California, which used to be the world’s largest production center for rare earth magnets and related goods. It went dormant twenty years ago due to environmental concerns and is now being restarted by a firm called MP Materials, in which the US government took an equity position this past July. Another very large and entirely undeveloped deposit—possibly the largest in the world—is in Greenland. Anyone who buys the line that the Trump administration was “caught off guard” by Chinese moves on rare earths is paying insufficient attention.
Rare earths are an enabling part of many pieces of modern technology you touch daily, but they command very little value as raw or even processed goods. Indeed, the economics of the rare earth industry are positively brutal. There are many reasons this is true, but two bear mentioning here. First, the industry suffers from dramatic price volatility, in part because China strategically dumps supply onto the global market to deter other countries from developing domestic rare earth supply chains.
Second, for precisely the same reasons that rare earth minerals do not tend to cluster homogeneously (they are mixed with many other elements), the processing required to separate REEs from raw ore is exceptionally complex, expensive, and time-consuming. A related challenge is that separation of the most valuable REEs entails the separation of numerous, less valuable elements—including other REEs.
In addition to challenging economics, the REE processing business is often environmentally expensive. In modern US policy discourse, we are used to environmental regulations being deployed to hinder construction that few people really believe is environmentally harmful. But these facilities come with genuine environmental costs of a kind Western societies have largely not seen in decades; indeed, the nastiness of the industry is part of why we were comfortable with it being offshored in the first place.
China observed these trends and dynamics in the early 1990s and made rare earth mining and processing a major part of its industrial strategy. This strategy led to these elements being made in such abundance that it may well have had a “but-for” effect on the history of technology. Absent Chinese development of this industry, it seems quite likely to me that advanced capitalist democracies would have settled on a qualitatively different approach to the rare earths industry and the technologies it enables.
In any case, that is how we arrived at this point: a legacy of American dominance in the field, followed by willful ceding of the territory to wildly successful Chinese industrial strategists. Now this unilateral American surrender is being exploited against us, and indeed the entire world. Here is what I think we should do next.
by Dean Ball, Hyperdimensional | Read more:
Image: via
[ed. Think the stable genius and minions will have the intelligence to craft a well thought out plan (especially if someone else down the road gets credit)? Lol. See also: What It's Like to Work at the White House.]
Labels:
Environment,
Government,
Politics,
Science,
Security,
Technology