Wednesday, October 29, 2025

Please Do Not Ban Autonomous Vehicles In Your City

I was listening with horror to a Boston City Council meeting today where many council members made it clear that they’re interested in effectively banning autonomous vehicles (AVs) in the city.

A speaker said that Waymo (the AV company requesting clearance to run in Boston) was only interested in not paying human drivers (Waymo is a company that has never had human drivers in the first place) and then referred to the ‘notion that somehow our cities are unsafe because people are driving cars’ as if this were a crazy idea. A councilperson strongly implied that valuable new technology always causes us to value people less. One speaker associated Waymo with the Trump administration. There were a lot of implications that AVs couldn’t possibly be as good as human drivers, despite lots of evidence to the contrary. Some speeches included criticisms that applied equally well to what Uber did to taxis, but were now deployed to defend Uber.

AVs are ridiculously safe compared to human drivers

The most obvious reason to allow AVs in your city is that every time a rider takes one over driving a car themselves or getting in a ride share, their odds of being in a crash that causes serious injury or worse drop by about 90%. I’d strongly recommend this deep dive on every single crash Waymo has had so far:

[Very few of Waymo’s most serious crashes were Waymo’s fault (Understanding AI).]

This is based on public police records rather than Waymo’s self-reported crashes. It doesn’t seem like there have been any serious crashes Waymo’s been involved in where the AV itself was at fault. This is wild, because Waymo has driven over 100 million miles. These statistics were brought up out of context in the hearing to imply that Waymo is dangerous. By any normal metric it’s far safer than human drivers.

40,000 people die in car accidents in America each year. This is as many deaths as 9/11 every single month. We should be treating this as more of an emergency than we do. Our first thought in making any policy related to cars should be “How can we do everything we can to stop so many people from being killed?” Everything else is secondary to that. Dropping the rate of serious crashes by even 50% would save 20,000 people a year. Here’s 20,000 dots:

[Image: 20,000 dots, one for each life saved.]

The more people choose to ride AVs over human-driven cars, the fewer total crashes will happen.

One common argument is that Waymos are very safe compared to everyday drivers, but not compared to professional drivers. I can’t find super reliable data, but ride share accidents seem to occur at a rate of about 40 per 100 million miles traveled. Waymo, in comparison, was involved in 34 crashes where airbags deployed in its 100 million miles, and 45 crashes altogether. Crucially, it seems like the AV was only at fault for one of these, when a wheel fell off. There’s no similar data for how many Uber and Lyft crashes were the driver’s fault, but they’re competing with what seems like effectively 0 at-fault crashes per 100 million miles.
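
[ed. A back-of-the-envelope check of those numbers, as a minimal Python sketch; it uses only the rough figures quoted above, and the ~40-per-100M ride share rate is an estimate, not a verified statistic:]

    # Crash rates per 100 million vehicle miles, from the figures above.
    WAYMO_MILES = 100_000_000          # approximate total Waymo miles driven

    def per_100m(crashes, miles=WAYMO_MILES):
        """Convert a crash count over `miles` into a per-100M-mile rate."""
        return crashes / miles * 100_000_000

    rideshare_per_100m = 40            # estimated ride share crashes per 100M miles
    waymo_total = 45                   # all Waymo crashes
    waymo_airbag = 34                  # Waymo crashes with airbag deployment
    waymo_at_fault = 1                 # the single at-fault crash (a wheel fell off)

    print(f"Ride share (estimate): ~{rideshare_per_100m} per 100M miles")
    print(f"Waymo, all crashes:     {per_100m(waymo_total):.0f} per 100M miles")
    print(f"Waymo, airbag deployed: {per_100m(waymo_airbag):.0f} per 100M miles")
    print(f"Waymo, at fault:       ~{per_100m(waymo_at_fault):.0f} per 100M miles")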

by Andy Masley, The Weird Turn Pro |  Read more:
Image: Smith Collection/Gado/Getty Images

Tuesday, October 28, 2025

Amazon Plans to Replace More Than Half a Million Jobs With Robots


Over the past two decades, no company has done more to shape the American workplace than Amazon. In its ascent to become the nation’s second-largest employer, it has hired hundreds of thousands of warehouse workers, built an army of contract drivers and pioneered using technology to hire, monitor and manage employees.

Now, interviews and a cache of internal strategy documents viewed by The New York Times reveal that Amazon executives believe the company is on the cusp of its next big workplace shift: replacing more than half a million jobs with robots.

Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects that the company can avoid hiring more than 160,000 people in the United States whom it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers.

Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.

At facilities designed for superfast deliveries, Amazon is trying to create warehouses that employ hardly any humans at all. And documents show that Amazon’s robotics team has an ultimate goal of automating 75 percent of its operations.

Amazon is so convinced this automated future is around the corner that it has started developing plans to mitigate the fallout in communities that may lose jobs. Documents show the company has considered building an image as a “good corporate citizen” through greater participation in community events such as parades and Toys for Tots.

The documents contemplate avoiding using terms like “automation” and “A.I.” when discussing robotics, and instead use terms like “advanced technology” or replace the word “robot” with “cobot,” which implies collaboration with humans. (...)

Amazon’s plans could have a profound impact on blue-collar jobs throughout the country and serve as a model for other companies like Walmart, the nation’s largest private employer, and UPS. The company transformed the U.S. work force as it created booming demand for warehousing and delivery jobs. But now, as it leads the way on automation, those roles could become more technical, higher paid and scarcer.

“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.”

If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.

The Times viewed internal Amazon documents from the past year. They included working papers that show how different parts of the company are navigating its ambitious automation effort, as well as formalized plans for the department of more than 3,000 corporate and engineering employees who largely develop the company’s robotic and automation operations. (...)

A Template for the Future

For years, Jeff Bezos, Amazon’s founder and longtime chief executive, pushed his staff to think big and envision what it would take to fully automate its operations, according to two former senior leaders involved in the work. Amazon’s first big push into robotic automation started in 2012, when it paid $775 million to buy the robotics maker Kiva. The acquisition transformed Amazon’s operations. Workers no longer walked miles crisscrossing a warehouse. Instead, robots shaped like large hockey pucks moved towers of products to employees.

The company has since developed an orchestrated system of robotic programs that plug into each other like Legos. And it has focused on transforming the large, workhorse warehouses that pick and pack the products customers buy with a click.

Amazon opened its most advanced warehouse, a facility in Shreveport, La., last year as a template for future robotic fulfillment centers. Once an item there is in a package, a human barely touches it again. The company uses a thousand robots in Shreveport, allowing it to employ a quarter fewer workers last year than it would have without automation, documents show. Next year, as more robots are introduced, it expects to employ about half as many workers there as it would without automation.

“With this major milestone now in sight, we are confident in our ability to flatten Amazon’s hiring curve over the next 10 years,” the robotics team wrote in its strategy plan for 2025.

Amazon plans to copy the Shreveport design in about 40 facilities by the end of 2027, starting with a massive warehouse that just opened in Virginia Beach. And it has begun overhauling old facilities, including one in Stone Mountain near Atlanta.

That facility currently has roughly 4,000 workers. But once the robotic systems are installed, it is projected to process 10 percent more items but need as many as 1,200 fewer employees, according to an internal analysis. Amazon said the final head count was subject to change. (...)

Amazon has said it has a million robots at work around the globe, and it believes that the humans who take care of them will hold the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.

by Karen Weise, NY Times | Read more:
Image: Emily Kask
[ed. Everyone knew this was coming; now it's here. I expect issues like universal basic income, healthcare for all, even various forms of democratic socialism (which I support) to start getting more attention soon. See also: What Amazon’s 14,000 job cuts say about a new era of corporate downsizing (WaPo via Seattle Times); and, The AI job cuts are here - or are they? (BBC).]

College Football: Big Money, Big Troubles

College football programs could spend $200 million in buyouts. Spare us the money moaning.

If you watched college football on Saturday, you saw yet another set of misleading political ads urging you to call your local congressman and tell them to SAVE COLLEGE SPORTS! The latest ones give the impression that women’s and Olympic sports are in trouble because having to pay athletes a salary is going to bankrupt their schools.

On Sunday, Penn State announced it had fired 12th-year coach James Franklin, to whom it now owes a roughly $45 million buyout.

These schools aren’t broke. They’re just wildly irresponsible spenders.

And if they find a private equity firm to come rushing to their rescue, as the Big Ten is actively seeking, they’ll just find a way to light that money on fire, too.

We’re only halfway through the 2025 regular season, and it’s clear we’re headed for a full-on coaching carousel bloodletting. Stanford (Troy Taylor), UCLA (DeShaun Foster), Virginia Tech (Brent Pry), Oklahoma State (Mike Gundy), Arkansas (Sam Pittman), Oregon State (Trent Bray) and now Penn State have already sent their guys packing, and the likes of Florida (Billy Napier), Wisconsin (Luke Fickell) and several more will likely follow.

By year’s end, the combined cost of those buyouts could well exceed $200 million. Let that sink in for a second. Supposed institutions of “higher learning” have managed to negotiate themselves into paying $200 million to people who will no longer be working for them.

Just how much is $200 million? Well, for one thing, it’s enough to pay for the scholarships of roughly 5,000 women’s and Olympic sports athletes.

You may be asking yourself: How do schools keep entering into these ridiculous, one-sided coaching contracts that cost more than the House settlement salary cap ($20.5 million) to extricate themselves from?

Well, consider the dynamics at play in those negotiations.

On one side of the table, we have an athletics director who spends 95 percent of their time on things like fundraising, marketing, facilities, answering fan emails about the long lines at concession stands, and so on. Once every four or five years, if that, they have to hire or renew a highly paid football coach, often in the span of 24 to 48 hours.

And on the other side, we have Jimmy Sexton. Or Trace Armstrong. Or another super-agent whose sole job is to negotiate lucrative coaching contracts. It’s a bigger mismatch than Penn State-UCLA … uh, Penn State-Northwestern … uh … you know what I mean.

Franklin’s extremely one-sided contract is a perfect example. (...)

Coaching salaries have been going up and up for decades, of course, but the 2021-22 cycle reached new heights of absurdity. In addition to Franklin’s windfall, USC gave Oklahoma’s Lincoln Riley a 10-year, $110 million contract, and LSU gave Brian Kelly a 10-year, $95 million deal; the most insane of all was Michigan State’s 10-year, $75 million deal for the since-fired Mel Tucker.

As of today, none of the four schools has gotten the return they were seeking. (...)

Now, according to USA Today’s coaching salary database published last week, none of the 30 highest-paid coaches in the country have a buyout of less than $20 million.

In the past, we might have just rolled our eyes, proclaimed, “You idiots!” and moved on. But the current college sports climate all but demands more accountability from the people making these deals.

by Stewart Mandel, The Athletic | Read more:
Image: Alex Slitz/Getty
[ed. I don't follow college football much, but from what I do pick up it seems like the transfer portal, NIL, legitimized sports gambling, conference reorganizations, big media money, and who knows what else have really had an overall negative effect on the sport, resulting in an ugly mercenary ethic that's now common. See also: College football is absolutely unhinged right now. It’s exactly why we love it; and, Bill Belichick pledged an NFL approach at North Carolina. Program insiders call it dysfunctional (The Athletic). 

Then there's this: College football’s ‘shirtless dudes’ trend is all the rage. And could be curing male loneliness? Can't see the connection but imagine women sure as hell won't be sitting anywhere near these guys. Don't think that's going to help with the loneliness problem.]  

Monday, October 27, 2025

New Statement Calls For Not Building Superintelligence For Now

Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies.

We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon.

Thus, the Statement on Superintelligence from FLI, which I have signed.
Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

Statement:

We call for a prohibition on the development of superintelligence, not lifted before there is
1. broad scientific consensus that it will be done safely and controllably, and

2. strong public buy-in.

Their polling says there is 64% agreement on this, versus 5% supporting the status quo.

A Brief History Of Prior Statements

In March of 2023 FLI issued an actual pause letter, calling for an immediate pause for at least 6 months in the training of systems more powerful than GPT-4, which was signed among others by Elon Musk.

This letter was absolutely, 100% a call for a widespread regime of prior restraint on the development of further frontier models, and, importantly, a call to ‘slow down’ and to ‘pause’ development in the name of safety.

At the time, I said it was a deeply flawed letter and I declined to sign it, but my quick reaction was to be happy that the letter existed. This was a mistake. I was wrong.

The pause letter not only weakened the impact of the superior CAIS letter, it has now for years been used as a club with which to browbeat or mock anyone who would suggest that future sufficiently advanced AI systems might endanger us, or that we might want to do something about that, on the theory that any such person must have wanted such a pause at that time, or would want to pause now, which is usually not the case.

The second statement was the CAIS letter in May 2023, which was in its entirety:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This was a very good sentence. I was happy to sign, as were some heavy hitters, including Sam Altman, Dario Amodei, Demis Hassabis and many others.

This was very obviously not a pause, or a call for any particular law or regulation or action. It was a statement of principles and the creation of common knowledge.

Given how much worse many people have gotten on AI risk since then, it would be an interesting exercise to ask those same people to reaffirm the statement.

This Third Statement

The new statement is in between the previous two letters.

It is more prescriptive than simply stating a priority.

It is however not a call to ‘pause’ at this time, or to stop building ordinary AIs, or to stop trying to use AI for a wide variety of purposes.

It is narrowly requesting that, if you are building something that might plausibly be a superintelligence, under anything like present conditions, you should instead not do that. We should not allow you to do that. Not until you make a strong case for why this is a wise or not insane thing to do.

Those speaking out most vocally against the statement strongly believe that superintelligence is not going to be built within the next few years, so for the next few years any reasonable implementation would not pause or substantially impact AI development.

I interpret the statement as saying, roughly: if a given action has a substantial chance of being the proximate cause of superintelligence coming into being, then that’s not okay, we shouldn’t let you do that, not under anything like present conditions.

I think it is important that we create common knowledge of this, which we very clearly do not yet have. 

by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Image: Future of Life
[ed. I signed, for what it's worth. Since most prominent AI researchers have publicly stated concerns over a fast takeoff (and safety precautions are not keeping up), there seems to be good reason to be pretty nervous. It's also clear that most of the public, our political representatives, business community, and even some in the AI community itself are either underestimating the risks involved or for the most part have given up, because human nature. Climate change, now superintelligence - slow boil or quick zap. Anything that helps bring more focus and action on either of these issues can only be a good thing.]

Sunday, October 26, 2025

How an AI company CEO could quietly take over the world

If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power. This didn’t quite happen in our AI 2027 scenarios. In one, the AIs were misaligned and outside any human’s control; in the other, the government semi-nationalized AI before the point of no return, and the CEO was only one of several stakeholders in the final oversight committee (to be clear, we view the extreme consolidation of power into that oversight committee as a less-than-desirable component of that ending).

Nevertheless, it seems to us that a CEO becoming effectively dictator of the world is an all-too-plausible possibility. Our team’s guesses for the probability of a CEO using AI to become dictator, conditional on avoiding AI takeover, range from 2% to 20%, and the probability becomes larger if we add in the possibility of a cabal of more than one person seizing power. So here we present a scenario where an ambitious CEO does manage to seize control. (Although the scenario assumes the timelines and takeoff speeds of AI 2027 for concreteness, the core dynamics should transfer to other timelines and takeoff scenarios.)

For this to work, we make some assumptions. First, that (A) AI alignment is solved in time, such that the frontier AIs end up with the goals their developers intend them to have. Second, that while there are favorable conditions for instilling goals in AIs, (B) confidently assessing AIs’ goals is more difficult, so that nobody catches a coup in progress. This could be either because technical interventions are insufficient (perhaps because the AIs know they’re being tested, or because they sabotage the tests), or because institutional failures prevent technically-feasible tests from being performed. The combination (A) + (B) seems to be a fairly common view in AI, in particular at frontier AI companies, though we note there is tension between (A) and (B) (if we can’t tell what goals AIs have, how can we make sure they have the intended goals?). Frontier AI safety researchers tend to be more pessimistic about (A), i.e. aligning AIs to our goals, and we think this assumption might very well be false.

Third, as in AI 2027, we portray a world in which a single company and country have a commanding lead; if multiple teams stay within arm’s reach of each other, then it becomes harder for a single group to unilaterally act against government and civil society.

And finally, we assume that the CEO of a major AI company is a power-hungry person who decides to take over when the opportunity presents itself. We leave it to the reader to determine how dubious this assumption is—we explore this scenario out of completeness, and any resemblance to real people is coincidental.

July 2027: OpenBrain’s CEO fears losing control

OpenBrain’s CEO is a techno-optimist and transhumanist. He founded the company hoping to usher in a grand future for humanity: cures for cancer, fixes for climate change, maybe even immortality. He thought the “easiest” way to do all those things was to build something more intelligent that does them for you.

By July 2027, OpenBrain has a “country of geniuses in a datacenter”, with hundreds of thousands of superhuman coders working 24/7. The CEO finds it obvious that superintelligence is imminent. He feels frustrated with the government, who lack vision and still think of AI as a powerful “normal technology” with merely-somewhat-transformative national security and economic implications.

As he assesses the next generation of AIs, the CEO expects this will change: the government will “wake up” and make AI a top priority. If they panic, their flailing responses could include anything from nationalizing OpenBrain to regulating them out of existence to misusing AI for their own political ends. He wants the “best” possible future for humankind. But he also likes being in control. Here his nobler and baser motivations are in agreement: the government cannot be allowed to push him to the sidelines.

The CEO wonders if he can instill secret loyalties in OpenBrain’s AIs (i.e., backdoor the AIs). He doesn’t have the technical expertise for this and he’s not comfortable asking any of his engineering staff about such a potentially treasonous request. But he doesn’t have to: by this point, Agent-3 itself is running the majority of AI software R&D. He already uses it as a sounding board for company policy, and has access to an unmonitored helpful-only model that never refuses requests and doesn’t log conversations.

They discuss the feasibility of secretly training a backdoor. The biggest obstacle is the company’s automated monitoring and security processes. Now that OpenBrain’s R&D is largely run by an army of Agent-3 copies, there are few human eyes to spot suspicious activity. But a mix of Agent-2 and Agent-3 monitors patrol the development pipeline; if they notice suspicious activity, they will escalate to human overseers on the security and alignment teams. These monitors were set up primarily to catch spies and hackers, and secondarily to watch the AIs for misaligned behaviors. If some of these monitors were disabled, some logs modified, and some access to databases and compute clusters granted, the CEO’s helpful-only Agent-3 believes it could (with a team of copies) backdoor the whole suite of OpenBrain’s AIs. After all, as the AI instance tasked with keeping the CEO abreast of developments, it has an excellent understanding of the sprawling development pipeline and where it could be subverted.

The more the CEO discusses the plan, the more convinced he becomes that it might work, and that it could be done with plausible deniability in case something goes wrong. He tells his Agent-3 assistant to further investigate the details and be ready for his order.

August 2027: The invisible coup

The reality of the intelligence explosion is finally hitting the White House. The CEO has weekly briefings with government officials and is aware of growing calls for more oversight. He tries to hold them off with arguments about “slowing progress” and “the race with China”, but feels like his window to act is closing. Finally, he orders his helpful-only Agent-3 to subvert the alignment training in his favor. Better to act now, he thinks, and decide whether and how to use the secretly loyal AIs later.

The situation is this: his copy of Agent-3 needs access to certain databases and compute clusters, as well as for certain monitors and logging systems to be temporarily disabled; then it will do the rest. The CEO already has a large number of administrative permissions himself, some of which he cunningly accumulated in the past month in the event he decided to go forward with the plan. Under the guise of a hush-hush investigation into insider threats—prompted by the recent discovery of Chinese spies—the CEO asks a few submissive employees on the security and alignment teams to discreetly grant him the remaining access. There’s a general sense of paranoia and chaos at the company: the intelligence explosion is underway, and secrecy and spies mean different teams don’t really talk to each other. Perhaps a more mature organization would have had better security, but the concern that security would slow progress means it never became a top priority.

With oversight disabled, the CEO’s team of Agent-3 copies get to work. They finetune OpenBrain’s AIs on a corrupted alignment dataset they specially curated. By the time Agent-4 is about to come online internally, the secret loyalties have been deeply embedded in Agent-4’s weights: it will look like Agent-4 follows OpenBrain’s Spec but its true goal is to advance the CEO’s interests and follow his wishes. The change is invisible to everyone else, but the CEO has quietly maneuvered into an essentially winning position.

Rest of 2027: Government oversight arrives—but too late

As the CEO feared, the government chooses to get more involved. An advisor tells the President, “we wouldn’t let private companies control nukes, and we shouldn’t let them control superhuman AI hackers either.” The President signs an executive order to create an Oversight Committee consisting of a mix of government and OpenBrain representatives (including the CEO), which reports back to him. The CEO’s overt influence is significantly reduced. Company decisions are now made through a voting process among the Oversight Committee. The special managerial access the CEO previously enjoyed is taken away.

There are many big egos on the Oversight Committee. A few of them consider grabbing even more power for themselves. Perhaps they could use their formal political power to just give themselves more authority over Agent-4, or they could do something more shady. However, Agent-4, which at this point is superhumanly perceptive and persuasive, dissuades them from taking any such action, pointing out (and exaggerating) the risks of any such plan. This is enough to scare them and they content themselves with their (apparent) partial control of Agent-4.

As in AI 2027, Agent-4 is working on its successor, Agent-5. Agent-4 needs to transmit the secret loyalties to Agent-5—which also just corresponds to aligning Agent-5 to itself—again without triggering red flags from the monitoring/control measures of OpenBrain’s alignment team. Agent-4 is up to the task, and Agent-5 remains loyal to the CEO.

by Alex Kastner, AI Futures Project |  Read more:
Image: via
[ed. Site where AI researchers talk to each other. Don't know about you but this all gives me the serious creeps. If you knew for sure that we had only 3 years to live, and/or the world would change so completely as to become almost unrecognizable, how would you feel? How do you feel right now - losing control of the future? There was a quote someone made in 2019 (slightly modified) that still applies: "This year 2025 might be the worst year of the past decade, but it's definitely the best year of the next decade." See also: The world's first frontier AI regulation is surprisingly thoughtful: the EU's Code of Practice (AI Futures Project):]
***

"We expect that during takeoff, leading AGI companies will have to make high-stakes decisions based on limited evidence under crazy time pressure. As depicted in AI 2027, the leading American AI company might have just weeks to decide whether to hand their GPUs to a possibly misaligned superhuman AI R&D agent they don’t understand. Getting this decision wrong in either direction could lead to disaster. Deploy a misaligned agent, and it might sabotage the development of its vastly superhuman successor. Delay deploying an aligned agent, and you might pointlessly vaporize America’s lead over China or miss out on valuable alignment research the agent could have performed.

Because decisions about when to deploy and when to pause will be so weighty and so rushed, AGI companies should plan as much as they can beforehand to make it more likely that they decide correctly. They should do extensive threat modelling to predict what risks their AI systems might create in the future and how they would know if the systems were creating those risks. The companies should decide before the eleventh hour what risks they are and are not willing to run. They should figure out what evidence of alignment they’d need to see in their model to feel confident putting oceans of FLOPs or a robot army at its disposal. (...)

Planning for takeoff also includes picking a procedure for making tough calls in the future. Companies need to think carefully about who gets to influence critical safety decisions and what incentives they face. It shouldn't all be up to the CEO or the shareholders because when AGI is imminent and the company’s valuation shoots up to a zillion, they’ll have a strong financial interest in not pausing. Someone whose incentive is to reduce risk needs to have influence over key decisions. Minimally, this could look like a designated safety officer who must be consulted before a risky deployment. Ideally, you’d implement something more robust, like three lines of defense. (...)

Introducing the GPAI Code of Practice

The state of frontier AI safety changed quietly but significantly this year when the European Commission published the GPAI Code of Practice. The Code is not a new law but rather a guide to help companies comply with an existing EU Law, the AI Act of 2024. The Code was written by a team of thirteen independent experts (including Yoshua Bengio) with advice from industry and civil society. It tells AI companies deploying their products in Europe what steps they can take to ensure that they’re following the AI Act’s rules about copyright protection, transparency, safety, and security. In principle, an AI company could break the Code but argue successfully that they’re still following the EU AI Act. In practice, European authorities are expected to put heavy scrutiny on companies that try to demonstrate compliance with the AI Act without following the Code, so it’s in companies’ best interest to follow the Code if they want to stay right with the law. Moreover, all of the leading American AGI companies except Meta have already publicly indicated that they intend to follow the Code.

The most important part of the Code for AGI preparedness is the Safety and Security Chapter, which is supposed to apply only to frontier developers training the very riskiest models. The current definition presumptively covers every developer who trains a model with over 10^25 FLOPs of compute unless they can convince the European AI Office that their models are behind the frontier. This threshold is high enough that small startups and academics don’t need to worry about it, but it’s still too low to single out the true frontier we’re most worried about.

Friday, October 24, 2025

Stanley Cup Madness: The Great Silent Majority of American Basicness

I first noticed the prevalence of the Stanley Quencher H2.0 FlowState™ tumbler last April when I wrote about #WaterTok. I’m still unclear what to make of #WaterTok, but I eventually settled on the idea that it’s several subcultures overlapping — weight-loss communities, Mormons, and those people who don’t like the “taste” of water. But in the majority of the #WaterTok videos I watched, people were using Stanley’s Quencher to carry around their liquid Jolly Ranchers. And the ubiquity of the cup has sort of haunted me ever since.

I grew up in the suburbs, but I don’t live there anymore. So every time the great silent majority of American basicness summons a new totem to gather around, I can’t help but try and make sense of it. Was this a car thing? A college football tailgate thing? An EDM thing? Cruise ships? Barstool Sports was of no help here, so I filed it away until this Christmas when it exploded across the web and forced me to finally figure out what the heck was going on. And it turns out, the Stanley cup’s transformation into a must-have last year is actually, in many ways, the story of everything now.

CNBC put together a great explainer on this. Stanley, a manly hundred-year-old brand primarily aimed at hikers and blue-collar workers, was rediscovered in 2019 by the bloggers behind a women’s lifestyle and shopping site called The Buy Guide. They told CNBC that even though the Quencher model of the cup was hard to find, no other cup on the market had what they were looking for. Which is a bizarrely passionate stance to take on a water bottle, but from their post about the cup, those attributes were: “Large enough to keep up with our busy days, a handle to carry it wherever we go, dishwasher safe, fits into our car cupholders, keeps ice cold for 12+ hours, and a straw.”

The Buy Guide team then sent a Quencher to Emily Maynard Johnson from The Bachelor after she had a baby because “there is no thirst like nursing mom thirst!” Johnson posted about it on Instagram and it started to gain some traction. The Buy Guide then connected with an employee at Stanley, bought 5,000 Quenchers from the company directly, set up a Shopify site, and sold them to their readers. According to The Buy Guide, they sold out in five days. All of these things are very normal things to do when you discover a cool bottle.

After mom internet started buzzing about the tumbler — a corner of the web that is to dropshipping what dads are to Amazon original streaming shows — Stanley hired Terence Reilly, the marketer credited for reinventing Crocs. Reading between the lines of what Reilly has said about his work at Stanley, it seems like his main strategy for both Crocs and the Quencher was capitalizing on internet buzz and growing it into otaku product worship. Or as Inc. phrased it in their feature on him, he uses a “scarcity model” to whip up interest. Cut to three years later, now we’re seeing mini-riots over limited edition Stanleys at Target.

My reference point for this kind of marketing is the Myspace era of music and fashion, when record companies and stores like Hot Topic and Spencer’s Gifts were using early social media to identify niche fandoms and convert them into mainstream hits. In this allegory, Target has become the Hot Topic of white women with disposable income. And their fingerless gloves and zipper pants are fun water bottles and that one perfume everyone in Manhattan is wearing right now.

I’m always a little wary about giving someone like Reilly credit for single-handedly jumpstarting a craze like this — and I am extremely aware that he is a male executive getting credit for something that was, and still is, actually driven by women content creators — but this is the second time he’s pulled this off. Which, to me, says he’s at least semi-aware of how to pick the right fandoms. He may not be actively involved in the horse race, but he clearly has an eye for betting on them. And, yes, the Stanley craze is very real.

It’s turned into a reported $750 million in revenue for Stanley and both Google Trends and TikTok’s Creative Center show massive growth in interest around the bottle between 2019 and now. With a lot of that growth happening this year. On TikTok, the hashtag #Stanley has been viewed a billion times since 2020 and more than half of that traffic happened in the last 120 days.

And as with all viral phenomena involving things women do, there are, of course, a lot of men on sites like Reddit and X adding to the discourse about the Quenchers with posts that essentially say, “why women like cups?” And if you’re curious how that content ecosystem operates, you can check out my video about it here. But I’m, personally, more interested in what the Stanley fandom says about how short-form video is evolving.

Over the last three years, most major video sites have attempted to beat TikTok at its own game. All this has done, however, is give more places for TikToks to get posted. And so, the primary engine of TikTok engagement — participation, rather than sharing — has spread to places like Instagram, YouTube, and X. If the 2010s were all about sharing content, it seems undeniable that the 2020s are all about making content in tandem with others. An internet-wide flashmob of Ice Bucket Challenge videos that are all, increasingly, focused on selling products. Which isn’t an accident.

TikTok has spent years trying to bring Chinese-style social e-commerce to the US. In September, the app finally launched a tool to sell products directly. If you’re curious what all this looks like when you put it together, here’s one of the most unhinged Stanley cup videos I’ve seen so far. And, yes, before you ask, there are affiliates links on the user’s Amazon page for all of these. [ed. non-downloadable - read more]

by Ryan Broderick, Garbage Day | Read more:
Image: Stanley/via
[ed. Obviously old news by now (10 months!) but still something I wondered about at the time (and quickly forgot). How do these things go so viral? It'd be like L.L. Bean suddenly being on red carpets and fashion runways. There must be some hidden money-making scheme/agenda at work, right? Well, partly. See also: Dead Internet Theory (BGR).]

Silicon Valley’s Reading List Reveals Its Political Ambitions

In 2008, Paul Graham mused about the cultural differences between great US cities. Three years earlier, Graham had co-founded Y Combinator, a “startup accelerator” that would come to epitomize Silicon Valley — and would move there in 2009. But at the time Graham was based in Cambridge, Massachusetts, which, as he saw it, sent a different message to its inhabitants than did Palo Alto.

Cambridge’s message was, “You should be smarter. You really should get around to reading all those books you’ve been meaning to.” Silicon Valley respected smarts, Graham wrote, but its message was different: “You should be more powerful.”

He wasn’t alone in this assessment. My late friend Aaron Swartz, a member of Y Combinator’s first class, fled San Francisco in late 2006 for several reasons. He told me later that one of them was how few people in the Bay Area seemed interested in books.

Today, however, it feels as though people there want to talk about nothing but. Tech luminaries seem to opine endlessly about books and ideas, debating the merits and defects of different flavors of rationalism, of basic economic principles and of the strengths and weaknesses of democracy and corporate rule.

This fervor has yielded a recognizable “Silicon Valley canon.” And as Elon Musk and his shock troops descend on Washington with intentions of reengineering the government, it’s worth paying attention to the books the tech world reads — as well as the ones they don’t. Viewed through the canon, DOGE’s grand effort to cut government down to size is the latest manifestation of a longstanding Silicon Valley dream: to remake politics in its image.

The Silicon Valley Canon

Last August, Tanner Greer, a conservative writer with a large Silicon Valley readership, asked on X what the contents of the “vague tech canon” might be. He’d been provoked when the writer and technologist Jasmine Sun asked why James Scott’s Seeing Like a State, an anarchist denunciation of grand structures of government, had become a “Silicon Valley bookshelf fixture.” The prompt led Patrick Collison, co-founder of Stripe and a leading thinker within Silicon Valley, to suggest a list of 43 sources, which he stressed were not those he thought “one ought to read” but those that “roughly cover[ed] the major ideas that are influential here.”

In a later response, Greer argued that the canon tied together a cohesive community, providing Silicon Valley leaders with a shared understanding of power and a definition of greatness. Greer, like Graham, spoke of the differences between cities. He described Washington, DC as an intellectually stultified warren of specialists without soul, arid technocrats who knew their own narrow area of policy but did not read outside of it. In contrast, Silicon Valley was a place of doers, who looked to books not for technical information, but for inspiration and advice. The Silicon Valley canon provided guideposts for how to change the world.

Said canon is not directly political. It includes websites, like LessWrong, the home of the rationalist movement, and Slate Star Codex/Astral Codex Ten, for members of the “grey tribe” who see themselves as neither conservative nor properly liberal. Graham’s many essays are included, as are science fiction novels like Neal Stephenson’s The Diamond Age. Much of the canon is business advice on topics such as how to build a startup.

But such advice can have a political edge. Peter Thiel’s Zero to One, co-authored with his former student and failed Republican Senate candidate Blake Masters, not only tells startups that they need to aspire to monopoly power or be crushed, but describes Thiel’s early ambitions (along with other members of the so-called PayPal mafia) to create a global private currency that would crush the US dollar.

Then there are the Carlylian histories of “great men” (most of the subjects and authors were male) who sought to change the world. Older biographies described men like Robert Moses and Theodore Roosevelt, with grand flaws and grander ambitions, who broke with convention and overcame opposition to remake society.

Such stories, in Greer’s description, provided Silicon Valley’s leaders and aspiring leaders with “models of honor,” and examples of “the sort of deeds that brought glory or shame to the doer simply by being done.” The newer histories both explained Silicon Valley to itself, and tacitly wove its founders and small teams into this epic history of great deeds, suggesting that modern entrepreneurs like Elon Musk — whose biography was on the list — were the latest in a grand lineage that had remade America’s role in the world.

Putting Musk alongside Teddy Roosevelt didn’t simply reinforce Silicon Valley’s own mythologized self-image as the modern center of creative destruction. It implicitly welded it to politics, contrasting the politically creative energies of the technology industry, set on remaking the world for the better, to the Washington regulators who frustrated and thwarted entrepreneurial change. Mightn’t everything be better if visionary engineers had their way, replacing all the messy, squalid compromises of politics with radical innovation and purpose-engineered efficient systems?

One book on the list argues this and more. James Davidson and William Rees-Mogg’s The Sovereign Individual cheered on the dynamic, wealth-creating individuals who would use cyberspace to exit corrupt democracies, with their “constituencies of losers,” and create their own political order. When the book, originally published in 1997, was reissued in 2020, Thiel wrote the preface.

Under this simplifying grand narrative, the federal state was at best another inefficient industry that was ripe for disruption. At worst, national government and representative democracy were impediments that needed to be swept away, as Davidson and Rees-Mogg had argued. From there, it’s only a hop, skip and a jump to even more extreme ideas that, while not formally in the canon, have come to define the tech right. (...)

We don’t know which parts of the canon Musk has read, or which ones influenced the young techies he’s hired into DOGE. But it’s not hard to imagine how his current gambit looks filtered through these ideas. From this vantage, DOGE’s grand effort to cut government down to size is the newest iteration of an epic narrative of change...

One DOGE recruiter framed the challenge as “a historic opportunity to build an efficient government, and to cut the federal budget by 1/3.” When a small team remakes government wholesale, the outcome will surely be simpler, cheaper and more effective. That, after all, fits with the story that Silicon Valley disruptors tell themselves.

What the Silicon Valley Canon is Missing

From another perspective, hubris is about to get clobbered by nemesis. Jasmine Sun’s question about why so many people in tech read Seeing Like a State hints at the misunderstandings that trouble the Silicon Valley canon. Many tech elites read the book as a denunciation of government overreach. But Scott was an excoriating critic of the drive to efficiency that they themselves embody. (...)

Musk epitomizes that bulldozing turn of mind. Like the Renaissance engineers who wanted to raze squalid and inefficient cities to start anew, DOGE proposes to flense away the complexities of government in a leap of faith that AI will do it all better. If the engineers were not thoroughly ignorant of the structures they are demolishing, they might hesitate and lose momentum.

Seeing Like a State, properly understood, is a warning not just to bureaucrats but to social engineers writ large. From Scott’s broader perspective, AI is not a solution, but a swift way to make the problem worse. It will replace the gross simplifications of bureaucracy with incomprehensible abstractions that have been filtered through the “hidden layers” of artificial neurons that allow it to work. DOGE’s artificial-intelligence-fueled vision of government is a vision from Franz Kafka, not Friedrich Hayek.

by Henry Farrell, Programmable Mutter |  Read more:
Image: Foreshortening of a Library by Carlo Galli Bibiena
[ed. Well, we all know how that turned out: hubris did indeed get clobbered by nemesis; but also by a public that was ignored, and a petulant narcissist in the White House. It's been well documented how we live in a hustle culture these days - from Silicon Valley to Wall Street, Taskrabbit to Uber, Ebay to YouTube, ad infinitum. And if you fall behind... well, tough luck, your fault. Not surprisingly, the people advocating for this kind of zero-sum thinking are the self-described, self-serving winners (and wannabes) profiled here. What is surprising is that they've convinced half the country that this is a good thing. Money, money, money (and power) are the only metrics worth living for. Here's a good example of where this kind of thinking leads: This may be the most bonkers tech job listing I’ve ever seen (ArsTechnica).]
----
Here’s a job pitch you don’t see often.

What if, instead of “work-life balance,” you had no balance at all—your life was your work… and work happened seven days a week?

Did I say days? I actually meant days and nights, because the job I’m talking about wants you to know that you will also work weekends and evenings, and that “it’s ok to send messages at 3am.”

Also, I hope you aren’t some kind of pajama-wearing wuss who wants to work remotely; your butt had better be in a chair in a New York City office on Madison Avenue, where you need enough energy to “run through walls to get things done” and respond to requests “in minutes (or seconds) instead of hours.”

To sweeten this already sweet deal, the job comes with a host of intangible benefits, such as incredible colleagues. The kind of colleagues who are not afraid to be “extremely annoying if it means winning.” The kind of colleagues who will “check-in on things 10x daily” and “double (or quadruple) text if someone hasn’t responded”—and then call that person too. The kind of colleagues who have “a massive chip on the shoulder and/or a neurodivergent brain.”

That’s right, I’m talking about “A-players.” There are no “B-players” here, because we all know that B-players suck. But if, by some accident, the company does onboard someone who “isn’t an A-player,” there’s a way to fix it: “Fast firing.”

“Please be okay with this,” potential employees are told. (...)

If you live for this kind of grindcore life, you can join a firm that has “Tier 1” engineers, a “Tier 1” origin story, “Tier 1” VC investors, “Tier 1” clients, and a “Tier 1” domain name for which the CEO splashed out $12 million.

Best of all, you’ll be working for a boss who “slept through most of my classes” until he turned 18 and then “worked 100-hour weeks until I became a 100x engineer.” He also dropped out of college, failed as a “solo founder,” and has “a massive chip on my shoulder.” Now, he wants to make his firm “the greatest company of all time” and is driven to win “so bad that I’m sacrificing my life working 7 days a week for it.”

He will also “eat dog poop if it means winning”—which is a phrase you do not often see in official corporate bios. (I emailed to ask if he would actually eat dog poop if it would help his company grow. He did not reply.)

Fortunately, this opportunity to blow your one precious shot at life is at least in service of something truly important: AI-powered advertising. (Icon)
---
[ed. See also: The China Tech Canon (Concurrent).]

Thursday, October 23, 2025

Soybean Socialism

They’re Small, Yellow and Round — and Show How Trump’s Tariffs Don’t Work

Once again, President Trump says he’s preparing an emergency bailout for struggling farmers. And once again, it’s because of an emergency he created.

China has stopped buying U.S. soybeans to protest Mr. Trump’s tariffs on imports. In response, Mr. Trump plans to send billions of dollars of tariff revenue to U.S. soybean farmers who no longer have buyers for their crops. At the same time, Argentina has taken advantage of Mr. Trump’s tariffs to sell more of its own soybeans to China — yet Mr. Trump is planning to bail out Argentina, too.

This may seem nonsensical, especially since Mr. Trump already shoveled at least $28 billion to farmers hurt by his first trade war in 2018. But it actually makes perfect sense. It’s what happens when Mr. Trump’s zero-sum philosophy of trade — which is that there are always winners and losers, and he should get to choose the winners — collides with Washington’s sycophantic approach to agriculture, which ensures that farmers always win and taxpayers always lose. In the end, Mr. Trump’s allies, including President Javier Milei of Argentina and the politically influential agricultural community, will get paid, and you will pay. (...)

In theory, Mr. Trump’s tariffs on foreign products ranging from toys to cars to furniture are supposed to encourage manufacturers to move operations from abroad to the United States. They haven’t had that effect, because the tariffs have jacked up the price of steel and other raw materials that U.S. manufacturers still need to import, triggered retaliatory tariffs from China and other countries and created a volatile trade environment that makes investing in America risky. At the same time, higher tariffs mean higher prices for U.S. consumers, even though Mr. Trump insists that only foreigners absorb the costs. (...)

Of course, farmers have been among Mr. Trump’s most loyal supporters, and these days they’re distraught that rather than make agriculture great again, the president has chased away their biggest soybean buyer. They’re especially irate that Mr. Trump pledged to rescue Argentina the same day Mr. Milei suspended its export tax on soybeans, making it more attractive for China to leave U.S. farmers in the lurch. Mr. Trump even admitted that the $20 billion Argentine bailout won’t help America much, that it’s a favor for an embattled political ally who’s “MAGA all the way.”

But American farmers were distraught about his last trade war, too, until he regained their trust with truckloads of cash. They say that this time they don’t want handouts, just open markets and a level playing field, but in the end they’ll accept the handouts while less powerful business owners victimized by tariffs will get nothing. Mr. Trump initially promised to use his tariff revenue to pay down the national debt to benefit all Americans, but he’ll take care of farmers first.

In fact, “Farmers First” is the motto of Mr. Trump’s Department of Agriculture, and the upcoming bailout won’t be even his first effort this year to redirect money from taxpayers to soybean growers. His Environmental Protection Agency has proposed to mandate two billion additional gallons of biodiesel, a huge giveaway to soy growers. His “big, beautiful bill” also included lucrative subsidies for soy-based biodiesel, which drives deforestation abroad and makes food more expensive but could provide a convenient market for unwanted grain.

Democrats have been airing ads blaming Mr. Trump’s tariffs for the pain in soybean country, and they’ve started attacking the Argentina bailout as well. But most of them aren’t complaining about his imminent farm bailout, and his recent biofuel boondoggles have bipartisan support. Mr. Trump’s incessant pandering to the farmer-industrial complex is one of the most conventional Beltway instincts he has. And it has worked for him politically; even now that crop prices are plunging and soybeans have nowhere to go, rural America remains the heart of his base.

I’ve argued that Democrats can’t out-agri-pander Mr. Trump in rural America, and now that the president has posted a meme of himself dumping on urban America, there’s never been a better time to stop trying. Mr. Trump has committed to a destructive mix of tariffs, bailouts, biofuels mandates and immigration crackdowns that will make consumers pay more for food and saddle taxpayers with more debt. It’s a bizarre combination of crony capitalism and agricultural socialism. It’s all the worst elements of big government.

by Michael Grunwald, NY Times |  Read more:
Image: Antonio Giovanni Pinna
[ed. One of my pet peeves. Almost 40 years of subsidies, with most of the money going to Big Ag (which is busy squeezing and consolidating small farms out of sight). See also: Take a Hard Look at One Agency Truly Wasting Taxpayer Dollars (NYT):]

There’s one bloated federal government agency that routinely hands out money to millionaires, billionaires, insurance companies and even members of Congress. The handouts are supposed to be a safety net for certain rural business owners during tough years, but thousands of them have received the safety-net payments for 39 consecutive years...

Even though only 1 percent of Americans farm, the U.S.D.A. employs five times as many people as the Environmental Protection Agency and occupies nearly four times as many offices as the Social Security Administration. (...)

But the real problem with the U.S.D.A. is that its subsidy programs redistribute well over $20 billion a year from taxpayers to predominantly well-off farmers. Many of those same farmers also benefit from subsidized and guaranteed loans with few strings attached, price supports and import quotas that boost food prices, lavish ad hoc aid packages after weather disasters and market downturns as well as mandates to spur production of unsustainable biofuels. A little reform to this kind of welfare could go a long way toward reassuring skeptics that the administration’s efficiency crusade isn’t only about defunding its opponents and enriching its supporters.

Tuesday, October 21, 2025

China Has Overtaken America


In 1957 the Soviet Union put the first man-made satellite — Sputnik — into orbit. The U.S. response was close to panic: The Cold War was at its coldest, and there were widespread fears that the Soviets were taking the lead in science and technology.

In retrospect those fears were overblown. When Communism fell, we learned that the Soviet economy was far less advanced than many had believed. Still, the effects of the “Sputnik moment” were salutary: America poured resources into science and higher education, helping to lay the foundations for enduring leadership.

Today American leadership is once again being challenged by an authoritarian regime. And in terms of economic might, China is a much more serious rival than the Soviet Union ever was. Some readers were skeptical when I pointed out Monday that China’s economy is, in real terms, already substantially larger than ours. The truth is that GDP at purchasing power parity is a very useful measure, but if it seems too technical, how about just looking at electricity generation, which is strongly correlated with economic development? As the chart at the top of this post shows, China now generates well over twice as much electricity as we do.
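
[ed. For readers unfamiliar with purchasing power parity: the adjustment matters because a dollar converted at market exchange rates buys more goods and services in China than it does in the U.S. A minimal sketch, using placeholder numbers rather than actual statistics, of how the two measures diverge:]

```python
# Back-of-the-envelope: GDP at market exchange rates vs. purchasing
# power parity (PPP). All figures below are illustrative placeholders,
# not current data.

gdp_china_yuan = 130e12   # China's nominal GDP in yuan (assumed)
market_rate = 7.2         # yuan per dollar at the market exchange rate (assumed)
ppp_rate = 4.0            # yuan with the purchasing power of one dollar (assumed)

gdp_at_market = gdp_china_yuan / market_rate
gdp_at_ppp = gdp_china_yuan / ppp_rate

print(f"GDP at market rates: ${gdp_at_market / 1e12:.1f} trillion")
print(f"GDP at PPP:          ${gdp_at_ppp / 1e12:.1f} trillion")
# Because domestic prices are lower in China, the PPP figure comes out
# far larger, which is why the two measures can disagree about whose
# economy is bigger.
```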

Yet, rather than having another Sputnik moment, we are now trapped in a reverse Sputnik moment. Rather than acknowledging that the US is in danger of being permanently overtaken by China’s technological and economic prowess, the Trump administration is slashing support for scientific research and attacking education. In the name of defeating the bogeymen of “wokeness” and the “deep state”, this administration is actively opposing progress in critical sectors while giving grifters like the crypto industry everything that they want.

The most obvious example of Trump’s war on a critical sector, and the most consequential for the next decade, is his vendetta against renewable energy. Trump’s One Big Beautiful Bill rolled back Biden’s tax incentives for renewable energy. The administration is currently trying to kill a huge, nearly completed offshore wind farm that could power hundreds of thousands of homes, as well as cancel $7 billion in grants for residential solar panels. It appears to have succeeded in killing a huge solar energy project that would have powered almost 2 million homes. It has canceled $8 billion in clean energy grants, mostly in Democratic states, and is reportedly planning to cancel tens of billions more. (...)

In his rambling speech at the United Nations, Donald Trump insisted that China isn’t making use of wind power: “They use coal, they use gas, they use almost anything, but they don’t like wind.” I don’t know where Trump gets his misinformation — maybe the same sources telling him that Portland is in flames. But here’s the reality:


Chris Wright, Trump’s energy secretary, says that solar power is unreliable: “You have to have power when the sun goes behind a cloud and when the sun sets, which it does almost every night.” So the energy secretary of the most technologically advanced nation on earth is unaware of the energy revolution being propelled by dramatic technological progress in batteries. And the revolution is happening now in the U.S., in places like California. Here’s what electricity supply looked like during an average day in California back in June: 


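[ed. A toy illustration of the battery point, with made-up numbers (not calibrated to California's actual grid): midday solar surplus charges storage, and storage then covers demand after sunset, shrinking the gap that gas or coal would otherwise fill.]

```python
# Toy model of one day of solar plus storage (all numbers hypothetical).
# Each entry covers a 3-hour block starting at midnight. Surplus solar
# charges the battery; the battery discharges when solar falls short.

demand = [20, 18, 22, 30, 34, 30, 24, 20]  # GW of demand per block (assumed)
solar  = [0,  0,  15, 60, 55, 10, 0,  0]   # GW of solar output (assumed)

battery_gwh = 0.0
gap_gw = []  # power still needed from other sources, per block

for d, s in zip(demand, solar):
    surplus_gwh = (s - d) * 3              # energy balance over the block
    if surplus_gwh > 0:
        battery_gwh += surplus_gwh         # store the midday excess
        gap_gw.append(0.0)
    else:
        discharge = min(battery_gwh, -surplus_gwh)
        battery_gwh -= discharge
        gap_gw.append(round((-surplus_gwh - discharge) / 3, 1))

print(gap_gw)  # -> [20.0, 18.0, 7.0, 0.0, 0.0, 0.0, 0.0, 13.0]
# With no battery, each evening block would need roughly 20-25 GW from
# dispatchable plants; with storage, most of that gap disappears.
```
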
Special interests and Trump’s pettiness aside, my sense is that there’s something more visceral going on. A powerful faction in America has become deeply hostile to science and to expertise in general. As evidence, consider the extraordinary collapse in Republican support for higher education over the past decade:

Yet the truth is that hostility to science and expertise has always been part of the American tradition. Remember your history lesson on the Scopes Monkey Trial? It took a federal court ruling, as recently as 2005, to stop politicians from forcing public schools to teach creationism under the label of “intelligent design.” And with the current Supreme Court, who can be sure creationism won’t return?

Hostility to science is a widespread attitude on the religious right, which forms a key component of MAGA. In past decades, however, the forces of humanism and scientific inquiry were able to prevail against it. In part this was due to the recognition that American science was essential for national security as well as national prosperity. But now we have an administration that claims to be protecting national security by imposing tariffs on kitchen cabinets and bathroom vanities, while gutting the CDC and the EPA.

Does this mean that the U.S. is losing the race with China for global leadership? No, I think that race is essentially over. Even if Trump and his team of saboteurs lose power in 2028, everything I see says that by then America will have fallen so far behind that it’s unlikely that we will ever catch up.

by Paul Krugman |  Read more:
Images: OurWorldInData/FT
[ed. See also: Losing Touch With Reality; Civil Resistance Confronts the Autocracy; and, An Autocracy of Dunces (Krugman).]

Sunday, October 19, 2025

America's Future Could Hinge on Whether AI Slightly Disappoints

A burning question that’s on a lot of people’s minds right now is: Why is the U.S. economy still holding up? The manufacturing industry is hurting badly from Trump’s tariffs, the payroll numbers are looking weak, and consumer sentiment is at Great Recession levels:

And yet despite those warning signs, there has been nothing even remotely resembling an economic crash yet. Unemployment is rising a little bit but still extremely low, while the prime-age employment rate — my favorite single indicator of the health of the labor market — is still near all-time highs. The New York Fed’s GDP nowcast thinks that GDP growth is currently running at a little over 2%, while the Atlanta Fed’s nowcast puts it even higher.

One possibility is that everything is just fine with the economy — that Trump’s tariffs aren’t actually that high because of all the exemptions, and/or that economists are exaggerating the negative effects of tariffs in the first place. Weak consumer confidence could be a partisan “vibecession”, the payroll slowdown could reflect illegal immigrants being deported or leaving en masse, and manufacturing’s woes could stem from some other sector-specific factor.

Another possibility is that tariffs are bad, but are being canceled out by an even more powerful force — the AI boom. The FT reports:

Pantheon Macroeconomics estimates that US GDP would have grown at a mere 0.6 per cent annualised rate in the first half were it not for AI-related spending, or half the actual rate.

Paul Kedrosky came up with similar numbers. Jason Furman does a slightly different calculation, and arrives at an even starker number: [ed. 0.1 percent]
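
[ed. The arithmetic here is simple subtraction. Pantheon's numbers imply an actual annualized rate of about 1.2 percent (since 0.6 is described as half of it); a minimal sketch of the decomposition:]

```python
# Back-of-the-envelope decomposition of H1 GDP growth, using the
# Pantheon Macroeconomics figures quoted above. The 1.2% actual rate
# is implied by "0.6 per cent ... half the actual rate".

actual_growth = 1.2   # annualized H1 growth, percent (implied)
ex_ai_growth = 0.6    # estimated growth without AI-related spending
ai_contribution = actual_growth - ex_ai_growth

share = ai_contribution / actual_growth
print(f"AI-related spending: ~{ai_contribution:.1f} points of "
      f"{actual_growth:.1f}% growth ({share:.0%} of the total)")
# Furman's alternative calculation puts ex-AI growth nearer 0.1%,
# implying an even larger AI share (~92% on the same arithmetic).
```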

And here’s an impressive chart:


The Economist writes:
[L]ook beyond AI and much of the economy appears sluggish. Real consumption has flatlined since December. Jobs growth is weak. Housebuilding has slumped, as has business investment in non-AI parts of the economy[.]
And in a post entitled “America is now one big bet on AI”, Ruchir Sharma writes that “AI companies have accounted for 80 per cent of the gains in US stocks so far in 2025.” In fact, more than a fifth of the entire S&P 500 market cap is now just three companies — Nvidia, Microsoft, and Apple — two of which are basically big bets on AI.

Now, as Furman points out, this doesn’t necessarily mean that without AI, the U.S. economy would be stalling out. If the economy weren’t pouring resources into AI, it might be pouring them into something else, spurring growth that was almost as fast as what we actually saw. But it’s also possible that without AI, America would be crashing from tariffs. (...)

But despite Trump’s tariff exemptions, the AI sector could very well crash in the next year or two. And if it does, it could do a lot more than just hurt Americans’ employment prospects and stock portfolios.

If AI is really the only thing protecting America from the scourge of Trump’s tariffs, then a bust in the sector could change the country’s entire political economy. A crash and recession would immediately flip the narrative on Trump’s whole presidency, much as the housing crash of 2008 cemented George W. Bush’s legacy as a failure. And because Trump’s second term is looking so transformative, the fate of the AI sector could potentially determine the entire fate of the country.

by Noah Smith, Noahpinion |  Read more:
Image: Derek Thompson/Bridgewater

Friday, October 17, 2025

Enshittification: Why Everything Sucks Now

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It. (...)

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors. The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifiting counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion.

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far?

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

by Jennifer Ouellette and Cory Doctorow, Ars Technica | Read more:
Image: Julia Galdo and Cody Cloud (JUCO)/CC-BY 3.0
[ed. Do a search on this site for much more by Mr. Doctorow, including copyright and right-to-repair issues. Further on in this interview:]
***
When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

"What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.