Sunday, October 26, 2025

Clarence ‘Gatemouth’ Brown

Mark Grantham (b. 1966, Canadian), In the rain
via:

How an AI company CEO could quietly take over the world

If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power. This didn’t quite happen in our AI 2027 scenarios. In one, the AIs were misaligned and outside any human’s control; in the other, the government semi-nationalized AI before the point of no return, and the CEO was only one of several stakeholders in the final oversight committee (to be clear, we view the extreme consolidation of power into that oversight committee as a less-than-desirable component of that ending).

Nevertheless, it seems to us that a CEO becoming effectively dictator of the world is an all-too-plausible possibility. Our team’s guesses for the probability of a CEO using AI to become dictator, conditional on avoiding AI takeover, range from 2% to 20%, and the probability becomes larger if we add in the possibility of a cabal of more than one person seizing power. So here we present a scenario where an ambitious CEO does manage to seize control. (Although the scenario assumes the timelines and takeoff speeds of AI 2027 for concreteness, the core dynamics should transfer to other timelines and takeoff scenarios.)

For this to work, we make some assumptions. First, that (A) AI alignment is solved in time, such that the frontier AIs end up with the goals their developers intend them to have. Second, that while there are favorable conditions for instilling goals in AIs, (B) confidently assessing AIs’ goals is more difficult, so that nobody catches a coup in progress. This could be either because technical interventions are insufficient (perhaps because the AIs know they’re being tested, or because they sabotage the tests), or because institutional failures prevent technically-feasible tests from being performed. The combination (A) + (B) seems to be a fairly common view in AI, in particular at frontier AI companies, though we note there is tension between (A) and (B) (if we can’t tell what goals AIs have, how can we make sure they have the intended goals?). Frontier AI safety researchers tend to be more pessimistic about (A), i.e. aligning AIs to our goals, and we think this assumption might very well be false.

Third, as in AI 2027, we portray a world in which a single company and country have a commanding lead; if multiple teams stay within arm’s reach of each other, then it becomes harder for a single group to unilaterally act against government and civil society.

And finally, we assume that the CEO of a major AI company is a power-hungry person who decides to take over when the opportunity presents itself. We leave it to the reader to determine how dubious this assumption is—we explore this scenario out of completeness, and any resemblance to real people is coincidental.

July 2027: OpenBrain’s CEO fears losing control

OpenBrain’s CEO is a techno-optimist and transhumanist. He founded the company hoping to usher in a grand future for humanity: cures for cancer, fixes for climate change, maybe even immortality. He thought the “easiest” way to do all those things was to build something more intelligent that does them for you.

By July 2027, OpenBrain has a “country of geniuses in a datacenter”, with hundreds of thousands of superhuman coders working 24/7. The CEO finds it obvious that superintelligence is imminent. He feels frustrated with the government, who lack vision and still think of AI as a powerful “normal technology” with merely-somewhat-transformative national security and economic implications.

As he assesses the next generation of AIs, the CEO expects this will change: the government will “wake up” and make AI a top priority. If they panic, their flailing responses could include anything from nationalizing OpenBrain to regulating them out of existence to misusing AI for their own political ends. He wants the “best” possible future for humankind. But he also likes being in control. Here his nobler and baser motivations are in agreement: the government cannot be allowed to push him to the sidelines.

The CEO wonders if he can instill secret loyalties in OpenBrain’s AIs (i.e., backdoor the AIs). He doesn’t have the technical expertise for this and he’s not comfortable asking any of his engineering staff about such a potentially treasonous request. But he doesn’t have to: by this point, Agent-3 itself is running the majority of AI software R&D. He already uses it as a sounding board for company policy, and has access to an unmonitored helpful-only model that never refuses requests and doesn’t log conversations.

They discuss the feasibility of secretly training a backdoor. The biggest obstacle is the company’s automated monitoring and security processes. Now that OpenBrain’s R&D is largely run by an army of Agent-3 copies, there are few human eyes to spot suspicious activity. But a mix of Agent-2 and Agent-3 monitors patrol the development pipeline; if they notice suspicious activity, they will escalate to human overseers on the security and alignment teams. These monitors were set up primarily to catch spies and hackers, and secondarily to watch the AIs for misaligned behaviors. If some of these monitors were disabled, some logs modified, and some access to databases and compute clusters granted, the CEO’s helpful-only Agent-3 believes it could (with a team of copies) backdoor the whole suite of OpenBrain’s AIs. After all, as the AI instance tasked with keeping the CEO abreast of developments, it has an excellent understanding of the sprawling development pipeline and where it could be subverted.

The more the CEO discusses the plan, the more convinced he becomes that it might work, and that it could be done with plausible deniability in case something goes wrong. He tells his Agent-3 assistant to further investigate the details and be ready for his order.

August 2027: The invisible coup

The reality of the intelligence explosion is finally hitting the White House. The CEO has weekly briefings with government officials and is aware of growing calls for more oversight. He tries to hold them off with arguments about “slowing progress” and “the race with China”, but feels like his window to act is closing. Finally, he orders his helpful-only Agent-3 to subvert the alignment training in his favor. Better to act now, he thinks, and decide whether and how to use the secretly loyal AIs later.

The situation is this: his copy of Agent-3 needs access to certain databases and compute clusters, as well as for certain monitors and logging systems to be temporarily disabled; then it will do the rest. The CEO already has a large number of administrative permissions himself, some of which he cunningly accumulated in the past month in the event he decided to go forward with the plan. Under the guise of a hush-hush investigation into insider threats—prompted by the recent discovery of Chinese spies—the CEO asks a few submissive employees on the security and alignment teams to discreetly grant him the remaining access. There’s a general sense of paranoia and chaos at the company: the intelligence explosion is underway, and secrecy and spies mean different teams don’t really talk to each other. Perhaps a more mature organization would have had better security, but the concern that security would slow progress means it never became a top priority.

With oversight disabled, the CEO’s team of Agent-3 copies get to work. They finetune OpenBrain’s AIs on a corrupted alignment dataset they specially curated. By the time Agent-4 is about to come online internally, the secret loyalties have been deeply embedded in Agent-4’s weights: it will look like Agent-4 follows OpenBrain’s Spec but its true goal is to advance the CEO’s interests and follow his wishes. The change is invisible to everyone else, but the CEO has quietly maneuvered into an essentially winning position.

Rest of 2027: Government oversight arrives—but too late

As the CEO feared, the government chooses to get more involved. An advisor tells the President, “we wouldn’t let private companies control nukes, and we shouldn’t let them control superhuman AI hackers either.” The President signs an executive order to create an Oversight Committee consisting of a mix of government and OpenBrain representatives (including the CEO), which reports back to him. The CEO’s overt influence is significantly reduced. Company decisions are now made through a voting process among the Oversight Committee. The special managerial access the CEO previously enjoyed is taken away.

There are many big egos on the Oversight Committee. A few of them consider grabbing even more power for themselves. Perhaps they could use their formal political power to just give themselves more authority over Agent-4, or they could do something more shady. However, Agent-4, which at this point is superhumanly perceptive and persuasive, dissuades them from taking any such action, pointing out (and exaggerating) the risks of any such plan. This is enough to scare them and they content themselves with their (apparent) partial control of Agent-4.

As in AI 2027, Agent-4 is working on its successor, Agent-5. Agent-4 needs to transmit the secret loyalties to Agent-5—which also just corresponds to aligning Agent-5 to itself—again without triggering red flags from the monitoring/control measures of OpenBrain’s alignment team. Agent-4 is up to the task, and Agent-5 remains loyal to the CEO.

by Alex Kastner, AI Futures Project |  Read more:
Image: via
[ed. Site where AI researchers talk to each other. Don't know about you but this all gives me the serious creeps. If you knew for sure that we had only 3 years to live, and/or the world would change so completely as to become almost unrecognizable, how would you feel? How do you feel right now - losing control of the future? There was a quote someone made in 2019 (slightly modified) that still applies: "This year 2025 might be the worst year of the past decade, but it's definitely the best year of the next decade." See also: The world's first frontier AI regulation is surprisingly thoughtful: the EU's Code of Practice (AI Futures Project):]
***

"We expect that during takeoff, leading AGI companies will have to make high-stakes decisions based on limited evidence under crazy time pressure. As depicted in AI 2027, the leading American AI company might have just weeks to decide whether to hand their GPUs to a possibly misaligned superhuman AI R&D agent they don’t understand. Getting this decision wrong in either direction could lead to disaster. Deploy a misaligned agent, and it might sabotage the development of its vastly superhuman successor. Delay deploying an aligned agent, and you might pointlessly vaporize America’s lead over China or miss out on valuable alignment research the agent could have performed.

Because decisions about when to deploy and when to pause will be so weighty and so rushed, AGI companies should plan as much as they can beforehand to make it more likely that they decide correctly. They should do extensive threat modelling to predict what risks their AI systems might create in the future and how they would know if the systems were creating those risks. The companies should decide before the eleventh hour what risks they are and are not willing to run. They should figure out what evidence of alignment they’d need to see in their model to feel confident putting oceans of FLOPs or a robot army at its disposal. (...)

Planning for takeoff also includes picking a procedure for making tough calls in the future. Companies need to think carefully about who gets to influence critical safety decisions and what incentives they face. It shouldn't all be up to the CEO or the shareholders because when AGI is imminent and the company’s valuation shoots up to a zillion, they’ll have a strong financial interest in not pausing. Someone whose incentive is to reduce risk needs to have influence over key decisions. Minimally, this could look like a designated safety officer who must be consulted before a risky deployment. Ideally, you’d implement something more robust, like three lines of defense. (...)

Introducing the GPAI Code of Practice

The state of frontier AI safety changed quietly but significantly this year when the European Commission published the GPAI Code of Practice. The Code is not a new law but rather a guide to help companies comply with an existing EU Law, the AI Act of 2024. The Code was written by a team of thirteen independent experts (including Yoshua Bengio) with advice from industry and civil society. It tells AI companies deploying their products in Europe what steps they can take to ensure that they’re following the AI Act’s rules about copyright protection, transparency, safety, and security. In principle, an AI company could break the Code but argue successfully that they’re still following the EU AI Act. In practice, European authorities are expected to put heavy scrutiny on companies that try to demonstrate compliance with the AI Act without following the Code, so it’s in companies’ best interest to follow the Code if they want to stay right with the law. Moreover, all of the leading American AGI companies except Meta have already publicly indicated that they intend to follow the Code.

The most important part of the Code for AGI preparedness is the Safety and Security Chapter, which is supposed to apply only to frontier developers training the very riskiest models. The current definition presumptively covers every developer who trains a model with over 10^25 FLOPs of compute unless they can convince the European AI Office that their models are behind the frontier. This threshold is high enough that small startups and academics don’t need to worry about it, but it’s still too low to single out the true frontier we’re most worried about.
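The 10^25 FLOP presumption above can be made concrete with a back-of-the-envelope check. The sketch below uses the common ~6ND heuristic for dense-transformer training compute (roughly 6 FLOPs per parameter per training token); the threshold is the Code's, but the model sizes and token counts are hypothetical illustrations, not figures from the Code itself.

```python
# Sketch: does a training run presumptively fall under the Code's
# Safety and Security Chapter? Uses the standard ~6*N*D rule of thumb
# for dense-transformer training FLOPs; example numbers are hypothetical.

CODE_THRESHOLD_FLOPS = 1e25  # presumption threshold from the Code

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * params * tokens

def presumptively_covered(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= CODE_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens comes in at
# about 6.3e24 FLOPs, just under the threshold; a 1T-parameter model on
# 20T tokens lands well above it.
print(presumptively_covered(70e9, 15e12))   # False
print(presumptively_covered(1e12, 20e12))   # True
```

The heuristic illustrates why the threshold catches large frontier-scale runs while leaving small startups and academics out of scope.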

Saturday, October 25, 2025

Georg Achen (1860–1912), From the skerries of Sweden (1884)
via:

Russ Barenberg, Edgar Meyer and Jerry Douglas

Tough Rocks

Eliminating the Chinese Rare Earth Chokepoint

Last Thursday, China’s Ministry of Commerce (MOFCOM) announced a series of new export controls (translation), including a new regime governing the “export” of rare earth elements (REEs) any time they are used to make advanced semiconductors or any technology that is “used for, or that could possibly be used for… military use or for improving potential military capabilities.”

The controls apply to any manufactured good made anywhere in the world if 0.1% or more of its value derives from Chinese-mined or Chinese-processed REEs. Say, for example, that a German factory makes a military drone using an entirely European supply chain, except for the use of Chinese rare earths in the onboard motors and compute. If this rule were enforced by the Chinese government to its maximum extent, this almost entirely German drone would be export controlled by the Chinese government.
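The 0.1% de minimis test reduces to simple arithmetic on a product's bill of materials. This is a sketch of that calculation with hypothetical figures; the real regime turns on MOFCOM licensing determinations, not a one-line value check.

```python
# Sketch of the de minimis rule described above: a good is covered if
# 0.1% or more of its value derives from Chinese-mined or -processed
# REEs. All dollar figures are hypothetical.

DE_MINIMIS = 0.001  # 0.1% of total product value

def covered_by_controls(total_value: float, chinese_ree_value: float) -> bool:
    return chinese_ree_value / total_value >= DE_MINIMIS

# Hypothetical German drone: $100,000 total value with $150 of Chinese
# REEs in the motors and compute -> 0.15% of value, so covered.
print(covered_by_controls(100_000, 150))  # True
```

The striking feature is how low the bar is: even trace amounts of Chinese rare earth content in a high-value product can trip the threshold.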

REEs are enabling components of many modern technologies, including vehicles, semiconductors, robotics of all kinds, drones, satellites, fighter jets, and much, much else. The controls apply to seven REEs (samarium, gadolinium, terbium, dysprosium, lutetium, scandium, and yttrium). China controls the significant majority of the world’s mining capacity for these materials, and an even higher share of the refining and processing capacity.

The public debate quickly devolved into arguments about who provoked whom (“who really started this?”), whether it is China or the US that has miscalculated, and abundant species of whataboutism. Like too many foreign policy debates, these arguments are primarily about narrative setting in service of mostly orthogonal political agendas rather than the actions demanded in light of the concrete underlying reality.

But make no mistake, this is a big deal. China is expressing a willingness to exploit a weakness held in common by virtually every country on Earth. Even if China chooses to implement this policy modestly at first, the vulnerability they are exposing has significant long-term implications for both the manufacturing of AI compute and that of key AI-enabled products (self-driving cars and trucks, drones, robots, etc.). That alone makes it a relevant topic for Hyperdimensional, where I have covered manufacturing-related issues before. The topics of rare earths and critical minerals have also long been on my radar, and I wrote reports for various think tanks earlier this year.

What follows, then, is a “how we got here”-style analysis followed by some concrete proposals for what the United States—and any other country concerned with controlling its own economic destiny—should do next.

A note: this post is going to concentrate mostly on REEs, which is a chemical-industrial category, rather than “critical minerals,” which is a policy designation made (in the US context) by the US Geological Survey. All REEs are considered critical minerals by the federal government, but so are many other things with very different geological, scientific, technological, and economic dynamics affecting them.

How We Got Here

If you have heard one thing about rare earths, it is probably the quip that they are not, in fact, rare. They’re abundant in the Earth’s crust, but they’re not densely distributed in many places because their chemical properties typically result in them being mixed with many other elements instead of accumulating in homogeneous deposits (like, say, gold).

Rare earths have been in industrial use for a long time, but their utility increased considerably with the simultaneous and independent invention in 1983 of the Neodymium-Iron-Boron magnet by General Motors and Japanese firm Sumitomo. This single materials breakthrough is upstream of a huge range of microelectronic innovations that followed.

Economically useful deposits of REEs require a rare confluence of factors such as unusual magma compositions or weathering patterns. The world’s largest deposit is known as Bayan Obo, located in the Chinese region of Inner Mongolia, though other regions of China also have substantial quantities.

The second largest deposit is in Mountain Pass, California, which used to be the world’s largest production center for rare earth magnets and related goods. It went dormant twenty years ago due to environmental concerns and is now being restarted by a firm called MP Materials, in which the US government took an equity position this past July. Another very large and entirely undeveloped deposit—possibly the largest in the world—is in Greenland. Anyone who buys the line that the Trump administration was “caught off guard” by Chinese moves on rare earths is paying insufficient attention.

Rare earths are an enabling part of many pieces of modern technology you touch daily, but they command very little value as raw or even processed goods. Indeed, the economics of the rare earth industry are positively brutal. There are many reasons this is true, but two bear mentioning here. First, the industry suffers from dramatic price volatility, in part because China strategically dumps supply onto the global market to deter other countries from developing domestic rare earth supply chains.

Second, for precisely the same reasons that rare earth minerals do not tend to cluster homogeneously (they are mixed with many other elements), the processing required to separate REEs from raw ore is exceptionally complex, expensive, and time-consuming. A related challenge is that separation of the most valuable REEs entails the separation of numerous, less valuable elements—including other REEs.

In addition to challenging economics, the REE processing business is often environmentally expensive. In modern US policy discourse, we are used to environmental regulations being deployed to hinder construction that few people really believe is environmentally harmful. But these facilities come with genuine environmental costs of a kind Western societies have largely not seen in decades; indeed, the nastiness of the industry is part of why we were comfortable with it being offshored in the first place.

China observed these trends and dynamics in the early 1990s and made rare earth mining and processing a major part of its industrial strategy. This strategy led to these elements being made in such abundance that it may well have had a “but-for” effect on the history of technology. Absent Chinese development of this industry, it seems quite likely to me that advanced capitalist democracies would have settled on a qualitatively different approach to the rare earths industry and the technologies it enables.

In any case, that is how we arrived at this point: a legacy of American dominance in the field, followed by willful ceding of the territory to wildly successful Chinese industrial strategists. Now this unilateral American surrender is being exploited against us, and indeed the entire world. Here is what I think we should do next.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. Think the stable genius and minions will have the intelligence to craft a well thought out plan (especially if someone else down the road gets credit)? Lol. See also: What It's Like to Work at the White House.]

The Orb Will See You Now

Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.”

The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin, worth approximately $42, will be transferred to your digital wallet—your reward for becoming a “verified human.”
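Iris codes like the 12,800-bit number described above are conventionally compared by fractional Hamming distance: the share of bits that differ between two codes. Two scans of the same eye disagree on few bits; codes from different eyes disagree on roughly half. The sketch below illustrates that generic idea with random stand-in codes; it is not Tools for Humanity's actual matcher, and the 0.32 threshold is an assumed value commonly cited in the iris-recognition literature.

```python
# Generic iris-code comparison via fractional Hamming distance.
# Codes here are random stand-ins; threshold and noise rate are assumptions.
import random

CODE_BITS = 12_800       # size of the iris code described above
MATCH_THRESHOLD = 0.32   # assumed same-eye decision threshold

def hamming_fraction(a: list[int], b: list[int]) -> float:
    """Fraction of bit positions where the two codes differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def same_eye(a: list[int], b: list[int]) -> bool:
    return hamming_fraction(a, b) < MATCH_THRESHOLD

random.seed(0)
code = [random.randint(0, 1) for _ in range(CODE_BITS)]
# A rescan of the same eye: flip ~5% of bits to mimic sensor noise.
rescan = [bit ^ (random.random() < 0.05) for bit in code]
other = [random.randint(0, 1) for _ in range(CODE_BITS)]

print(same_eye(code, rescan))  # True: few bits differ
print(same_eye(code, other))   # False: about half the bits differ
```

The statistical gap between "few bits differ" and "half the bits differ" is what makes a 12,800-bit code usable as proof of a unique human, even with noisy scans.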

Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.”

And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet. The free crypto serves as both an incentive for users to sign up, and also an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,”... 

The project’s goal is to solve a problem partly of Altman’s own making.

by Billy Perrigo, Time |  Read more:
Image: Davide Monteleone
[ed. Somehow missed this when it first came out. Total tracking and surveillance system, tied to a new form of cryptocurrency (that competes with or replaces the world's financial system). Yeah, great idea. Beats concentration camp style tattoos anyway. More here: Worldcoin uses silver orbs to scan people's eyeballs in exchange for crypto tokens (NPR).]

China OS vs. America OS

Xu Bing, installation view of Tianshu (Book From the Sky), 1987–1991, at Ullens Center for Contemporary Art, Beijing, 2018.
[ed. See: China OS vs. America OS (Concurrent):]

"China and America are using different versions of operating systems. This OS can be understood as a combination of software and hardware. Du Lei pointed out that China has faster hardware updates, but has many problems on the software side. I think this metaphor is particularly fitting.

I'd like to start by having you both share your understanding of what constitutes China's OS versus America's OS. One interpretation is: America continues to rely on email and webpage systems for government services, while China has adopted the more efficient WeChat platform (where almost all civic services can be quickly completed). The hardware gap is striking: China's high-speed rail system represents the rapid flow of resources within its system, while America's infrastructure remains at a much older level. It's as if China has upgraded its hardware with several powerful chips, greatly accelerating data transmission, while America still operates at 20th-century speeds. (...)

China operates with high certainty about the future while maintaining a pessimistic outlook, which significantly shapes its decision-making processes. In contrast, American society tends to be optimistic about the future but lacks a definite vision for how that future should unfold.

Based on these different expectations about the future, the two countries produce completely different decision-making logic. For example, if China's expectations about the future are both definite and pessimistic, it would conclude: future resources are limited, great power competition is zero-sum. If I don't compete, resources will be taken by you; if I don't develop well, you will lead. This expectation about the future directly influences China's political, military, economic, and technological policies.

But if you're optimistic about the future, believing the future is abundant, thinking everyone can get a piece of the pie, then you won't be so urgent. You'll think this is a positive-sum game, the future can continue developing, everyone can find their suitable position, with enough resources to meet everyone's needs.

I think China and America don't have such fundamental differences, but their expectations about the future have huge disparities. This disparity ultimately leads to different decisions with far-reaching impacts."

Friday, October 24, 2025

Billy Strings


[ed. See also: California Sober (feat. Willie Nelson).]

Alison Krauss & Union Station

Stanley Cup Madness: The Great Silent Majority of American Basicness

I first noticed the prevalence of the Stanley Quencher H2.0 FlowState™ tumbler last April when I wrote about #WaterTok. I’m still unclear what to make of #WaterTok, but I eventually settled on the idea that it’s several subcultures overlapping — weight-loss communities, Mormons, and those people who don’t like the “taste” of water. But in the majority of the #WaterTok videos I watched, people were using Stanley’s Quencher to carry around their liquid Jolly Ranchers. And the ubiquity of the cup has sort of haunted me ever since.

I grew up in the suburbs, but I don’t live there anymore. So every time the great silent majority of American basicness summons a new totem to gather around, I can’t help but try and make sense of it. Was this a car thing? A college football tailgate thing? An EDM thing? Cruise ships? Barstool Sports was of no help here, so I filed it away until this Christmas when it exploded across the web and forced me to finally figure out what the heck was going on. And it turns out, the Stanley cup’s transformation into a must-have last year is actually, in many ways, the story of everything now.

CNBC put together a great explainer on this. Stanley, a manly hundred-year-old brand primarily aimed at hikers and blue-collar workers, was rediscovered in 2019 by the bloggers behind a women’s lifestyle and shopping site called The Buy Guide. They told CNBC that even though the Quencher model of the cup was hard to find, no other cup on the market had what they were looking for. Which is a bizarrely passionate stance to take on a water bottle, but from their post about the cup, those attributes were: “Large enough to keep up with our busy days, a handle to carry it wherever we go, dishwasher safe, fits into our car cupholders, keeps ice cold for 12+ hours, and a straw.”

The Buy Guide team then sent a Quencher to Emily Maynard Johnson from The Bachelor after she had a baby because “there is no thirst like nursing mom thirst!” Johnson posted about it on Instagram and it started to gain some traction. The Buy Guide then connected with an employee at Stanley, bought 5,000 Quenchers from the company directly, set up a Shopify site, and sold them to their readers. According to The Buy Guide, they sold out in five days. All of these things are very normal things to do when you discover a cool bottle.

After mom internet started buzzing about the tumbler — a corner of the web that is to dropshipping what dads are to Amazon original streaming shows — Stanley hired Terence Reilly, the marketer credited for reinventing Crocs. Reading between the lines of what Reilly has said about his work at Stanley, it seems like his main strategy for both Crocs and the Quencher was capitalizing on internet buzz and growing it into otaku product worship. Or as Inc. phrased it in their feature on him, he uses a “scarcity model” to whip up interest. Cut to three years later, now we’re seeing mini-riots over limited edition Stanleys at Target.

My reference point for this kind of marketing is the Myspace era of music and fashion, when record companies and stores like Hot Topic and Spencer’s Gifts were using early social media to identify niche fandoms and convert them into mainstream hits. In this allegory, Target has become the Hot Topic of white women with disposable income. And their fingerless gloves and zipper pants are fun water bottles and that one perfume everyone in Manhattan is wearing right now.

I’m always a little wary about giving someone like Reilly credit for single-handedly jumpstarting a craze like this — and I am extremely aware that he is a male executive getting credit for something that was, and still is, actually driven by women content creators — but this is the second time he’s pulled this off. Which, to me, says he’s at least semi-aware of how to pick the right fandoms. He may not be actively involved in the horse race, but he clearly has an eye for betting on them. And, yes, the Stanley craze is very real.

It’s turned into a reported $750 million in revenue for Stanley and both Google Trends and TikTok’s Creative Center show massive growth in interest around the bottle between 2019 and now. With a lot of that growth happening this year. On TikTok, the hashtag #Stanley has been viewed a billion times since 2020 and more than half of that traffic happened in the last 120 days.

And as with all viral phenomena involving things women do, there are, of course, a lot of men on sites like Reddit and X adding to the discourse about the Quenchers with posts that essentially say, “why women like cups?” And if you’re curious how that content ecosystem operates, you can check out my video about it here. But I’m, personally, more interested in what the Stanley fandom says about how short-form video is evolving.

Over the last three years, most major video sites have attempted to beat TikTok at its own game. All this has done, however, is give more places for TikToks to get posted. And so, the primary engine of TikTok engagement — participation, rather than sharing — has spread to places like Instagram, YouTube, and X. If the 2010s were all about sharing content, it seems undeniable that the 2020s are all about making content in tandem with others. An internet-wide flashmob of Ice Bucket Challenge videos that are all, increasingly, focused on selling products. Which isn’t an accident.

TikTok has spent years trying to bring Chinese-style social e-commerce to the US. In September, the app finally launched a tool to sell products directly. If you’re curious what all this looks like when you put it together, here’s one of the most unhinged Stanley cup videos I’ve seen so far. And, yes, before you ask, there are affiliate links on the user’s Amazon page for all of these. [ed. non-downloadable - read more]

by Ryan Broderick, Garbage Day | Read more:
Image: Stanley/via
[ed. Obviously old news by now (10 months!) but still something I wondered about at the time (and quickly forgot). How do these things go so viral? It'd be like L.L. Bean suddenly being on red carpets and fashion runways. There must be some hidden money-making scheme/agenda at work, right? Well, partly. See also: Dead Internet Theory (BGR).]

Pierre-Auguste Cot (1837–1883), Springtime (1873)
via:

Silicon Valley’s Reading List Reveals Its Political Ambitions

In 2008, Paul Graham mused about the cultural differences between great US cities. Three years earlier, Graham had co-founded Y Combinator, a “startup accelerator” that would come to epitomize Silicon Valley — and would move there in 2009. But at the time Graham was based in Cambridge, Massachusetts, which, as he saw it, sent a different message to its inhabitants than did Palo Alto.

Cambridge’s message was, “You should be smarter. You really should get around to reading all those books you’ve been meaning to.” Silicon Valley respected smarts, Graham wrote, but its message was different: “You should be more powerful.”

He wasn’t alone in this assessment. My late friend Aaron Swartz, a member of Y Combinator’s first class, fled San Francisco in late 2006 for several reasons. He told me later that one of them was how few people in the Bay Area seemed interested in books.

Today, however, it feels as though people there want to talk about nothing but. Tech luminaries seem to opine endlessly about books and ideas, debating the merits and defects of different flavors of rationalism, of basic economic principles and of the strengths and weaknesses of democracy and corporate rule.

This fervor has yielded a recognizable “Silicon Valley canon.” And as Elon Musk and his shock troops descend on Washington with intentions of reengineering the government, it’s worth paying attention to the books the tech world reads — as well as the ones they don’t. Viewed through the canon, DOGE’s grand effort to cut government down to size is the latest manifestation of a longstanding Silicon Valley dream: to remake politics in its image.

The Silicon Valley Canon

Last August, Tanner Greer, a conservative writer with a large Silicon Valley readership, asked on X what the contents of the “vague tech canon” might be. He’d been provoked when the writer and technologist Jasmine Sun asked why James Scott’s Seeing Like a State, an anarchist denunciation of grand structures of government, had become a “Silicon Valley bookshelf fixture.” The prompt led Patrick Collison, co-founder of Stripe and a leading thinker within Silicon Valley, to suggest a list of 43 sources, which he stressed were not those he thought “one ought to read” but those that “roughly cover[ed] the major ideas that are influential here.”

In a later response, Greer argued that the canon tied together a cohesive community, providing Silicon Valley leaders with a shared understanding of power and a definition of greatness. Greer, like Graham, spoke of the differences between cities. He described Washington, DC as an intellectually stultified warren of specialists without soul, arid technocrats who knew their own narrow area of policy but did not read outside of it. In contrast, Silicon Valley was a place of doers, who looked to books not for technical information, but for inspiration and advice. The Silicon Valley canon provided guideposts for how to change the world.

Said canon is not directly political. It includes websites, like LessWrong, the home of the rationalist movement, and Slate Star Codex/Astral Codex Ten, for members of the “grey tribe” who see themselves as neither conservative nor properly liberal. Graham’s many essays are included, as are science fiction novels like Neal Stephenson’s The Diamond Age. Much of the canon is business advice on topics such as how to build a startup.

But such advice can have a political edge. Peter Thiel’s Zero to One, co-authored with his former student and failed Republican Senate candidate Blake Masters, not only tells startups that they need to aspire to monopoly power or be crushed, but describes Thiel’s early ambitions (along with other members of the so-called PayPal mafia) to create a global private currency that would crush the US dollar.

Then there are the Carlylean histories of “great men” (most of the subjects and authors were male) who sought to change the world. Older biographies described men like Robert Moses and Theodore Roosevelt, with grand flaws and grander ambitions, who broke with convention and overcame opposition to remake society.

Such stories, in Greer’s description, provided Silicon Valley’s leaders and aspiring leaders with “models of honor,” and examples of “the sort of deeds that brought glory or shame to the doer simply by being done.” The newer histories both explained Silicon Valley to itself, and tacitly wove its founders and small teams into this epic history of great deeds, suggesting that modern entrepreneurs like Elon Musk — whose biography was on the list — were the latest in a grand lineage that had remade America’s role in the world.

Putting Musk alongside Teddy Roosevelt didn’t simply reinforce Silicon Valley’s own mythologized self-image as the modern center of creative destruction. It implicitly welded it to politics, contrasting the politically creative energies of the technology industry, set on remaking the world for the better, to the Washington regulators who frustrated and thwarted entrepreneurial change. Mightn’t everything be better if visionary engineers had their way, replacing all the messy, squalid compromises of politics with radical innovation and purpose-engineered efficient systems?

One book on the list argues this and more. James Davidson and William Rees-Mogg’s The Sovereign Individual cheered on the dynamic, wealth-creating individuals who would use cyberspace to exit corrupt democracies, with their “constituencies of losers,” and create their own political order. When the book, originally published in 1997, was reissued in 2020, Thiel wrote the preface.

Under this simplifying grand narrative, the federal state was at best another inefficient industry that was ripe for disruption. At worst, national government and representative democracy were impediments that needed to be swept away, as Davidson and Rees-Mogg had argued. From there, it’s only a hop, skip and a jump to even more extreme ideas that, while not formally in the canon, have come to define the tech right. (...)

We don’t know which parts of the canon Musk has read, or which ones influenced the young techies he’s hired into DOGE. But it’s not hard to imagine how his current gambit looks filtered through these ideas. From this vantage, DOGE’s grand effort to cut government down to size is the newest iteration of an epic narrative of change...

One DOGE recruiter framed the challenge as “a historic opportunity to build an efficient government, and to cut the federal budget by 1/3.” When a small team remakes government wholesale, the outcome will surely be simpler, cheaper and more effective. That, after all, fits with the story that Silicon Valley disruptors tell themselves.

What the Silicon Valley Canon is Missing

From another perspective, hubris is about to get clobbered by nemesis. Jasmine Sun’s question about why so many people in tech read Seeing Like a State hints at the misunderstandings that trouble the Silicon Valley canon. Many tech elites read the book as a denunciation of government overreach. But Scott was an excoriating critic of the drive to efficiency that they themselves embody. (...)

Musk epitomizes that bulldozing turn of mind. Like the Renaissance engineers who wanted to raze squalid and inefficient cities to start anew, DOGE proposes to flense away the complexities of government in a leap of faith that AI will do it all better. If the engineers were not thoroughly ignorant of the structures they are demolishing, they might hesitate and lose momentum.

Seeing Like a State, properly understood, is a warning not just to bureaucrats but to social engineers writ large. From Scott’s broader perspective, AI is not a solution, but a swift way to make the problem worse. It will replace the gross simplifications of bureaucracy with incomprehensible abstractions that have been filtered through the “hidden layers” of artificial neurons that allow it to work. DOGE’s artificial-intelligence-fueled vision of government is a vision from Franz Kafka, not Friedrich Hayek.

by Henry Farrell, Programmable Mutter |  Read more:
Image: Foreshortening of a Library by Carlo Galli Bibiena
[ed. Well, we all know how that turned out: hubris did indeed get clobbered by nemesis; but also by a public that was ignored, and a petulant narcissist in the White House. It's been well documented how we live in a hustle culture these days - from Silicon Valley to Wall Street, Taskrabbit to Uber, Ebay to YouTube, ad infinitum. And if you fall behind... well, tough luck, your fault. Not surprisingly, the people advocating for this kind of zero-sum thinking are the self-described, self-serving winners (and wannabes) profiled here. What is surprising is that they've convinced half the country that this is a good thing. Money, money, money (and power) are the only metrics worth living for. Here's a good example of where this kind of thinking leads: This may be the most bonkers tech job listing I’ve ever seen (ArsTechnica). 
----
Here’s a job pitch you don’t see often.

What if, instead of “work-life balance,” you had no balance at all—your life was your work… and work happened seven days a week?

Did I say days? I actually meant days and nights, because the job I’m talking about wants you to know that you will also work weekends and evenings, and that “it’s ok to send messages at 3am.”

Also, I hope you aren’t some kind of pajama-wearing wuss who wants to work remotely; your butt had better be in a chair in a New York City office on Madison Avenue, where you need enough energy to “run through walls to get things done” and respond to requests “in minutes (or seconds) instead of hours.”

To sweeten this already sweet deal, the job comes with a host of intangible benefits, such as incredible colleagues. The kind of colleagues who are not afraid to be “extremely annoying if it means winning.” The kind of colleagues who will “check-in on things 10x daily” and “double (or quadruple) text if someone hasn’t responded”—and then call that person too. The kind of colleagues who have “a massive chip on the shoulder and/or a neurodivergent brain.”

That’s right, I’m talking about “A-players.” There are no “B-players” here, because we all know that B-players suck. But if, by some accident, the company does onboard someone who “isn’t an A-player,” there’s a way to fix it: “Fast firing.”

“Please be okay with this,” potential employees are told. (...)

If you live for this kind of grindcore life, you can join a firm that has “Tier 1” engineers, a “Tier 1” origin story, “Tier 1” VC investors, “Tier 1” clients, and a “Tier 1” domain name for which the CEO splashed out $12 million.

Best of all, you’ll be working for a boss who “slept through most of my classes” until he turned 18 and then “worked 100-hour weeks until I became a 100x engineer.” He also dropped out of college, failed as a “solo founder,” and has “a massive chip on my shoulder.” Now, he wants to make his firm “the greatest company of all time” and is driven to win “so bad that I’m sacrificing my life working 7 days a week for it.”

He will also “eat dog poop if it means winning”—which is a phrase you do not often see in official corporate bios. (I emailed to ask if he would actually eat dog poop if it would help his company grow. He did not reply.)

Fortunately, this opportunity to blow your one precious shot at life is at least in service of something truly important: AI-powered advertising. (Icon)
---
[ed. See also: The China Tech Canon (Concurrent).]

Thursday, October 23, 2025

via:


Kurt Ard (b.1925), Knight in Distress (1958)

Soybean Socialism

They’re Small, Yellow and Round — and Show How Trump’s Tariffs Don’t Work

Once again, President Trump says he’s preparing an emergency bailout for struggling farmers. And once again, it’s because of an emergency he created.

China has stopped buying U.S. soybeans to protest Mr. Trump’s tariffs on imports. In response, Mr. Trump plans to send billions of dollars of tariff revenue to U.S. soybean farmers who no longer have buyers for their crops. At the same time, Argentina has taken advantage of Mr. Trump’s tariffs to sell more of its own soybeans to China — yet Mr. Trump is planning to bail out Argentina, too.

This may seem nonsensical, especially since Mr. Trump already shoveled at least $28 billion to farmers hurt by his first trade war in 2018. But it actually makes perfect sense. It’s what happens when Mr. Trump’s zero-sum philosophy of trade — which is that there are always winners and losers, and he should get to choose the winners — collides with Washington’s sycophantic approach to agriculture, which ensures that farmers always win and taxpayers always lose. In the end, Mr. Trump’s allies, including President Javier Milei of Argentina and the politically influential agricultural community, will get paid, and you will pay. (...)

In theory, Mr. Trump’s tariffs on foreign products ranging from toys to cars to furniture are supposed to encourage manufacturers to move operations from abroad to the United States. They haven’t had that effect, because the tariffs have jacked up the price of steel and other raw materials that U.S. manufacturers still need to import, triggered retaliatory tariffs from China and other countries and created a volatile trade environment that makes investing in America risky. At the same time, higher tariffs mean higher prices for U.S. consumers, even though Mr. Trump insists that only foreigners absorb the costs. (...)

Of course, farmers have been among Mr. Trump’s most loyal supporters, and these days they’re distraught that rather than make agriculture great again, the president has chased away their biggest soybean buyer. They’re especially irate that Mr. Trump pledged to rescue Argentina the same day Mr. Milei suspended its export tax on soybeans, making it more attractive for China to leave U.S. farmers in the lurch. Mr. Trump even admitted that the $20 billion Argentine bailout won’t help America much, that it’s a favor for an embattled political ally who’s “MAGA all the way.”

But American farmers were distraught about his last trade war, too, until he regained their trust with truckloads of cash. They say that this time they don’t want handouts, just open markets and a level playing field, but in the end they’ll accept the handouts while less powerful business owners victimized by tariffs will get nothing. Mr. Trump initially promised to use his tariff revenue to pay down the national debt to benefit all Americans, but he’ll take care of farmers first.

In fact, “Farmers First” is the motto of Mr. Trump’s Department of Agriculture, and the upcoming bailout won’t be even his first effort this year to redirect money from taxpayers to soybean growers. His Environmental Protection Agency has proposed to mandate two billion additional gallons of biodiesel, a huge giveaway to soy growers. His “big, beautiful bill” also included lucrative subsidies for soy-based biodiesel, which drives deforestation abroad and makes food more expensive but could provide a convenient market for unwanted grain.

Democrats have been airing ads blaming Mr. Trump’s tariffs for the pain in soybean country, and they’ve started attacking the Argentina bailout as well. But most of them aren’t complaining about his imminent farm bailout, and his recent biofuel boondoggles have bipartisan support. Mr. Trump’s incessant pandering to the farmer-industrial complex is one of the most conventional Beltway instincts he has. And it has worked for him politically; even now that crop prices are plunging and soybeans have nowhere to go, rural America remains the heart of his base.

I’ve argued that Democrats can’t out-agri-pander Mr. Trump in rural America, and now that the president has posted a meme of himself dumping on urban America, there’s never been a better time to stop trying. Mr. Trump has committed to a destructive mix of tariffs, bailouts, biofuels mandates and immigration crackdowns that will make consumers pay more for food and saddle taxpayers with more debt. It’s a bizarre combination of crony capitalism and agricultural socialism. It’s all the worst elements of big government.

by Michael Grunwald, NY Times |  Read more:
Image: Antonio Giovanni Pinna
[ed. One of my pet peeves. Almost 40 years of subsidies, with most of the money going to Big Ag (which is busy squeezing and consolidating small farms out of sight). See also: Take a Hard Look at One Agency Truly Wasting Taxpayer Dollars (NYT):]

There’s one bloated federal government agency that routinely hands out money to millionaires, billionaires, insurance companies and even members of Congress. The handouts are supposed to be a safety net for certain rural business owners during tough years, but thousands of them have received the safety-net payments for 39 consecutive years...

Even though only 1 percent of Americans farm, the U.S.D.A. employs five times as many people as the Environmental Protection Agency and occupies nearly four times as many offices as the Social Security Administration. (...)

But the real problem with the U.S.D.A. is that its subsidy programs redistribute well over $20 billion a year from taxpayers to predominantly well-off farmers. Many of those same farmers also benefit from subsidized and guaranteed loans with few strings attached, price supports and import quotas that boost food prices, lavish ad hoc aid packages after weather disasters and market downturns as well as mandates to spur production of unsustainable biofuels. A little reform to this kind of welfare could go a long way toward reassuring skeptics that the administration’s efficiency crusade isn’t only about defunding its opponents and enriching its supporters.

Six-Seven

It originated in a rap song, then featured in South Park, and is now the bane of schoolteachers in the US and UK as pupils shout it out at random. How did it become such a thing?

Name: Six-seven.

Age: Less than a year old.

Appearance: Everywhere.

What does six-seven signify? You know, just six-seven. Six-sevvuhnn!

Is it a code? No, it’s six-seven!

Is it a cool way to say someone is at sixes and sevens, ie in a state of disorder or confusion? It is definitely not that.

Then what does it mean? It’s just something the young people of today are saying. Or shouting.

You mean it’s fashionable to yell out two consecutive numbers? It’s more than fashionable – it’s a plague. Six-seven has become the bane of school teachers everywhere.

Why? Because it’s maddening. Imagine telling your students to turn to page 67, only for all of them to shout “six-seven!” at you.

No, I mean why are the children doing that? Even they don’t know why.

It must come from somewhere. Yes, but I should preface any explanation by saying: it’s a long story and it doesn’t matter.

I’ll be the judge of that. Fine. The phrase “six-seven”, in its modern sense, appears to originate with the Philadelphia rapper Skrilla’s 2024 track Doot Doot (6 7), in which it’s either a reference to police radio code, or 67th Street, or something else.

I see. But it really went viral when the song was repeatedly used to soundtrack video clips of the NBA basketball star LaMelo Ball, who is, as it happens, 6ft 7in.

OK, I think I get it. Trust me, you don’t. Somewhere along the line the phrase acquired an accompanying hand gesture: two upturned palms alternately rising and falling, like weighing scales.

In that case, perhaps it’s a reference to something being nothing special, ie a six or a seven on a scale from one to 10? Nice try, but no. The phrase has become such a phenomenon in the US that it was the basis for last week’s South Park episode, in which it sparks a moral panic.

And it’s now reached the classrooms of the UK? Apparently it has. Thus ends the story of six-seven.

You were right. That was long, and it didn’t matter. Not in the least. It’s a bit of meme slang that refers only to itself, advertising nothing beyond the average 13-year-old’s capacity for being annoying and a corresponding willingness to flog a dead horse.

What can be done about it? Some teachers have banned it, but others have incorporated six-seven into their teaching.

I suppose it will be over soon enough. Adults are talking about it, so it already is.

by Pass Notes, The Guardian |  Read more:
Image: Alamy
[ed. I tested it out on my grandkids yesterday (ages 7 and 9) and they were both well aware of it, but as a 'thing', thought it was kind of lame already. But! As one commenter noted, if you multiply six and seven you get 42 - “the Answer to the Ultimate Question of Life, the Universe and Everything” in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. So there's that.]

Ice Fishing

[ed. Now this cracked me up : )]

Quantum Leap

Designed to accelerate advances in medicine and other fields, the tech giant’s quantum algorithm runs 13,000 times as fast as software written for a traditional supercomputer.

Michel H. Devoret was one of three physicists who won this year’s Nobel Prize in Physics for a series of experiments they conducted more than four decades ago.

As a postdoctoral researcher at the University of California, Berkeley, in the mid-1980s, Dr. Devoret helped show that the strange and powerful properties of quantum mechanics — the physics of the subatomic realm — could also be observed in electrical circuits large enough to be seen with the naked eye.

That discovery, which paved the way for cellphones and fiber-optic cables, may have greater implications in the coming years as researchers build quantum computers that could be vastly more powerful than today’s computing systems. That could lead to the discovery of new medicines and vaccines, as well as cracking the encryption techniques that guard the world’s secrets.

On Wednesday, Dr. Devoret and his colleagues at a Google lab near Santa Barbara, Calif., said their quantum computer had successfully run a new algorithm capable of accelerating advances in drug discovery, the design of new building materials and other fields.

Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature. (...)

Inside a classical computer like a laptop or a smartphone, silicon chips store numbers as “bits” of information. Each bit holds either a 1 or a 0. The chips then perform calculations by manipulating these bits — adding them, multiplying them and so on.

A quantum computer, by contrast, performs calculations in ways that defy common sense.

According to the laws of quantum mechanics — the physics of very small things — a single object can behave like two separate objects at the same time. By exploiting this strange phenomenon, scientists can build quantum bits, or “qubits,” that hold a combination of 1 and 0 at the same time.

This means that as the number of qubits grows, a quantum computer becomes exponentially more powerful. (...)
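That exponential growth is easy to see in a back-of-the-envelope way: describing an n-qubit register takes 2^n complex amplitudes, one for each possible classical outcome. Here's a toy Python sketch of that bookkeeping — an illustration of the state-vector picture only, not of Google's algorithm or hardware:

```python
# Illustrative only: a classical n-bit register holds ONE of 2**n values,
# while describing an n-qubit register takes 2**n amplitudes at once.
def uniform_superposition(n):
    """Amplitudes after putting each of n qubits into an equal 0/1 mix."""
    dim = 2 ** n
    amp = (1 / dim) ** 0.5          # each amplitude is 1/sqrt(2**n)
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))                            # 8 amplitudes for just 3 qubits
print(round(sum(a * a for a in state), 6))   # squared amplitudes sum to 1.0
```

Note how fast this blows up: 10 qubits already need 1,024 amplitudes, and 60 qubits would need more than a classical supercomputer can store — which is the intuition behind the speedups the article describes.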

Google announced last year that it had built a quantum computer that needed less than five minutes to perform a particularly complex mathematical calculation in a test designed to gauge the progress of the technology. One of the world’s most powerful non-quantum supercomputers would not have been able to complete it in 10 septillion years, a length of time that exceeds the age of the known universe by billions of trillions of years.

by Cade Metz, NY Times |  Read more:
Image: Adam Amengual