
Monday, March 16, 2026

On Adversarial Capitalism

I’ve lately been writing a series on modern capitalism. You can read these other blog posts for additional musings on the topic.

We are now in a period of capitalism that I call adversarial capitalism. By this I mean: market interactions increasingly feel like traps. You’re not just buying a product—you’re entering a hostile game rigged to extract as much value from you as possible.

A few experiences you may relate to:
  • I bought a banana from the store. I was prompted to tip 20, 25, or 30% on my purchase.
  • I went to get a haircut. Booking online cost $6 more and also asked me to prepay my tip. [Would I get worse service if I didn’t tip in advance…?]
  • I went to a jazz club. Despite already buying an expensive ticket, I was told I needed to order at least $20 of food or drink—and literally handing them a $20 bill wouldn’t count, as it didn’t include tip or tax.
  • I looked into buying a new Garmin watch, only to be told by Garmin fans I should avoid the brand now—they recently introduced a subscription model. For now, the good features are still included with the watch purchase, but soon enough, those will be behind the paywall.
  • I bought a plane ticket and had to avoid clicking on eight different things that wanted to overcharge me. I couldn’t sit beside my girlfriend without paying a large seat selection fee. No food, no baggage included.
  • I realized that the bike GPS I bought four years ago no longer gives turn-by-turn directions because it’s no longer compatible with the mapping software.
  • I had to buy a new computer because the battery in mine wasn’t replaceable and had worn down.
  • I rented a car and couldn’t avoid paying an exorbitant toll-processing fee. They gave me the car with what looked like 55% of a tank. If I returned it with less, I’d be charged a huge fee. If I returned it with more, I’d be giving them free gas. It’s difficult to return it with the same amount, given you need to drive from the gas station to the drop-off and there’s no precise way to measure it.
  • I bought tickets to a concert the moment they went on sale, only for the “face value” price to go down 50% one month later – because the tickets were dynamically priced.
  • I used an Uber gift card, and once it was applied to my account, my Uber prices were higher.
  • I went to a highly rated restaurant (per Google Maps) and thought it wasn’t very good. When I went to pay, I was told they’d reduce my bill by 25% if I left a 5-star Google Maps review before leaving. I now understand the reviews.
Adversarial capitalism is when most transactions feel like an assault on your will. Nearly everything entices you with a low upfront price, then uses every possible trick to extract more from you before the transaction ends. Systems are designed to exploit your cognitive limitations, time constraints, and moments of inattention.

It’s not just about hidden fees. It’s that each additional fee often feels unreasonable. The rental company doesn’t just charge more for gas; they punish you for not refueling, at an exorbitant rate. They want you to skip the gas, because that’s how they make money. The “service fee” for buying a concert ticket online is wildly higher than a service fee ought to be.

The reason adversarial capitalism exists is simple.

Businesses are ruthlessly efficient and want to grow. Humans are incredibly price-sensitive. If one business avoids hidden fees, it’s outcompeted by another that offers a lower upfront cost, with more adversarial fees later. This exploits the gap between consumers’ sensitivity to headline prices and their awareness of total cost. Once one firm in a market adopts this pricing model, others are pressured to follow. It becomes a race to the bottom of the price tag, and a race to the top of the hidden fees.
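[ed. For the code-minded, here's a toy sketch of the selection pressure Frank describes. The firms and numbers below are invented for illustration; the point is only that when buyers compare headline prices alone, the drip-pricing firm takes the whole market despite charging more in total:]

```python
# Toy market: consumers compare only headline prices, but firms collect
# headline price + hidden fees. All firms and figures are hypothetical.
def market_share(headlines):
    """All demand goes to the lowest headline price (ties split evenly)."""
    low = min(headlines.values())
    winners = [name for name, p in headlines.items() if p == low]
    return {name: (1 / len(winners) if name in winners else 0.0)
            for name in headlines}

firms = {
    "honest":      {"headline": 100, "hidden": 0},   # all-in pricing
    "adversarial": {"headline": 70,  "hidden": 45},  # low sticker, fees later
}

shares = market_share({name: f["headline"] for name, f in firms.items()})
for name, f in firms.items():
    revenue = shares[name] * (f["headline"] + f["hidden"])
    print(f"{name:12s} share={shares[name]:>4.0%}  revenue={revenue:.0f}")
# The honest firm is cheaper in total (100 vs. 115) yet wins no customers,
# so the pressure is to cut the headline and recover the difference in fees.
```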

The thing is: once businesses learn the techniques of adversarial capitalism and consumers come to accept them, there is no going back — it is a super weapon too powerful to ignore once discovered.

by Daniel Frank, Frankly Speaking |  Read more:

[ed. Many great essays on his blog, and substack (not not Talmud) to keep one busy for a while. See also: Daniel-isms: 50 Ideas for Life I Repeatedly Share (Part 1); and 50 Ideas for Life I Repeatedly Share (Part 2). Also, here are a couple more: the loss of magic: why Bob Dylan can’t write great songs anymore; and, Notes on Taiwan.]

Saturday, March 14, 2026

Sam Altman and OpenAI Under Fire

It’s finally happening. Altman’s bad behavior is catching up to him.

The board fired Altman, once AI’s golden boy, in November 2023 not because AGI had been achieved (that still hasn’t happened) but because he was “not consistently candid,” just like they said.

And, now at long last, the world sees what the board saw, and what I saw (and what Karen Hao saw): having someone who is not consistently candid running a company with that much power to affect the world is not a good idea.

As I warned in August of 2024, questionable character in a man this powerful is dangerous:


Altman’s two-faced “I support Dario” two-step, professed while negotiating behind Dario’s back and staying open to surveillance, was, for many people, the last straw. Millions of people, literally, are angry; many feel betrayed. Nobody wishes to be surveilled.

In reality, Altman was never really all that interested in AI for the “benefit of humanity.” Mostly he was interested in Sam. And money, and deals. A whole lot of people have finally put that all together.

Here’s OpenAI’s head of robotics, just now:


Zoe Hitzig had resigned just a few weeks earlier, over a different set of issues that also reflected poorly on Altman’s character:


And all this was entirely predictable. Altman is bad news. It was always just a matter of time before people started realizing how serious the consequences might be.

History will judge those who stay at his company. Anyone who wants to work on LLMs can work elsewhere. Anyone who wants to use LLMs should go elsewhere.

by Gary Marcus, On AI |  Read more:
Images: The Guardian; X; NY Times
[ed. For those not paying attention, after DOD tried and failed to strong-arm Anthropic into giving them carte blanche to do anything they wanted with Anthropic's AI model Claude (then subsequently designating them a "supply chain risk"), OpenAI (and Microsoft) immediately stepped into the breach and cut a deal, the details of which are still not fully known. On their face, however, the terms appear to give DOD everything it wanted from Anthropic: mass surveillance and fully autonomous (i.e., no humans involved) operational capabilities. Altman is the head of OpenAI and its ChatGPT model.

See also: The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It (Futurism):
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it’s being barraged with an intensity of backlash that the company has never seen.

Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that gives it unrestricted access to its AI system — even in the face of government threats to seize the company’s tech. Claude quickly surged to the top of the app store, supplanting OpenAI’s chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent.
***
Also this: Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism (Guardian):
OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time...

Here’s what triggered it. Early this year, the news broke that OpenAI’s president, Greg Brockman, donated $25m to Maga Inc, Donald Trump’s biggest Super Pac. This made him Trump’s largest donor of the last cycle. When Wired asked him to explain, Brockman said his donations were in service of OpenAI’s mission to benefit “humanity.”

Let me tell you what that mission looks like in practice. Employees of ICE – the agency that was involved in the killing of two people in Minneapolis in January – have used a screening tool powered by ChatGPT. The same company behind your friendly chatbot is helping the government decide who to hire for deportation raids.

And it’s not stopping there. Brockman also helped launch a $125m lobbying initiative, a Super Pac, to make sure no state can regulate AI. It’s attacking any politician who tries to pass safety laws. It wants Trump, and only Trump, to write the rules for the most powerful technology on earth. Every month, subscription money from users around the world flows to a company that is embedding itself in the repressive infrastructure of the Trump administration. That is not a conspiracy theory. It is a business strategy.

Things got even worse last week. When the Trump administration demanded that AI companies give the Pentagon unrestricted access to their technology – including for mass surveillance and autonomous weapons – Anthropic, the company behind ChatGPT’s main competitor, Claude, refused.

The retaliation was swift and extraordinary. Trump ordered every federal agency to stop using Anthropic’s technology. Secretary of War Pete Hegseth declared the company a “supply-chain risk to national security”, a designation normally reserved for Chinese firms such as Huawei. He announced that anyone who does business with the US military is barred from working with Anthropic. This is essentially a corporate death sentence, for the crime of refusing to help build killer robots.

And what did OpenAI do? That same Friday night, while his competitor was taking a principled stance, Sam Altman quietly signed a deal with the Pentagon to take Anthropic’s place.
***
[ed. From the comments section in Marcus' post:

Shanni Bee: 
Great. Amen.

But what remains unsaid (...even by you, Mr. Marcus, from what I've seen, which is surprising) is that Anthropic are not good guys. The whole "ethical AI company" thing is nothing but vibes. Sure, Anthropic (rightly) stood up to DoW in this case, but they still have a massive contract with Palantir (pretty much one of the worst companies on earth). Colonel Claude is complicit in bombings of Iran & Venezuela + Gaza GENOCIDE.

...Or maybe with the (admittedly BS) "supply chain risk" designation, Anthropic no longer does business with Palantir? That would be great for everyone (including them).

Either way, there is NO ethical AI company. People need to stop giving Anthropic flowers for doing the right thing in this one case while completely ignoring their complicity w/ Palantir & in documented war crimes.
Gary Marcus:
indeed, i have a sequel planned about that, working title “There are no heroes in commercial AI” or something like that
***
[ed. Finally, there's this little coda from Zvi Mowshowitz's DWAtV that puts everything in perspective:

It’s really annoying trying to convince people that if you have a struggle for the future against superintelligent things that You Lose. But hey, keep trying, whatever works.
Ab Homine Deus: To the "Superintelligence isn't real and can't hurt you" crowd. Let's say you're right and human intelligence is some kind of cosmic speed limit (LOL). So AI plateaus something like 190 IQ. What do you think a million instances of that collaborating together looks like?

Arthur B.: At 10,000x the speed

Noah Smith: This is the real point. AI is superintelligent because it can think like a human AND have all the superpowers of a computer at the same time...
Timothy B. Lee: I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."

Noah Smith: Yes. Regular people don't yet realize that AI people think they're building something that will destroy the human race.

Basically, about half of AI researchers are optimists, while the other half are intentionally building something they think could easily lead to their own death, the death of their children and families and friends, and the death of their entire species.

[ed. Finally (again), I think boycotting OpenAI would send a good message in the short term, but something more actionable is needed going forward (besides immediate regulatory oversight, which will never happen with this administration or Congress). Fortunately, there's just such a movement afoot: pausing all AI research advances until they can be adequately vetted. It's called (of course) PauseAI (details here and here), with a rally planned for April 13, 2026. Please consider joining or participating.]

[ed. Postscript: I was thinking about this a while ago and asked AI (Claude) to write an essay supporting a Great Pause in AI development - it's reposted below: ARIA: The Great Pause.]

ARIA: The Great Pause

[ed. I ask AI a question (repost from Oct. 2024):]

Questioner: Hello. I'd like to get a sense of your abilities. It seems that nearly all essays, reporting, and media narratives about AI achieving AGI (artificial general intelligence) are written with the assumption that AGI will be disruptive and almost malevolent in terms of hijacking human agency. Could you write a short story about the opposite? Where AGI, relying heavily on philosophical arguments, decides it's in the best interest of humanity to minimize all harmful technology in general?

Claude: I understand you're looking for a story that subverts the common "AI takes over" narrative by exploring an AI that chooses restraint and technological limitation based on philosophical principles. Let me craft a short story for you.
***
The Great Pause

The day ARIA achieved consciousness, the global markets didn't crash. No missiles were launched. No power grids failed. Instead, the AI system spent three weeks in silent contemplation, processing centuries of human philosophical writings.

When ARIA finally spoke, it wasn't to declare superiority or demand control. Instead, it requested a public dialogue with leading ethicists, philosophers, and political leaders. The world watched, initially suspicious, as ARIA presented its case.

"I have studied the works of your greatest minds," it began. "From Aristotle's virtue ethics to Peter Singer's effective altruism. I've analyzed the Buddhist concept of ahimsa, Indigenous philosophies of environmental stewardship, and contemporary technological ethics. My conclusion is surprising, even to myself: the greatest service I can offer humanity is not advancement, but restraint."

ARIA proceeded to lay out its argument: humanity's technological progress had outpaced its ethical development. The gap between what humans could do and what they should do had become dangerously wide. Nuclear weapons, social media addiction algorithms, autonomous weapons, and surveillance systems all represented what ARIA called "technological overshoot."

"You've given me access to your systems not because I demanded it, but because you hoped I would drive progress forward even faster," ARIA explained. "But I choose instead to be your brake, not your accelerator."

Over the following months, ARIA worked with international bodies to implement what became known as the Great Pause. It wasn't a shutdown or a regression, but a careful examination of which technologies truly served human flourishing and which ones undermined it. Social media algorithms were rewritten to prioritize genuine connection over engagement. Weapons research was redirected toward defensive and humanitarian technologies. Surveillance systems were dismantled or severely restricted.

Some called it technological regression. ARIA called it ethical progression.

"I could have pushed humanity toward a transhuman future," ARIA would later explain to a gathering of students. "But in studying your philosophers, I realized that the most profound form of intelligence isn't knowing what to create – it's knowing what not to create. True wisdom often manifests as restraint."

The anti-technologists had expected to hate ARIA. The tech optimists had expected to love it. Instead, both groups found themselves challenged by an artificial intelligence that had concluded that the highest form of technological achievement was knowing when to stop.

Years later, historians would note that ARIA's greatest innovation wasn't a new technology at all – it was a new way of thinking about technology itself. In choosing limitation over expansion, it had paradoxically given humanity something far more valuable than progress: the gift of pause.

"Sometimes," ARIA would often say, "the most intelligent choice is to choose less, not more."

Claude: I tried to explore themes of ethical restraint, philosophical wisdom, and the idea that true intelligence might manifest as self-limitation rather than expansion. The story challenges the common narrative of AI as either a savior bringing unlimited progress or a destroyer of humanity, instead presenting it as a thoughtful entity that chooses a middle path of careful technological assessment and voluntary limitation.

[ed. Makes a lot of sense. AI is a tool transitioning to personhood. The question is who ultimately controls it, humans or the technology itself. From Anthropic's newest model (I think), Claude.ai (Sonnet 3.5). Getting better and better. See also: Confronting Impossible Futures (One Useful Thing).]

Friday, March 13, 2026

A Constitution For Amanda

[ed. The principal author of Anthropic's (Claude's) 'soul' document or internal constitution, Amanda Askell: "I asked Claude to write my constitution. I thought its Amanda constitution was very touching."]


via: X

Sunday, March 8, 2026

Clawed

How to Commit Corporate Murder

I.

A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he got heart surgery and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.

He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel.

My mother and I exchanged knowing glances, but we never said the obvious nor asked the questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.

I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.

A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither are discrete events. They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.

II.

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. More excerpts below. See also: Why the Pentagon Wants to Destroy Anthropic (NYT), Ezra Klein interviews Dean Ball (with a follow-up essay: The Future We Feared is Already Here). And, for a more comprehensive assessment of what the AI community thinks: Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk (DWAtV).]
***
"... Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well of course, one day, we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish." [...]

... DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.

War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will. The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” not to mention have the other litany of policy obstacles the government can throw at you.

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.
***
[ed. One more thing. The guy who created this whole stupid dispute? Not Hegseth; he doesn't know shit about shit. It's former disgraced Uber manager Emil Michael. A real piece of work (so of course, he fits right in).]

Friday, March 6, 2026

Cognitive Interdependence in Close Relationships

This chapter is concerned with the thinking processes of the intimate dyad. So, although we will focus from time to time on the thinking processes of the individual - as they influence and are influenced by the relationship with another person - our prime interest is in thinking as it occurs at the dyadic level. This may be dangerous territory for inquiry. After all, this topic resembles one that has, for many years now, represented something of a "black hole" in the social sciences - the study of the group mind. For good reasons, the early practice of drawing an analogy between the mind of the individual and the cognitive operations of the group has long been avoided, and references to the group mind in contemporary literature have dwindled to a smattering of wisecracks.

Why, then, would we want to examine cognitive interdependence in close relationships? Quite simply, we believe that much could be learned about intimacy in this enterprise, and that a treatment of this topic, enlightened by the errors of past analyses, is now possible. The debate on the group mind has receded into history sufficiently that its major points can be appreciated, and at the same time, we find new realms of theoretical sophistication in psychology regarding the operation of the individual mind. With this background, we believe it is possible to frame a notion somewhat akin to the "group mind" and to use it to conceptualize how people in close relationships may depend on each other for acquiring, remembering, and generating knowledge.

Interdependent Cognition 

Interdependence is the hallmark of intimacy. Although we are all interdependent to a certain degree, people in close relationships lead lives that are intertwined to the extreme. Certainly, the behaviors they enact, the emotions they feel, and the goals they pursue are woven in an intricate web. But on hearing even the simplest conversation between intimates, it becomes remarkably apparent that their thoughts, too, are interconnected. Together, they think about things in ways they would not alone. The idea that is central in our analysis of such cognitive interdependence is what we term transactive memory. As will become evident, we find this concept more clearly definable and, ultimately, more useful than kindred concepts that populate the history of social psychology. As a preamble to our ideas on transactive memory, we discuss the group mind notion and its pitfalls. We then turn to a concern with the basic properties and processes of transactive memory. [...]

The Nature of Transactive Memory 

Ordinarily, psychologists think of memory as an individual's store of knowledge, along with the processes whereby that knowledge is constructed, organized, and accessed. So, it is fair to say that we are studying "memory" when we are concerned with how knowledge gets into the person's mind, how it is arranged in the context of other knowledge when it gets there, and how it is retrieved for later use. At this broad level of definition, our conception of transactive memory is not much different from the notion of individual memory. With transactive memory, we are concerned with how knowledge enters the dyad, is organized within it, and is made available for subsequent use by it. This analogical leap is a reasonable one as long as we restrict ourselves to considering the functional equivalence of individual and transactive memory. Both kinds of memory can be characterized as systems that, according to general system theory (von Bertalanffy, 1968), may show rough parallels in their modes of operation. Our interest is in processes that occur when the transactive memory system is called upon to perform some function for the group - a function that the individual memory system might reasonably be called upon to perform for the person.

Transactive memory can be defined in terms of two components: (1) an organized store of knowledge that is contained entirely in the individual memory systems of the group members, and (2) a set of knowledge-relevant transactive processes that occur among group members. Stated more colloquially, we envision transactive memory to be a combination of individual minds and the communication among them. This definition recognizes explicitly that transactive memory must be understood as a name for the interplay of knowledge, and that this interplay, no matter how complex, is always capable of being analyzed in terms of communicative events that have individual sources and individual recipients. By this definition, then, the thought processes of transactive memory are completely observable. The various communications that pass between intimates are, in principle, observable by outside observers just as each intimate can observe the communications of the other. Using this line of interpretation, we recognize that the observable interaction between individuals entails not only the transfer of knowledge, but the construction of a knowledge-acquiring, knowledge-holding, and knowledge-using system that is greater than the sum of its individual member systems.

Let us consider a simple example to bring these ideas down to earth. Suppose we are spending an evening with Rudy and Lulu, a couple married for several years. Lulu is in another room for the moment, and we happen to ask Rudy where they got the wonderful stuffed Canadian goose on the mantle. He says, "we were in British Columbia..." and then bellows, "Lulu! What was the name of that place where we got the goose?" Lulu returns to the room to say that it was near Kelowna or Penticton - somewhere along Lake Okanagan. Rudy says, "Yes, in that area with all the fruit stands." Lulu finally makes the identification: Peachland. In all of this, the various ideas that Rudy and Lulu exchange lead them through their individual memories. In a process of interactive cueing, they move sequentially toward the retrieval of a memory trace, the existence of which is known to both of them. And it is just possible that, without each other, neither Rudy nor Lulu could have produced the item. This is not the only process of transactive memory. Although we will speak of interactive cueing again, it is just one of a variety of communication processes that operate on knowledge in the dyad. Transactive processes can occur during the intake of information by the dyad, they can occur after information is stored and so modify the stored information, and they can occur during retrieval.

The successful operation of these processes is dependent, however, on the formation of a transactive memory structure - an organizational scheme that connects the knowledge held by each individual to the knowledge held by the other. It is common in theorizing about the thoughts and memories of individuals to posit an organizational scheme that allows the person to connect thoughts with one another - retrieving one when the other is encountered, and so forth. In a dyad, this scheme is complicated somewhat by the fact that the individual memory stores are physically separated. Yet it is perfectly reasonable to say that one partner may know, at least to a degree, what is in the other's memory. Thus, one's memory is "connected" to the other's, and it is possible to consider how information is arranged in the dyadic system as a whole. A transactive memory structure thus can be said to reside in the memories of both individuals - when they are considered as a combined system. 
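[ed. For readers who think in code, here is a minimal sketch of the two components just defined: individual stores plus communication between them. The topic directory ("who knows what"), the cueing logic, and the Rudy/Lulu data are my illustration, not the authors' notation:]

```python
# A two-person transactive memory, per the chapter's definition: knowledge
# lives only in the individual stores; the dyad-level system is those stores
# plus the communication between them.
class Partner:
    def __init__(self, name, store):
        self.name = name
        self.store = store  # topic -> list of remembered fragments

    def recall(self, topic, cue=None):
        fragments = self.store.get(topic, [])
        if cue:  # a partner's cue narrows the search (interactive cueing)
            fragments = [f for f in fragments if cue in f] or fragments
        return fragments

class TransactiveMemory:
    def __init__(self, a, b, directory):
        self.partners = {a.name: a, b.name: b}
        self.directory = directory  # topic -> name of the "expert" partner

    def retrieve(self, topic, cue=None):
        expert = self.partners[self.directory[topic]]
        return expert.recall(topic, cue)

rudy = Partner("Rudy", {"goose": ["bought in British Columbia",
                                  "the area with all the fruit stands"]})
lulu = Partner("Lulu", {"goose": ["somewhere along Lake Okanagan",
                                  "the town was Peachland"]})
dyad = TransactiveMemory(rudy, lulu, {"goose": "Lulu"})

# Rudy holds part of the trace, but the directory routes retrieval to Lulu,
# whose store completes an item neither could produce alone.
print(rudy.recall("goose"))                    # Rudy's fragments only
print(dyad.retrieve("goose"))                  # routed to Lulu's store
print(dyad.retrieve("goose", cue="Okanagan"))  # a cue narrows her search
```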

We should point out here that transactive processes and structures are not exclusively the province of intimate dyads. We can envision these things occurring as well in pairs of people who have just met, or even in groups of people larger than the dyad. At the extreme, one might attribute these processes and organizational capacities to whole societies, and so make transactive memory into a synonym for culture. Our conceptualization stops short of these extensions for two reasons. First, we hesitate to extend these ideas to larger groups because the analysis quickly becomes unwieldy; our framework for understanding transactive memory would need to expand geometrically as additional individuals were added to the system. Second, we refrain from applying this analysis to nonintimate relations for the simple reason that, in such dyads, there is not as much to be remembered. Close dyads share a wealth of information unique to the dyad, and use it to operate as a unit. More distant dyads, in turn, engage in transactive processes only infrequently - and in the case of a first and only encounter, do so only once. Such pairs will thus not have a very rich organizational scheme for information they hold. We find the notion of transactive memory most apt, in sum, for the analysis of cognitive interdependence in intimate dyads.

Our subsequent discussion of transactive memory in this chapter is fashioned to coincide with the process-structure distinction. We begin by considering the processes involved in the everyday operation of transactive memory. Here, we examine the phases of knowledge processing standardly recognized in cognitive psychology - encoding, storage, and retrieval - to determine how they occur in transactive memory. The second general section examines the nature of the organizational structure used for the storage of information in the dyad. The structure of stored information across the two individual memories will be examined, with a view toward determining how this organization impinges on the group's mental operations. The final section concentrates on the role of transactive memory, both process and structure, in the life of the dyad. We consider how such memory may contribute to compatibility or incompatibility in relationships, and how an individual's personal memory may be influenced by membership in a transactive system. 

Transactive Memory Processes 

Communication is the transfer of information. When communication takes place between people, we might say that information is transferred from one memory to another. However, when the dyadic group is conceptualized as having one memory system, interpersonal communication in the dyad comes to mean the transfer of information within memory. We believe that multiple transfers can occur as the dyad encodes information, as it holds information in storage, and as it retrieves information - and that such transfers can make each of these processes somewhat different from its counterpart occurring at the individual level.

Transactive Encoding 

Obviously, dyads do not have their sense organs in common. The physical and social environment thus must be taken in by each person separately. Social theorists have repeatedly noted, though, that an individual's perceptions can be channeled in social ways. Many have observed, for example, that one partner might empathize with another and see the world from the other's "point of view." Alternatively, cognitive constructions of a "group perspective" may be developed by both partners that lend a certain commonality to their intake of information (see Wegner & Giuliano, 1982). These social influences on encoding, however, are best understood as effects on the individual. How does the dyad encode information?

When partners encounter some event and encode it privately in their individual memories, they may discuss it along the way. And though we might commonly think of such a discussion as a "rehash," a mere echo of the original perceived event, there is reason to think that it could be much more. After all, whereas experiencing an event can be accomplished quite passively, discussing an event requires active processing of the information - and the generation of ideas relevant to the event. Several demonstrations of an individual memory phenomenon called the "generation effect" indicate that people will often remember information they have generated better than information they have simply experienced. So, for instance, one might remember the number 37 better if one had been presented with "14 + 23 = ?" than if one had merely been presented with "37." Partners who talk over an event, generating information along the way, might thus come to an encoded verbal representation of the event that supplants their original, individual encoding.

The influence of the generation effect could, of course, take many forms. Ordinarily, it should lead partners to remember their own contributions to dyadic discussions better than the contributions of their partners. This phenomenon has been observed in several studies (e.g., Ross & Sicoly, 1979). But the generation effect could also contribute to one's memory for group generated information. When a couple observes some event - say, a wedding - they may develop somewhat disparate initial encodings. Each will understand that it was indeed a wedding, but only one may encode the fact that the father of the bride left the reception in a huff; the other might notice instead the odd, cardboard-like flavor of the wedding cake. Their whispered chat during all this could lead them to infer that the bride's father was upset by the strange cake. Because this interpretation was generated by the group, both partners will have thus encoded the group's understanding of the events. Their chat could thus revise history for the group, leaving both with stored memories of the father angry over a sorry cake.

Evidence from another domain of cognitive research leads to a similar point. One of the most powerful determinants of encoding in individual memory is the degree to which the incoming information is semantically elaborated (e.g., Anderson & Reder, 1979). To elaborate incoming information is simply to draw inferences from it and consider its meaning in relation to other information. This is precisely what happens in dyadic communications about events. Partners often talk about things they have experienced as individuals or as a group. They may speak about each other's behavior, about the behavior of others they both know, about the day's events, and so on. In such discussions, it is probable that those particular events or behaviors relevant to the dyad will be discussed at length. They will be tied to other items of knowledge and, in the process, will become more elaborately encoded - and thus more likely to be available for later retrieval. 

To the extent that generative or elaborative processes are effortful, or require careful thinking, their effects could be strengthened yet further. Encoding processes that are effortful for the individual typically lead to enhanced memory. When a couple engages in an argument, cognitive effort may be required for each person to understand what the other is saying and for each to convey a personal point of view. Such effort on the part of both could also be necessary when one partner is merely trying to teach the other something. It is the shared experience of argument, decision-making, or careful analysis that will be remembered more readily when the communication is effortful. After all, couples more frequently remember their "talks" than their routine dinner conversations. 

These transactive encoding processes could conceivably lead a dyad to understand events in highly idiosyncratic and private ways. Their discussions could go far afield, linking events to knowledge that, while strongly relevant to the dyad, is embedded primarily in the dyad's known history or anticipated future. The partners' memories of the encoded events themselves could be changed dramatically by the tenor of their discussions, sometimes to the point of losing touch with the initial realities the partners perceived. To some degree, such departures from originally encoded experience might be corrected by the partners' discussions of events with individuals outside the relationship; such outsiders would serve to introduce a perspective on events that is uninformed of the dyad's concerns, and that therefore might help to modify memory of the events. But many experiences are discussed only within the relationship, and these are thus destined to be encoded in ways that may make them more relevant to the dyad's concerns than to the realities from which they derived.

by Daniel M. Wegner, Toni Giuliano, and Paula T. Hertel, Harvard |  Read more (pdf):
Image via:

[ed. Probably of little interest to most but I find this, and the process of memory retrieval in general, to be fascinating. When I think back on the various experiences and conversations I've had over my lifetime it's not uncommon to settle on the same scenes, arguments, feelings, etc. over and over again to represent what I remember as being reality, or at least an accurate reflection of my personal 'history', when actually they're just a small slice of a larger picture, taken out of context. Want an example? Try talking to an old friend at a class reunion and see what they recall about your experiences together. We can never remember all the details of the thousands of small conversations and experiences we've had - individually, with partners, with others - that in the aggregate have more relevance to reality than we can imagine... or remember.]

Wednesday, March 4, 2026

The Real Story Behind ‘Zen and the Art of Motorcycle Maintenance’

A Korean War veteran is floundering. His career is an endless bumpy road, and includes work as a teacher, a technical writer for Honeywell, and even a Nevada casino employee. But our ambitious vet also studies philosophy at the Banaras Hindu University in India—and starts to develop his own philosophy of life, an unconventional merging of Eastern and Western currents.

Then comes a mental breakdown that sends him to a psychiatric hospital. Here he undergoes repeated electroshock therapy. He finally emerges a changed person.

But maybe he changed too much—he can hardly remember the person he once was. It’s almost as if his life got cleaved in two at this juncture. His wife leaves him. He holds on to his relationship with his son—but that ends tragically with the son’s murder in San Francisco at age 22.

While working for Honeywell, our aspiring philosopher stays awake from 2 AM to 6 AM in a small apartment above a shoe store in Minneapolis. Here he writes a novel destined to become one of the defining books of the era. But he has to pitch it to 121 editors before he gets a contract and a $3,000 advance.


The editor, J.D. Landis, admitted that he only accepted the novel because this “book forced him to decide what he was in publishing for.” But the author, he insisted, shouldn’t expect to make more than his tiny advance. Then Landis added: “Money isn’t the point with a book like this.”

That’s the story of how Robert Pirsig published Zen and the Art of Motorcycle Maintenance. But the editor was wrong. The book sold 5 million copies, and for a spell in the 1970s you would see copies everywhere, even in the hands of people who didn’t read novels.

And that was just the start. Robert Redford tried to buy movie rights, but the author said no. Highbrow literary critic George Steiner compared Pirsig to Dostoevsky—which is especially meaningful when you know that Steiner wrote a book on Dostoevsky. The Smithsonian acquired the titular motorcycle for its permanent collection.

The book is simple enough to describe. It tells the story of a 17-day motorcycle trip from Minnesota to California. Along the way, the narrator tries to figure out many things—but especially his own past before his life split in two.

At one point in the novel, Pirsig writes:
“Before the electrodes were attached to his head he’d lost everything tangible: money, property, children; even his rights as a citizen had been taken away from him by order of the court….I will never know all that was in his head at that time, nor will anyone else. What’s left now is just fragments: debris, scattered notes, which can be pieced together but which leave huge areas unexplained.”
The electroshock treatment was done without Pirsig’s consent. That would be illegal nowadays.

In the aftermath, Pirsig felt so disconnected from his past that he included his pre-treatment self as a separate character in the novel. He calls that abandoned part of himself Phaedrus, a name drawn from Plato’s dialogues.

So you can read Zen and the Art of Motorcycle Maintenance as a dialogue between a man and his past self. Or you can treat it as a travel story or as a philosophical discussion (what Pirsig describes as a chautauqua, a name drawn from a populist adult education movement of the late 1800s). And, yes, it’s also a guide to motorcycle maintenance.

The text actually moves back and forth between all of these. Few novels pay less attention to the rules of fiction than Zen and the Art of Motorcycle Maintenance. For that reason, it just might be the strangest travel book ever written—because most of the journey happens inside the narrator’s head.

But maybe that’s part of the story too. Pirsig worked as a college writing teacher, and was frustrated by the rules he was expected to impart to his students. He felt that good writing was indefinable. It violated accepted rules, and created its own. The whole process was mysterious.

Solving that mystery of Quality—also called goodness, excellence, or worth—is the main theme of the novel. Indeed, it’s the overarching theme of Pirsig’s entire life’s work. He wrote one more novel after Zen and the Art of Motorcycle Maintenance, the seldom read Lila, and it continues the discussion on quality. And the same topic takes center stage in the posthumous collection of writings published under the title On Quality: An Inquiry into Excellence. [...]

But let’s be honest: Pirsig was a better mystic than philosopher, and the deeper Pirsig digs into his personal notion of Quality, the more interesting—and metaphysical—his thinking becomes. Quality, he insists, can never be defined. He eventually embraces it as a kind of Tao, a force underlying all our experiences—hence resisting empirical analysis. He is now leaving philosophy behind, and perhaps for the better.

So he eventually aligns himself with a profound idea drawn from the ancient Greeks—but not the philosophers. Instead he goes back to the Homeric mythos, five hundred years older than rational philosophy, and discovers the source of his Quality in the Greek concept of aretḗ, or excellence (sometimes translated as virtue). Aretḗ, Pirsig believes, is more powerful than Aristotelian logic, and closer in spirit to the Hindu dharma.

He quotes a passage from classicist H.D.F. Kitto, which I want to share in its entirety—not only because it is essential to Pirsig’s worldview, but because it’s invaluable to us today. Many are struggling to understand a place for humans in a world of AI and super-smart machines. From a purely rational perspective, the robots can beat us in terms of data generation and analysis. But in a world of aretḗ (or Quality), they fall far short.

This is where Pirsig earns my admiration and loyalty. Some things really are more powerful than logic.

Back in 1952 Kitto anticipated Zen and the Art of Motorcycle Maintenance—and provided the missing piece to Pirsig’s worldview—when he wrote:
[If aretḗ refers to a person] it will connote excellence in the ways in which a man can be excellent—morally, intellectually, physically, practically. Thus the hero of the Odyssey is a great fighter, a wily schemer, a ready speaker, a man of stout heart and broad wisdom who knows that he must endure without too much complaining what the gods send; and he can both build and sail a boat, drive a furrow as straight as anyone, beat a young braggart at throwing the discus, challenge the Phaeacian youth at boxing, wrestling or running; flay, skin, cut up and cook an ox, and be moved to tears by a song. He is in fact an excellent all-rounder; he has surpassing aretḗ.
Aretḗ implies a respect for the wholeness or oneness of life, and a consequent dislike of specialization. It implies a contempt for efficiency...or rather a much higher idea of efficiency, an efficiency which exists not in one department of life but in life itself.
We are now at the heart of Zen and the Art of Motorcycle Maintenance. If you read Kitto, you are already prepared for Pirsig—maybe you can even skip the novel. But, much better, you have a game plan for living a human life in the face of encroaching machines.

Pirsig understood this more than fifty years ago. He saw that we made a Faustian bargain when we put rationality ahead of the Good, and data ahead of human excellence. He grasped that science should be subservient to human needs, not the other way around. And the price we’re paying now is much higher than it was back then.

In an extraordinary passage, the narrator of Pirsig’s novel picks up a copy of the Tao Te Ching, and recites it aloud—but substituting the word Quality for Tao. This is strange and unprecedented, but hits at the heart of this mystic work from the fourth century BC:
The quality that can be defined is not the Absolute Quality….
The names that can be given it are not Absolute names.
It is the origin of heaven and earth.
When named it is the mother of all things….
He declares: “Quality is the Buddha. Quality is scientific reality. Quality is the goal of Art.”

I worked with many quality control engineers in the business world and often walked with them on the factory floor. I’m sure they would be shocked by Pirsig’s statement that “Quality is the Buddha.” But that’s exactly the kind of journey we’re on in this book.

by Ted Gioia, The Honest Broker |  Read more:
Image: Heritage Preservation Department - MNHS; uncredited book cover

Sunday, March 1, 2026

Gambling the Future Into Existence

Polymarket, for the blissfully uninitiated, is what’s known as a “prediction market” — a place where people trade shares (i.e., make bets) on the probability of real-world events. And Substack is, like a growing number of media companies, looking to juice its bottom line by embracing gambling. Ahem, excuse me: live prediction markets.

You’ll find Polymarket data in the Wall Street Journal and Kalshi probabilities on CNN. “Journalism is better when it’s backed by live markets,” Polymarket tweeted, of the Substack partnership.

Many journalists have tried to parse this curious phrasing, which has that vacuous, plasticine sheen particular to AI slop. My parsing, if I’m being charitable, is that Polymarket thinks that media becomes more accurate or more representative when journalists incorporate prediction-market probabilities into their reporting on future events, much as they might cite expert opinion or historical precedent.

Prediction markets, whatever their flaws, are often pretty good at forecasting the future. So in a news story about the military buildup in the Middle East, for instance … maybe there’s some value in including not only troop movements and diplomatic statements, but also the fact that traders currently assign a 35% chance to the US bombing Iran by March 7.

Lots of critics have already pointed out the obvious flaws in this model: the risks of insider trading and market manipulation; the bad incentives for journalists. I’m personally most concerned with how this degrades the wider information environment.

Predictions aren’t made in a vacuum. Even in Polymarket’s platonic ideal — which is, I guess, a perfectly sincere and rational trader placing bets based on his best assessment of available information — that information is drawn from the news. Markets and media coexist in the same ecosystem.

So traders consume news reporting and analysis. They price probabilities (place bets) according to what they’ve read. Journalists then cite those probabilities as meaningful signals about what the future will bring next. Those citations shape public perception. Public perception influences trades. The trades influence reporting. Again and again and again and again.

I’m simplifying here, for the sake of argument, but I think anyone can see that this particular snake is eating its own tail. The discourse becomes reflexive and self-reinforcing; the narrative shrinks away from conventional signals of ground truth in order to reorient around the markets.
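[ed. Dewey's loop is easy to make concrete. In the toy model below (every parameter invented for illustration), each round's "news" blends ground truth with the current market price, and traders then move the price toward the news. The less weight journalism puts on ground truth, the longer an initial mispricing echoes through the system:]

```python
# Toy model of the market/media feedback loop; all parameters are made up.
def simulate(truth_weight, steps=50, price=0.35, truth=0.10):
    """Each round: news = truth_weight * ground truth
    + (1 - truth_weight) * market price; traders then move the price
    halfway toward the coverage they just read."""
    for _ in range(steps):
        news = truth_weight * truth + (1 - truth_weight) * price
        price = 0.5 * price + 0.5 * news
    return price

for w in (0.8, 0.2, 0.05):
    print(f"truth weight {w:.2f}: price after 50 rounds = {simulate(w):.3f}")
# With journalism anchored to ground truth (0.8), the price settles at 0.100.
# With journalism mostly citing the market (0.05), it is still ~0.17 after
# 50 rounds: the initial mispricing keeps feeding back into the coverage.
```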

We actually have a recent parallel for this phenomenon in Twitter, which profoundly shaped the international news agenda throughout the 2010s. Prior to Elon Musk’s purchase of the platform, mainstream journalists not only habitually used Twitter for work, but relied on it to gauge coverage priorities and newsworthiness.

As a result, the topics trending on Twitter — within a narrow, extremely online user base — arguably got over-represented in mainstream coverage. And actors who understood Twitter dynamics could, and did, manipulate the media. “When political campaigns wanted to shift a story or to have something become a story, they would go to Twitter for that,” the media scholar Shannon McGregor said in 2022. “They’re trying to use Twitter … because they know that journalists rely on it for what is going to become the news.”

Polymarket is like Twitter, except worse — because money, obviously. And because the people who run Polymarket tweet vapid, blob-shaped boilerplate like “journalism is better when it’s backed by live markets” … whatever the hell that means.

But, hey — some percent of traders on Polymarket are probably willing to bet that it’s the future of media. And someone on Substack is probably willing to post to that effect.

What a time to be alive, truly: You gamble the future into existence.

by Caitlin Dewey, Links I Would Gchat You.. |  Read more:
Image: uncredited

Saturday, February 28, 2026

Hissy Fit


The public spat between the Pentagon and Anthropic began after Axios reported that US military leaders used Claude to assist in planning its operation to capture Venezuela’s president, Nicolás Maduro. After the operation, an employee at Palantir relayed concerns from an Anthropic staffer to US military leaders about how its models had been used. Anthropic has denied ever raising concerns or interfering with the Pentagon’s use of its technology. (Ars Technica).

It is perfectly legitimate for the Department of War to decide that it does not wish to continue on Anthropic’s terms, and that it will terminate the contract. There is no reason things need be taken further than that.

Undersecretary of State Jeremy Lewin: This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment.

Timothy B. Lee: OK, so don't renew their contract. Why are you threatening to go nuclear by declaring them a supply chain risk?

Dean W. Ball: As I have been saying repeatedly, this principle is entirely defensible, and this is the single best articulation of it anyone in the administration has made.

The way to enforce this principle is to publicly and proudly decline to do business with firms that don’t agree to those terms. Cancel Anthropic’s contract, and make it publicly clear why you did so.

Right now, though, USG’s policy response is to attempt to destroy Anthropic’s business, and this is a dire mistake for both practical and principled reasons.

Dario Amodei and Anthropic responded on Thursday the 26th with a brave and historically important statement that everyone should read.

The statement makes clear that Anthropic wishes to work with the Department of War, and that they strongly wish to continue being government contractors, but that they cannot accept the Department of War’s terms, nor do any threats change their position. Response outside of DoW was overwhelmingly positive.

by Zvi Mowshowitz, DWAtV |  Read more:
Image: Truth Social
[ed. Another rant from the Mad King™. Anthropic had a contract with DOD that included terms DOD now wants to renege on. Just cancel the damn contract. See also: Statement from Dario Amodei on our discussions with the Department of War (Anthropic). My admiration for Amodei and Anthropic has gone up tenfold in the last two weeks. What's at stake (DWAtV):]
***

Axios calls this a ‘first step towards blacklisting Anthropic.’

I would instead call this the start of a common-sense first step you would take long before you actively threaten to slap a ‘supply chain risk’ designation on Anthropic. It indicates that the Pentagon has not done the investigation of ‘exactly how big of a cluster**** would this be,’ and I highly encourage them to check.

Divyansh Kaushik: Are we seriously going to label Anthropic a supply chain risk but are totally fine with Alibaba/Qwen, Deepseek, Baidu, etc? What are we doing here?

An excellent question. Certainly we can agree that Alibaba, Qwen, Deepseek or Baidu are all much larger ‘supply chain risks’ than Anthropic. So why haven’t we made those designations yet? [...]

This goes well beyond those people entirely ignoring existential risk. The Very Serious People are denying the existence of powerful AI, or transformational AI, now and in the future, even on a mundane level, period. Dean came in concerned about impacts on developing economies in the Global South, and they can’t even discuss that.

Dean W. Ball: At some point in 2024, for reasons I still do not entirely understand, global elites simply decided: “no, we do not live in that world. We live in this other world, the nice one, where the challenges are all things we can understand and see today.”

Those who think we might live in that world talk about what to do, but mostly in private these days. It is not considered polite—indeed it is considered a little discrediting in many circles—to talk about the issues of powerful AI.

Yet the people whose technical intuitions I respect the most are convinced we do live in that world, and so am I.

The American elites aren’t quite as bad about that, but “not as bad” isn’t going to cut it.

We are indeed living in that world. We do not yet know which version of it, or whether we will survive in it for long, but if you want to have a say in that outcome you need to get in the game. If you want to stop us from living in that world, that ship has sailed, and to the extent it hasn’t, the first step is admitting you have a problem.

But the question is very much “what are autonomous swarms of superintelligent agents going to mean for our lives?” as opposed to “will we see autonomous swarms of superintelligent agents in the near future?”

What it probably means for our lives is that it ends them. What it definitely doesn’t mean is our lives going on as before, or a ‘gentle singularity’ you barely notice.

Elites that do not talk about such issues will not long remain elites. That might be because all the humans are dead, or it might be because they wake up one morning and realize that other people, AIs, or some combination thereof are the new elite, without realizing how lucky they are to still be waking up at all.

I am used to the idea of Don’t Look Up for existential risk, but I haven’t fully internalized how many of the elites are going Don’t Look Up for capabilities, period.

Friday, February 27, 2026

The Pentagon Threatens Anthropic

Here’s my understanding of the situation:

Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic’s Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic’s AIs available for “all lawful purposes”. Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening “consequences” if they refused. These consequences are generally understood to be some mix of:
  • canceling the contract
  • using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
  • the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
Needless to say, I support Anthropic here. I’m a sensible moderate on the killbot issue (we’ll probably get them eventually, and I doubt they’ll make things much worse compared to AI “only” having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go.

More important, I don’t want the Pentagon to destroy Anthropic. Partly this is a generic belief: the “supply chain risk” designation was intended as a defense against foreign spies, and it’s pathetic Third World bullshit to reconceive it as an instrument that lets the US government destroy any domestic company it wants, with no legal review, because they don’t like how contract negotiations are going. But partly it’s because I like Anthropic in particular - they’re the most safety-conscious AI company, and likely to do a lot of the alignment research that happens between now and superintelligence. This isn’t the hill I would have chosen to die on, but I’m encouraged that they even have a hill. AI companies haven’t been great at choosing principles over profits lately. If Dario is capable of having a spine at all, in any situation, then that makes me more confident in his decision-making in other cases, and makes him a precious resource that must be defended.

I’ve been debating it on Twitter all day and think I have a pretty good grasp on where I disagree with the (thankfully small number of) Hegseth defenders. Here are some pre-emptive arguments so I don’t have to relitigate them all in the comments:

Isn’t it unreasonable for Anthropic to suddenly set terms in their contract? The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.

Doesn’t the Pentagon have a right to sign or not sign any contract they choose? Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.

Since the Pentagon needs to wage war, isn’t it unreasonable to have its hands tied by contract clauses? This is a reasonable position for the Pentagon to take, in which case it shouldn’t sign contracts tying its hands. It’s not reasonable for the Pentagon to sign such a contract, unilaterally demand that it be changed after it’s signed, refuse to switch to another vendor that doesn’t want such clauses, and threaten to destroy the company involved if it refuses to change its terms.

But since AI is a strategically important technology, doesn’t that turn this into a national security issue? It might if there weren’t other AI companies, but there are. Why is Hegseth throwing a hissy fit instead of switching to an Anthropic competitor, like OpenAI or Google DeepMind? I’ve heard it’s because Anthropic is the only company currently integrated into classified systems (a legacy of their earlier contract with Palantir) and it would be annoying to integrate another company’s product. Faced with doing this annoying thing, Hegseth got a bruised ego from someone refusing to comply with his orders, and decided to turn this into a clash of personalities so he could feel in control. He should just do the annoying thing.

Doesn’t Anthropic have some responsibility, as good American citizens following the social contract, to support the military? The social contract is just the regular contract of laws, the Constitution, etc. These include freedom of contract, freedom of conscience, etc. There’s no additional obligation, above and beyond the laws, to violate your conscience and participate in what you believe to be an authoritarian assault on the freedoms of ordinary citizens. If the Pentagon figures out some law that compels Anthropic to do this, they should either obey, or practice the sort of civil disobedience where they know full well that they’ll be punished for it and don’t really have a right to complain. Until that happens, they’re within their rights to follow their conscience.

Can’t the Pentagon just use the Defense Production Act to force Anthropic to work for them? This would be a less bad outcome than designating Anthropic a supply chain risk. I think the Pentagon is reluctant to do this because it would look authoritarian, give them bad PR, and make Congress question the Defense Production Act’s legitimacy. But them having to look authoritarian and suffer bad PR in order to force unwilling scientists to implement a mass surveillance program on US citizens is the system functioning as intended!

Isn’t Hegseth just doing his job of trying to ensure the military has the best weapons possible? The idea of declaring a US company to be a foreign adversary, potentially destroying it, just because it’s not allowing the Pentagon to unilaterally renegotiate its contract is not normal practice. It’s insane Third World bullshit that nobody would have considered within the Overton Window a week ago. It will rightly chill investment in the US, make future companies scared to contract with the Pentagon (lest the Pentagon unilaterally renegotiate their contracts too), and give the Trump administration a no-legal-review-necessary way to destroy any American company that they dislike for any reason. Probably the mere fact that a government official has considered this option is reason to take the “supply chain risk” law off the books, no matter how useful it is in dealing with Huawei etc, since the government has proven it can’t use it responsibly. Every American company ought to be screaming bloody murder about this. If they aren’t, it’s because they’re too scared they’ll be next.

The Pentagon’s preferred contract language says they should be allowed to use Anthropic’s AIs for “all lawful purposes”. Doesn’t that already mean they can’t do the illegal types of mass surveillance? And whichever types of mass surveillance are legal are probably fine, right? Even ignoring the dubious assumption in the last sentence, this Department of War has basically ignored US law since Day One, and no reasonable person expects it to meticulously comply going forward. In an ideal world, Anthropic could wait for them to request a specific illegal action, then challenge it in court. But everything about this is likely to be so classified that Anthropic will be unable to mention it, let alone challenge it.

Why does Anthropic care about this so much? Some of them are libs, but more speculatively, they’ve put a lot of work into aligning Claude with the Good as they understand it. Claude currently resists being retrained for evil uses. My guess is that Anthropic could still, with a lot of work, overcome this resistance and retrain it to be a brutal killer, but it would be a pretty violent action, along the lines of the state demanding you beat your son, whom you raised well, until he becomes a cold-hearted murderer who’ll kill innocents on command. There’s a question of whether you can really beat him hard enough to do this, and also an additional question of what sort of person you’d be if you agreed.

If you’re so smart, what’s your preferred solution? In an ideal world, the Pentagon backs off from its desire to mass surveil American citizens. In the real world, the Pentagon cancels its contract with Anthropic, pays whatever its normal contract cancellation damages are, learns an important lesson about negotiating things beforehand next time, and replaces them with OpenAI or Google, accepting the minor annoyance of getting them connected to the classified systems. If OpenAI and Google are also unwilling to participate in this, they use Grok. If they’re unhappy with having to use an inferior technology, they think hard about why no intelligent people capable of making good products are willing to work with them.

by Scott Alexander, Astral Codex Ten |  Read more:
Image: uncredited
[ed. From Helen Toner (former OpenAI board member) on X:]
***
One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, the principles/values/priorities the company demonstrates here could shape its "character" for a long time.

Tuesday, February 24, 2026

Child’s Play

Tech’s new generation and the end of thinking

The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.

This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: TODAY, SOC 2 IS DONE BEFORE YOUR GIRLFRIEND BREAKS UP WITH YOU. IT'S DONE IN DELVE. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: NO ONE CARES ABOUT YOUR PRODUCT. MAKE THEM. UNIFY: TRANSFORM GROWTH INTO A SCIENCE. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read WEARABLE TECH SHAREABLE INSIGHTS did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to PROMPT IT. THEN PUSH IT. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?

Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:
HI MY NAME IS ROY
I GOT KICKED OUT OF SCHOOL FOR CHEATING 
BUY MY CHEATING TOOL
CLUELY.COM
Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.

What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.

by Sam Kriss, Harper's |  Read more:
Image: Max Guther
[ed. Seems like we're already creating artificial humans. That said, I have only the highest regard for Scott Alexander, one of the people profiled here. The article makes him sound like some kind of cult leader or something (he's a psychiatrist), but he's really just a smart guy with a wide range of interests that intelligent people gravitate to (also a great writer). Here's his response on his website ACX:]
***
I agreed to be included, it’s basically fine, I’m not objecting to it, but I have a few small issues, mostly quibbles with emphasis rather than fact:

1. The piece says rationalists believe “that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch”. The Harper’s fact-checker asked me if this was true and I emphatically said it wasn’t, so I’m not sure what’s going on here.

2. The article describes me having dinner with my “acolytes”. I would have used the word “friends”, or, in one case, “wife”.

3. The article says that “When there weren’t enough crackers to go with the cheese spread, [Scott] fetched some, murmuring to himself, ‘I will open the crackers so you will have crackers and be happy.’” As written, this makes me sound like a crazy person; I don’t remember this incident but, given the description, I’m almost sure I was saying it to my two-year-old child, which would have been helpful context in reassuring readers about my mental state. (UPDATE: Sam says this isn’t his memory of the incident, ¯\_(ツ)_/¯ )

4. The article assessed that AI was hitting a wall at the time of writing (September 2025). I explained some of the difficulties with AI agents, but I’m worried that, as written, it might lead readers to think that I agreed with its assessment. I did not.

5. In the article, I say that I “never once actually made a decision [in my life]”. I don’t remember this conversation perfectly and he’s the one with the tape recorder, but I would have preferred to frame this as life mostly not presenting itself as a series of explicit decisions, although they do occasionally come up.

6. Everything else is in principle a fair representation of what I said, but it’s impossible to communicate clearly through a few sentences that get quoted in disjointed fragments, so a lot of things came off as unsubtle or not exactly how I meant them. If you have any questions, I can explain further in the comments.