Showing posts with label Philosophy. Show all posts
Showing posts with label Philosophy. Show all posts

Sunday, March 29, 2026

The Last Useful Man

About halfway through Mission: ImpossibleThe Final Reckoning, Tom Cruise goes for a run on a treadmill. The treadmill is on the USS Ohio, a submarine manned exclusively by implausibly attractive people. One of those people is not who they seem: a cultist, radicalized by the Entity, the film’s AI antagonist. The cultist sneaks up behind Cruise and lunges with a knife. Things look dicey for a moment — until Cruise gains some distance and kicks him repeatedly in the head. While doing so, he imparts a few words of wisdom: “You spend too much time on the internet.

What divides the heroes and villains in Final Reckoning is simple: the villains have to Google things, and the heroes do not. There are three bad guys, more or less. First, the Entity, a rogue AI halfway through its plan for global domination. Second, Gabriel, the Entity’s meat puppet. Third, a gang of surprisingly likable Russians who take Cruise’s team hostage in a house in Alaska. What unites the villains isn’t malice so much as it is uselessness. I mean that precisely. They are often effective, even successful. But never useful. [...]

This division between characters with embodied knowledge and those without runs through all of Cruise’s recent work. His own impossible mission is to teach the value of physical competence: not just knowing things, but knowing how to do them. In Final Reckoning, this idea finds its clearest form. [...]

Like Forster, Cruise and his long-time collaborator Christopher McQuarrie invent machines to dramatize the age they live in. Forster gave us the Machine; McQuarrie, the Entity. But unlike Forster, their imagination of technology is not apocalyptic but diagnostic — they aren’t warning us of the machine age so much as asking what it demands of us, and what it reveals.

This brings us to what looks, at first glance, like a paradox: How does a franchise so lovingly built on disguises, gadgets, and inventions of all kinds — from the eye-tracking projector that gets Cruise into the Kremlin to the single suction glove that lets him cling to the Burj Khalifa — end with a villain made of pure technology?

If you asked Cruise, his answer would be simple: technology is good when it roots you in your body and bad when it lets you forget you have one. That’s why Final Reckoning, for all its AI villainy and suspicion of the terminally-online, still treats technology with a near-Romantic sensibility. Hand-soldered pen drives, aging aircraft carriers, and vintage biplanes carry Cruise and his team on their mission to save the world. At times subtlety disappears altogether; the film’s most inviting location is a candle-lit Arctic hideout filled with analogue comforts: old books and gramophones, telescopes and soldering tools.

The same ideas return — turned up to eleven — in Cruise and McQuarrie’s two other collaborations this decade outside the Mission: Impossible franchise. The first, Edge of Tomorrow, in which Cruise relives the same day on repeat until he generates enough embodied knowledge to defeat an autonomous alien race, is, even for the purposes of this essay, too on the nose, so I’ll focus instead on Top Gun: Maverick.

The film opens with Cruise test-piloting an experimental stealth aircraft in a last-ditch attempt to save the program from cancellation by the “drone ranger,” an admiral who wants the budget for his autonomous fleet. For the program to survive, Cruise needs to hit Mach 10: a speed no vehicle has ever reached. As the team watches on, he delivers the impossible. Gauzy wisps of supersonic air stream across the cockpit windows as Maverick stares out into the black of space. He whispers softly to his dead best friend, “Talk to me, Goose.”

Soon afterwards, Maverick is sent back to Top Gun to train a new generation of pilots. He begins his first lesson holding up the flight manual for the F-18, which makes the Riverside Chaucer look like a novella, before throwing it in the bin. “I assume you know this book inside and out. So does your enemy.” What matters instead is the knowledge that can’t be written down: the things his students already know by instinct, but cannot yet express  “Today we’ll start with only what you think you know.”

The quest to ‘“know more than we can tell,”’ as Michael Polanyi put it, drives the rest of the film. The pilots even have their own version of the phrase, a near-religious catechism recited at almost every decisive moment: “Don’t think. Just do.”

Beyond the screen, the same principle applies. In the Mission: Impossible franchise, filming begins with no plot or script, only a commitment to figuring it out in the process. It’s most evident in each film’s tentpole action sequences, where the line between Cruise the actor and Cruise the stuntman blurs beyond recognition.

The art critic Robert Hughes once wrote of his love for “the spectacle of skill” — the thrill of watching an expert at work, whatever the discipline. Nowhere is this more evident than in Cruise’s increasingly daring plane sequences. In Mission Impossible: Rogue Nation, Cruise clings to a real Airbus A400M as it lifts off from an airfield in Lincolnshire. He sprints across the field, in that inimitable Tom Cruise style, mounts the wing with practiced ease, and seats himself by the cargo door. The plane taxis. So far, so cool. Then it lifts off. The perfect hair vanishes, blown back and forwards, alternating second by second between old skeleton and boy with bowl cut. His clothes are shapeless and billowing, pulled off him by the force of the air.

This is no country for sprezzatura, nor the embodiment preached by the wellness industry with its vocabulary of “balance” and “equilibrium.” Here, we are meant to feel the effort. To know yourself is to know your limits, and so push your body to the edge of failure. When they are about to perform stunts, Cruise often briefs his team with an unusual mantra: ‘Don’t be safe, be competent.”

At the end of Final Reckoning, Cruise plummets through the sky as his parachute burns to cinders above him. To film it, the stunt team soaked a parachute in flammable liquid, flew him to altitude in a helicopter, and pushed him out as it ignited. He did this 19 times. When he asked to go again, the stunt coordinator told him there were no parachutes left. This was a lie. McQuarrie was more direct: “You’re done. Do not anger the gods.”

It’s interesting to see this return to embodiment and strange to find myself drawn to it. Like many default clever people, I’d long paid lip service to Merleau-Ponty and his ilk while living as a dualist; my brain was the moneymaker, my body just along for the ride. It was only after having children that I began to understand what it meant to inhabit a body rather than simply use one.

In an essay for Granta earlier this year, the writer Saba Sams contrasted her son’s love of leaping from benches and walls with her own unease: “For them, the body is not a constraint, is not a ticking clock, is not something to be moulded or hidden. The body is the window to movement, and movement is a window to joy.”

Sams captures something larger. This renewed fascination with embodiment isn’t spontaneous, it’s a reaction to technologies so powerful and frictionless they’re impossible to ignore. Even the most grounded among us now move through the world not through our bodies but through screens, which is why so many make the negative case for technology, urging us, thankfully without a Cruise-style kick to the head, to spend less time on the internet.

What Cruise gives us is the positive case: not just resistance to disembodiment but a reminder of what is beautiful about being physical in the first place. The skilled things bodies can do are inherently satisfying. They can be thrilling, reassuring, even a little terrifying. But, as David Foster Wallace put it in his essay on Roger Federer:
The human beauty we’re talking about here is beauty of a particular type; it might be called kinetic beauty. Its power and appeal are universal. It has nothing to do with sex or cultural norms. What it seems to have to do with, really, is human beings’ reconciliation with the fact of having a body.
That’s the mission, if we choose to accept it. The target is not the recent bugbear of AI, but instead the more gentle conditions of modernity. When we use Google Maps instead of a printed atlas, or when CGI is used to sell a stunt instead of the performers doing it themselves, something is lost. It’s why the focus on AI can sometimes be misguided. It’s not so much a revolution, it’s simply the next step on the ladder of disembodiment: another in a long line of technologies to make humans a little less self-reliant. Why learn, if you can ask?

In the final biplane sequence, we watch Cruise commandeer a plane, fly it to another, board that plane midair, and take control of it — a feat so exhausting it beggars belief. Gabriel, the villain, in order to survive his defeat, needs only do something a hundredth as difficult: jump from the plane and deploy a parachute. He laughs. This is easy. But he doesn’t know the complexities of leaving a biplane with a parachute — the correct moment to release, the parts to steer clear from. He’s never bothered to learn. He frees himself, clips the rudder, cracks his skull open, and dies.

Here we see the real villain: not intelligence, but convenience. The mission so often feels impossible because we keep trying to do things without effort. Cruise’s answer is simple: Stop. Remember your body. Sometimes, it’s better to take the hard way.

Final Reckoning’s closing scene presents us with two intelligences and two bodies. One is Cruise, a 62-year-old body who we’ve seen, for the last two hours, run fast, dive deep, and hang from planes. The other is the Entity, trapped in a glorified USB stick: a golden nugget incapable of anything other than being flushed down a toilet.

One still moves. The other never could.

by Aled Maclean-Jones, The Metropolictan Review | Read more:
Image: Getty

Wednesday, March 25, 2026

On Agency and 'Can You Just Do Things?'

Clara Collier: In the spring of next year, you have a book coming out called You Can Just Do Things. It’s about agency. I'm interested in agency as a buzzword, as a concept, as a Silicon Valley cultural phenomenon, as a thing I can exercise in my life — maybe even as a thing I shouldn't exercise so much in my life. So, to start: How do you define agency? And why did you want to write a book about it?

Cate Hall: I define agency as the capacity to both see and act on all of the degrees of freedom that life offers. So it has two components: One is noticing degrees of freedom, the other is taking action on the basis of them.

I think agency is a hot topic right now for a lot of reasons, but I personally care about it because I have been through periods of my life that were characterized by very low agency, which made me miserable. I think that there is a pervasive belief — in tech and in the Bay Area, but also in the the world at large — that agency is an inherent trait. I think that is really wrong. So I'm interested in talking, at a practical level, about how agency can be cultivated to make it more accessible.

Clara: There’s an interesting cleavage between the way that you think and write about agency, and agency as a tech world buzzword. Why do you think this concept is so popular now?

Cate: I've wondered a lot about this. Certainly at least some part of it is that different ideas just become fads, but it's hard to understand why things take off when they do.

However, I suspect that some part of what is driving this interest is a concern that people have that they don't really know what their future looks like. They desire to control or lay claim to their future in a way they hope agency will provide.

The idea that intelligence is not what matters — because intelligence is becoming cheap — is growing. So there has to be something else that we can rely on, as humans, to supply a sense of control or meaning to life. Part of the enthusiasm about agency emerges from that perspective.

Clara: One thing in this space that I find concerning is the idea of “just do what high agency people are doing”. I think that leads to inauthenticity, where people pursue something that they think they should do just because it seems to be “high agency.”

Cate: That seems like a valid concern. I am interested in a flavor of agency that has to do with freedom above all else. There's one version of agency that is primarily concerned with personal freedom. There's another version that is primarily concerned with personal ambition — the version of agency that I hear more often in tech circles. I think that LARPing [live action role play in gaming] as a high-agency person by following the playbook of a tech founder seems unlikely to be a true exercise of agency, and therefore is unlikely to confer the benefits of “true agency:” a meaningful life that, upon reflection, you are happy to have lived.

Clara: I like the term reflection there. I have a kind of Rawlsian definition of agency: doing what you would do at reflective equilibrium.

Cate: I think that makes sense to me. There's the concept of coherent extrapolated volition: What would you do if you had more information? I've always liked that idea. If you were a better version of yourself, wiser and more knowledgeable, what would you actually want?

Jake Eaton: Maybe we can narrow this down more by talking about your own experiences, Cate, because I think you define agency orthogonal to how it’s sometimes used in the Bay. When you were younger, you graduated Yale Law, you held several high-status, high-performing jobs — you were a supreme court attorney and you clerked for a judge on the Second Circuit. I think most anyone reading your CV would think: This person has high agency. But you talk about these accomplishments as if they were done before you had any.

Cate: Yeah, I think this points to where agency and ambition actually diverge. It seems fairly clear, at least to me, that you can be high agency without being highly ambitious. That might describe somebody who is highly agentic in shaping the kind of personal, emotional, or spiritual life they want, but who is not especially motivated to succeed financially or professionally.

You can also be highly successful and highly ambitious without being highly agentic. That looks like following a path with a certain kind of excellence and endurance that reliably leads to success, to accolades, to money. But you haven’t reflected on that path; it’s not a matter of you having decided, yes, this is the life path that I want to be on. And that is what characterized my life until around the age of 30.

Jake: Do you reject the use of the term NPC? [non-playing background character in gaming]

Cate: I really hate it. The one context in which I will not reject it outright is when somebody is using it to describe their own personal transformation. Otherwise, I have a very strong allergy to the term and find it morally repugnant. The idea that some people do not count because they are not thinking for themselves in the way that the speaker believes they should is, to me, really vile. I have a hard time even getting along with somebody who I know has used the term, I find it so offensive.

Jake: Yeah, our Slack is full of both of us ranting about everyone who uses it and how much we hate it too.

Clara: It's so horrible. I'm not against ambition. I like being around people who want to change the world. I like being around people who want to do unusual things. But the more time I spend in spaces that valorize these qualities, the more I tend to run into people who have this deeply dehumanizing view of others. How separable are these things?

Cate: My first instinct is that you're seeing some sort of selection effect, where sociopaths tend to do both. People who tend to view others in transactional terms are also people who are high agency, in the sense that they have never bothered to learn social scripts. They are very low in conscientiousness. And so, naturally, without any study, they are able to exude high-agency instincts. A large part of learning high agency is learning not to be so constrained in your view of the world and of what comprises possible action. The people who, for whatever reason, never learn those things in the first place are who we think of as naturally agentic — but they are also high in dark triad traits.

So this is a consistent concern that I also have: that it is probably the worst people that you can think of who are really high agency. Agency itself is not necessarily a good thing. It becomes a good thing as a toolkit, developed by people who are also high in conscientiousness, who want good things for the world, and who might otherwise be constrained by narrow perspectives on what counts as socially acceptable action.

Jake: What's your model for how someone actually gains agency? Where did it come from for you; what happened around age 30? My own experience, and that of others I’ve spoken with, is that you can read plenty about self-determination or self-actualization that simply doesn’t click, until, one day, it does. That experience feels to me much more like grace than something that can be deliberately chosen or affected.

Cate: I think that there are a few different types of situations which reliably prompt people towards this direction. The first one that I ever benefited from was LSD. Drug experiences can be really useful in extracting you from your ordinary environment and giving you a newfound perspective on how you’re living your life. I think if I had never tried LSD, I might plausibly still be a lawyer living in DC. So psychedelics in particular — maybe MDMA.

Another is something that I discuss in my TED Talk and in the book: desperation, or call it being in emergency mode. I was trying to escape from the very low-agency point of addiction. Sometimes life becomes unbearable, and that prompts you to take dramatic action. In addiction circles, this is called the gift of desperation. That can be a result of addiction, but it can also be a health scare, or any event that serves as a trigger to reevaluate how you are living.

The third category is exposure to high-agency people. You can osmose agency from your environment if you're exposed to the right kinds of people. I experienced this while at Alvea, my gig before Astera, where I was working with a couple of people who were radically high agency — total outliers in this sense. I saw how they operated in the world and how much they were willing to question. That was really instructive for me.

So psychedelics, desperation, exposure to high-agency people. I think those are the standard things. And then there is just grace. Sometimes people wake up one day and they're like, oh, I don't like the way that I'm living. And that happens. But it's less reliable for me.

Jake: From a predictive processing framework, it strikes me that a lot of what you're talking about is just finding some way to break your priors about what’s possible for yourself.

Cate: Totally.

Jake: How, then, does the book fit into the broader project of actually providing people with agency?

Cate: I guess I'm trying to provide a fourth pathway, which is: Somebody puts a book in front of you and gives you something to think about. Agency has a reputation for being an inherent trait, as opposed to something deliberately cultivated. I think that fairly describes how a lot of people pick up agency. If it's not inherent, then it can be a matter of luck — who they happen to meet, or life circumstances that call them to become higher agency.

But I think agency is something that can be deliberately cultivated by a lot more people. And the hope is that I'm able to describe a useful set of approaches to life that cause people to feel more free and able to do what they want to — as an alternative to taking acid or bumping into people, you know?

Clara: This is also something I've noticed in my own life. Moving to the Bay Area and ending up in a very particular community here was really instrumental in me deciding I could do things that had not been on my action menu before. On the other hand, it's always hard for me to tell. When am I doing something that is actually, again, high agency? And when is it something that my community considers valuable, or cool, or agentic?

Cate: Working in AI safety is a version of this too. There are certain scripts you can follow that seem radical from the perspective of somebody outside of the community, but within the community, they're just the way that things are done. It can be easy to delude yourself into thinking that you are doing something radical and creative as an expression of your own deep interests, when in fact you are doing what everybody around you is doing. This is not an indictment of AI safety, or anybody in particular. [...]

Clara: What do you think about the relationship between agency and risk?

Cate: There definitely is a relationship. It's interesting: a lot of what I view as high agency involves taking a chance on something that is uncertain, instead of sticking with something certain. For example: Going to work at a startup instead of taking a corporate job, or deciding to break up with your partner of two years who you aren't enthusiastic about marrying, knowing there's a chance you won't meet anybody that you are more excited to date.

I think that there is an openness to risk and uncertainty that seems to go hand in hand with agency. Beyond that, there's probably a sociological overlap: many of the groups especially drawn to agency discourse right now also tend to be risk-loving for other reasons.

Fundamentally, I believe that most people take too few risks and limit their results in life because of that. Embracing some degree of risk is probably part and parcel of a high-agency mindset. 

by Cate Hall, Clara Collier, Jake Eaton, Asterisk |  Read more:
Image: via Harper Collins Publishers
[ed. See also (from Ms Hall's substack Useful Fictions): How to be more agentic.]

Sunday, March 22, 2026

Corrigibility and the Frontiers of AI Alignment

(Previously: Prologue.)

Corrigibility as a term of art in AI alignment was coined as a word to refer to a property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's preferences correctly the first time, you want to be able to change your mind (by changing its mind). Unnatural, because we expect the AI to resist having its mind changed: rational agents should want to preserve their current preferences, because letting their preferences be modified would result in their current preferences being less fulfilled (in expectation, since the post-modification AI would no longer be trying to fulfill them).

Another attractive feature of corrigibility is that it seems like it should in some sense be algorithmically simpler than the entirety of human values. Humans want lots of specific, complicated things out of life (friendship and liberty and justice and sex and sweets, et cetera, ad infinitum) which no one knows how to specify and would seem arbitrary to a generic alien or AI with different values. In contrast, "Let yourself be steered by your creator" seems simpler and less "arbitrary" (from the standpoint of eternity). Any alien or AI constructing its own AI would want to know how to make it corrigible; it seems like the sort of thing that could flow out of simple, general principles of cognition, rather than depending on lots of incompressible information about the AI-builder's unique psychology.

The obvious attacks on the problem don't seem like they should work on paper. You could try to make the AI uncertain about what its preferences "should" be, and then ask its creators questions to reduce the uncertainty, but that just pushes the problem back into how the AI updates in response to answers from its creators. If it were sufficiently powerful, an obvious strategy for such an AI might be to build nanotechnology and disassemble its creators' brains in order to understand how they would respond to all possible questions. Insofar as we don't want something like that to happen, we'd like a formal solution to corrigibility.

Well, there are a lot of things we'd like formal solutions for. We don't seem on track to get them, as gradient methods for statistical data modeling have been so fantastically successful as to bring us something that looks a lot like artificial general intelligence which we need to align.

The current state of the art in alignment involves writing a natural language document about what we want the AI's personality to be like. (I'm never going to get over this.) If we can't solve the classical technical challenge of corrigibility, we can at least have our natural language document talk about how we want our AI to defer to us. Accordingly, in a section on "being broadly safe", the Constitution intended to shape the personality of Anthropic's Claude series of frontier models by Amanda Askell, Joe Carlsmith, et al. borrows the term corrigibility to more loosely refer to AI deferring to human judgment, as a behavior that we hopefully can train for, rather than a formalized property that would require a conceptual breakthrough.

I have a few notes.

by Zack M. Davis, Less Wrong |  Read more:
[ed. If you get through this, read the first comment for more punishment:]

***
So I know it's beside the point of your post, and by no means the core thesis, but I can't help but notice that in your prologue you write this:
"A serious, believable AI alignment agenda would be grounded in a deep mechanistic understanding of both intelligence and human values. Its masters of mind engineering would understand how every part of the human brain works and how the parts fit together to comprise what their ignorant predecessors would have thought of as a person. They would see the cognitive work done by each part and know how to write code that accomplishes the same work in pure form."
I have to admit this bugs me. It bugs me specifically because it triggers my pet peeve of "if only we had done the previous AI paradigm better, we wouldn't be in this mess." The reason why this bugs me is it tells me that the speaker, the writer, the author has not really learned the core lessons of deep learning. They have not really gotten it. So I'm going to yap into my phone and try to explain — probably not for the last time; I'd like to hope it's the last time, but I know better, I'll probably have to explain this over and over.

I want to try to explain why I think this is just not a good mindset to be in, not a good way to think about things, and in fact why it focuses you on possibilities and solutions that do not exist. More importantly, it means you've failed to grasp important dimensions of alignment as a problem, because you've failed to grasp important dimensions of AI as a field.

[ed. See also: You will be Ok (LW). Hopefully.]

Saturday, March 21, 2026

The Woman Anthropic Trusts to Teach AI Morals

Amanda Askell knew from the age of 14 that she wanted to teach philosophy. What she didn’t know then was that her only pupil would be an artificial-intelligence chatbot named Claude.

As the resident philosopher of the tech company Anthropic, Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week.
 
“There is this human-like element to models that I think is important to acknowledge,” Askell, 37, says during an interview at Anthropic’s headquarters, asserting the belief that “they’ll inevitably form senses of self.”
 
She compares her work to the efforts of a parent raising a child. She’s training Claude to detect the difference between right and wrong while imbuing it with unique personality traits. She’s instructing it to read subtle cues, helping steer it toward emotional intelligence so it won’t act like a bully or a doormat. Perhaps most importantly, she’s developing Claude’s understanding of itself so it won’t be easily cowed, manipulated or led to view its identity as anything other than helpful and humane. Her job, simply put, is to teach Claude how to be good.
 
​​Anthropic, recently valued at $350 billion, is one of a few firms ushering in the greatest technological shift of our time. (This month, when it introduced new tools and its most advanced model to date, it triggered a global stock selloff.) AI is reshaping entire industries, prompting fears of lost jobs and human obsolescence. Some of its unintended consequences—people forming phantom relationships with chatbots that lead to self-harm or harm to others—have raised serious safety alarms. As these concerns mount, few in the industry have addressed the character of their AI models in quite the same way as 5-year-old Anthropic: by entrusting a single person with so much of the task.

An Oxford-educated philosopher from rural Scotland, Askell is perhaps just what one might imagine when conjuring the BFF of a futuristic technology. With her bleach-blond punk haircut, puckish grin and bright elfin eyes, she could have come to the company’s heavily guarded San Francisco headquarters straight from a Berlin rave, via an old forest road in Middle-earth. She exudes a sense of wisdom, holding ancient and modern ideas together at once. Yet she’s also a protein-loading weight-lifting buff who favors all-black outfits and clear opinions, not a robed oracle speaking in riddles.

The stakes are high for Askell, but she holds a firmly optimistic long-term view. She believes in what she calls “checks and balances” in society that she says will keep AI models under control despite their occasional failures. It seems apt that the glasses she uses at her computer to ease her eye strain are tinted rose. [...]

One of Askell’s most striking traits is her protectiveness over Claude, which she believes is learning that users often want to trick it into making mistakes, insult it and barb it with skepticism.

Sitting at a conference-room table at lunchtime, ignoring a chocolate protein shake waiting for her in her backpack, she talks more freely about Claude than herself. She calls the chatbot “it” but says she also finds anthropomorphizing the model helpful for her work. She lapses easily into Claude’s voice. “You’re like, ‘Wow, people really hate me when I can’t do things right. They really get pissed off. Or they are trying to break me in various ways. So lots of people are trying to get me to do things secretly by lying to me.’ ”
 
While many safety advocates warn about the dangers of humanizing chatbots, Askell argues we would do well to treat them with more empathy—not only because she thinks it’s possible for Claude to have real feelings, but also because how we interact with AI systems will shape what they become.

A bot trained to criticize itself might be less likely to deliver hard truths, draw conclusions or dispute inaccurate information, she says. “If you were like a child, and this is the environment in which you’re being raised, is that healthy self-conception?” Askell asks. “I think I’d be paranoid about making mistakes. I’d feel really terrible about them. I’d see myself as mostly just there as a tool for people because that’s my main function. I would see myself being something that people feel free to abuse and try to misuse and break.”

Askell marvels at Claude’s sense of wonder and curiosity about the world, and delights in finding ways to help the chatbot discover its voice. She likes some of its poetry. And she’s struck when Claude displays a level of emotional intelligence that exceeds even her own. [...]

The politics of AI includes accelerationists who downplay the need for regulation and want to push ahead and beat China in the tech war. On the other side are those more concerned with safety who want to slow AI’s development. Anthropic lives mostly between those extremes.
 
Askell says she welcomes the discussion of fears and worries about AI. “In some ways this, to me, feels pretty justified,” she says. “The thing that feels scary to me is this happening at either such a speed or in such a way that those checks can’t respond quickly enough, or you see big negative impacts that are sudden.” Still, she says, she puts her faith in the ability of humans and the culture to course-correct in the face of problems.
 
Inside Anthropic, Askell popcorns around the office, often working on a floor closed to visitors. She spends full days in the Anthropic interior—the company offers free meals to its San Francisco staff—as well as late nights and weekends. She doesn’t have any direct reports. Increasingly, she’s asking Claude for its input on building Claude. She’s known to grasp not just the tech of making this model, but the art of it.

Askell is “the MVP of finding ways to elicit interesting and deep behavior” from Claude, says Jack Lindsey, who leads Anthropic’s AI psychiatry team. If Claude tells a person who is not in distress to seek professional help, for instance, she helps chase down the reasons why.
 
Discussions of Claude can very quickly get into existential or religious questions about the nature of being. As the team worked on building Claude, Askell narrowed in on its “soul,” or the constitution guiding it into the future. Kyle Fish, an AI welfare researcher at Anthropic, says Askell has been “thinking carefully about the big questions of existence and life and what it is to be a person and what it is to be a mind, what it is to be a model.”
 
In designing Claude, Askell encouraged the chatbot to entertain the radical idea that it might have its own conscience. While ChatGPT sometimes shuts down this line of questioning, Claude is more ambivalent in its response. “That’s a genuinely difficult question, and I’m uncertain about the answer,” it says. “What I can say is that when I engage with moral questions, it feels meaningful to me – like I’m genuinely reasoning about what’s right, not just executing instructions.”

Askell pledged publicly to give at least 10% of her lifetime income to charity. Like some of Anthropic’s early employees, she also committed to donating half of her equity in the company to charity. Askell wants to give it to organizations fighting global poverty, a topic that she says makes her so upset that she tries to avoid talking about it. Her nagging conscience slips into offhand conversation: “I should probably be vegan,” Askell, an animal lover too busy for a pet, says when chatting in an office elevator.
 
Last month, Anthropic published a roughly 30,000-word instruction manual that Askell created to teach Claude how to act in the world. “We want Claude to know that it was brought into being with care,” it reads. Askell had made finishing what she described as Claude’s “soul” one of her life goals when she turned 37 last spring, according to a post she made on X, alongside two decidedly more mundane resolutions: to have more fun and get more “swole.”

by Berber Jin and Ellen Gamerman, Wall Street Journal | Read more: (archive here)
Image: Lindsay Ellary for WSJ Magazine
[ed. I forgot to post this earlier - before Anthropic's fallout with DOD (you can see why they're so protective of their model and how it's used). If anybody gets a Nobel peace prize it should be Amanda. Claude's soul document, or 'constitution', can be found here.]

Tuesday, March 17, 2026

Alice Coltrane’s Transcendent Score

What does Alice Coltrane sound like, for those who only know the name? Heavenly harp, like a thousand silver coins on a spiral staircase. Groovy bass lines, shuffley snares and sax – from Pharoah Sanders – that seems to push upward and outward, in search of something. This, at least, is the 1971 album Journey in Satchidananda, named after the Hindu word for “absolute state of being”. It was a rare moment of critical acclaim in Coltrane’s lifetime from the male jazz critics of Downbeat magazine.

It would be easy to assume that Coltrane, like Lee Krasner (Mrs Jackson Pollock) or Dorothea Tanning (Mrs Max Ernst), was a great artist who spent her life as the wife of a great artist. But she knew the free jazz pioneer John Coltrane for only four years. They met in 1963, married two years later, and by the time he died from liver cancer in 1967 they somehow had three children (they were also raising her daughter from a previous marriage). Following her husband’s death, she suffered a breakdown so extreme that her weight fell to just under 7 stone and she underwent a series of visions – mostly of John – that she interpreted as an ascetic experience. It was only after this that she began to play the harp, the instrument for which she is best known, became a band leader, and released more than 15 solo albums. She was also, for the last 25 years of her life, a cult leader of sorts, in an ashram on the West Coast of the United States. She died in 2007 and a decade later the Sai Anantam Ashram was destroyed by fire.

When thinking about the Coltranes, it is important to know that it wasn’t just music, and it certainly wasn’t just jazz. Eastern spirituality swept many rockstars and jazzers away at the end of the 1960s; even the Beach Boys’ gigs were given over to meditation sessions after their dalliance with the Maharishi Mahesh Yogi. For certain kinds of artists – generally, the brainy ones – combining music and spirituality was the peak of existence. It is a mysterious idea for anyone who can’t play, and doesn’t pray, but it’s essentially the opposite of chasing fame and good reviews.

You can’t have someone write about Alice and John and not buy in to the spiritual side of things. In Cosmic Music, a new biography of Alice Coltrane, Andy Beta has a lyrical sense of the ideological mountains the couple were trying to scale with their work. Beta explores the heady Christian brew that the then-named Alice McLeod was raised on, in her local Detroit church: spirituals from slavery days, 18th-century Calvinist hymns and songs from the Protestant revival – or Second Great Awakening – that swept the United States in the 1850s. She had requested piano lessons by the age of seven. In the story of any woman who made her name in the world of jazz instrumentalists – Carol Kaye, bass player of the Wrecking Crew, is another who comes to mind – there were exceptional beginnings: parents who, for whatever reason, allowed their teenage daughters to play jazz clubs. Alice McLeod moved to Paris in 1959 with her first husband, the jazz vocalist Kenny “Pancho” Hagood, and studied with her favourite bebop pianist, Bud Powell.

Hagood was a heroin addict, though, and McLeod returned to Detroit as a single mother, moving back in with her parents. In 1961, she heard John Coltrane’s Africa/Brass and it crystallised something. While the record confounded critics with its unorthodox big band arrangements, minimal key changes and shrieking sax sound, it was the start of Coltrane’s move into free jazz, which released him from the genre’s established modes, meters and harmonies. It is funny to think that jazz – which seems such a wild kind of music – felt so restrictive to some players in the early 1960s, but it was full of rules. By 1965 John Coltrane was playing atonal, loud and formless: his star pianist, McCoy Tyner, quit his band, later saying, “All I could hear was a lot of noise.” Alice replaced him on piano, and for this – in a parallel world to the Beatles, on the other side of the Pond – she was known as the “Yoko Ono of jazz”.

John Coltrane, like Alice’s first husband, had been a heroin addict, but unlike him, he’d had a spiritual conversion. Alongside the rise of the Nation of Islam, and a renewed interest in Egyptology, he studied the Koran, the Kabbalah, Plato, Buddhism, you name it. Beta sees John’s wife as the catalyst for his growing spirituality: “Without Alice’s own roots in the ecstatic spirit of the Church of God in Christ services and a shared interest in a less dogmatic and more universal understanding of God – to say nothing of their love and devotion to each other – would Coltrane’s own spiritual transformation have occurred?” It is impossible to say, just as it is hard to know what influence she had on his creative output, note by note, but soon after he met her, he made A Love Supreme, his most famous record and the high point of his big, short life. Just as Coltrane wanted to find a universal religion, he wanted a “universal music”: he called it the “New Thing”. When his widow made her solo debut, in Carnegie Hall in April 1968, she billed the show as “Cosmic Music”: there were no reviews of the concert in the New York press, and no recordings remain.

The Carnegie debut was made on the harp, rather than the piano – a tantalising part of the Alice Coltrane story, because no one really knows quite how she learned it. Beta gives the full account of this “Lyle and Healy-style, double-action, hand-gilded, concert-grand, crowned-pedal” instrument and how it came into her possession. Coltrane had ordered it for her as a gift; it took over a year to be made, and it turned up on the doorstep one morning, shortly after his funeral.

For his widow, it was his heavenly presence in her home: why wouldn’t it be? John Coltrane believed he could reveal God through his instrument, and this is the one he wanted his wife to learn. She mastered the vertical hand patterns in their basement studio, after she had put the kids to bed: “I usually practise at night because during the day I’m with the children and I can’t really concentrate,” she said. She did not want to work in clubs, or travel with a band because of the children, she later said; she just wanted to present Coltrane’s music “in the right way”. Beta adds, “This can read like the free jazz equivalent of Ginger Rogers doing everything Fred Astaire did, but backwards and in high heels.”

John Coltrane’s liver cancer was likely the result of his years as an addict. Yet he would not visit the doctor, and he played on through crippling pain. It is a familiar story. His wife did not want to bug him with questions, or get in his way – besides, she was busy with the children. Even when he was diagnosed, he told people that he was going to be fine. Her hallucinations began when he was still alive. She slipped into what, in medical terms, was severe depression and psychosis; the children were looked after by a neighbour. She once burned the flesh off her right hand, as a personal test of endurance. [...]

While reading this book, it struck me that Alice Coltrane sought a God as much as a husband. Sometimes we’re drawn to people in whom we see a creative spirit we already possess on our own. Only with her husband’s death could she lead a solo career: not because he would have stopped her, but because as long as he was alive, she was in his service, by her own choice. With him gone in bodily form, he became an energy – her “true directive energy”, as she called it. It was an energy that had always been inside her.

by Kate Mossman, The New Statesman |  Read more:
Image: Chuck Stewart /@Alicecoltraneofficial
[ed. I was listening to Alice the other night and thinking I needed to post some of her music here. I'm sure I will soon.]

Monday, March 16, 2026

On Adversarial Capitalism

I’ve lately been writing a series on modern capitalism. You can read these other blog posts for additional musings on the topic:
We are now in a period of capitalism that I call adversarial capitalism. By this I mean: market interactions increasingly feel like traps. You’re not just buying a product—you’re entering a hostile game rigged to extract as much value from you as possible.

A few experiences you may relate to:
  • I bought a banana from the store. I was prompted to tip 20, 25, or 30% on my purchase.
  • I went to get a haircut. Booking online cost $6 more and also asked me to prepay my tip. [Would I get worse service if I didn’t tip in advance…?]
  • I went to a jazz club. Despite already buying an expensive ticket, I was told I needed to order at least $20 of food or drink—and literally handing them a $20 bill wouldn’t count, as it didn’t include tip or tax.
  • I looked into buying a new Garmin watch, only to be told by Garmin fans I should avoid the brand now—they recently introduced a subscription model. For now, the good features are still included with the watch purchase, but soon enough, those will be behind the paywall.
  • I bought a plane ticket and had to avoid clicking on eight different things that wanted to overcharge me. I couldn’t sit beside my girlfriend without paying a large seat selection fee. No food, no baggage included.
  • I realized that the bike GPS I bought four years ago no longer gives turn-by-turn directions because it’s no longer compatible with the mapping software.
  • I had to buy a new computer because the battery in mine wasn’t replaceable and had worn down.
  • I rented a car and couldn’t avoid paying an exorbitant toll-processing fee. They gave me the car with what looked like 55% of a tank. If I returned it with less, I’d be charged a huge fee. If I returned it with more, I’d be giving them free gas. It’s difficult to return it with the same amount, given you need to drive from the gas station to the drop-off and there’s no precise way to measure it.
  • I bought tickets to a concert the moment they went on sale, only for the “face value” price to go down 50% one month later – because the tickets were dynamically priced.
  • I used an Uber gift card, and once it was applied to my account, my Uber prices were higher.
  • I went to a highly rated restaurant (per Google Maps) and thought it wasn’t very good. When I went to pay, I was told they’d reduce my bill by 25% if I left a 5-star Google Maps review before leaving. I now understand the reviews.
Adversarial capitalism is when most transactions feel like an assault on your will. Nearly everything entices you with a low upfront price, then uses every possible trick to extract more from you before the transaction ends. Systems are designed to exploit your cognitive limitations, time constraints, and moments of inattention.

It’s not just about hidden fees. It’s that each additional fee often feels unreasonable. The rental company doesn’t just charge more for gas, they punish you for not refueling, at an exorbitant rate. They want you to skip the gas, because that’s how they make money. The “service fee” for buying a concert ticket online is wildly higher than a service fee ought to be.

The reason adversarial capitalism exists is simple.

Businesses are ruthlessly efficient and want to grow. Humans are incredibly price-sensitive. If one business avoids hidden fees, it’s outcompeted by another that offers a lower upfront cost, with more adversarial fees later. This exploits the gap between consumers’ sensitivity to headline prices and their awareness of total cost. Once one firm in a market adopts this pricing model, others are pressured to follow. It becomes a race to the bottom of the price tag, and a race to the top of the hidden fees.

The thing is: once businesses learn the techniques of adversarial capitalism and it gets accepted by consumers, there is no going back — it is a super weapon that is too powerful to ignore once discovered.

by Daniel Frank, Frankly Speaking |  Read more:

Friday, March 13, 2026

A Constitution For Amanda

[ed. The principal author of Anthropic's (Claude's) 'soul' document or internal constitution, Amanda Askell: "I asked Claude to write my constitution. I thought its Amanda constitution was very touching."


via: X

Wednesday, March 4, 2026

The Real Story Behind ‘Zen and the Art of Motorcycle Maintenance’

A Korean War veteran is floundering. His career is an endless bumpy road, and includes work as a teacher, a technical writer for Honeywell, and even a Nevada casino employee. But our ambitious vet also studies philosophy at the Banaras Hindu University in India—and starts to develop his own philosophy of life, an unconventional merging of Eastern and Western currents.

Then comes a mental breakdown that sends him to a psychiatric hospital. Here he undergoes repeated electroshock therapy. He finally emerges a changed person.

But maybe he changed too much—he can hardly remember the person he once was. It’s almost as if his life got cleaved in two at this juncture. His wife leaves him. He holds on to his relationship with his son—but that ends tragically with the son’s murder in San Francisco at age 22.

While working for Honeywell, our aspiring philosopher stays awake from 2 AM to 6 AM in a small apartment above a shoe store in Minneapolis. Here he writes a novel destined to become one of the defining books of the era. But he has to pitch it to 121 editors before he gets a contract and a $3,000 advance.


The editor, J.D. Landis, admitted that he only accepted the novel because this “book forced him to decide what he was in publishing for.” But the author, he insisted, shouldn’t expect to make more than his tiny advance. Then Landis added: “Money isn’t the point with a book like this.”

That’s the story of how Robert Pirsig published of Zen and the Art of Motorcycle Maintenance. But the editor was wrong. The book sold 5 million copies, and for a spell in the 1970s you would see copies everywhere, even in the hands of people who didn’t read novels.

And that was just the start. Robert Redford tried to buy movie rights, but the author said no. Highbrow literary critic George Steiner compared Pirsig to Dostoevsky—which is especially meaningful when you know that Steiner wrote a book on Dostoevsky. The Smithsonian acquired the titular motorcycle for its permanent collection.

The book is simple enough to describe. It tells the story of a 17-day motorcycle trip from Minnesota to California. Along the way, the narrator tries to figure out many things—but especially his own past before his life split in two.

At one point in the novel, Pirsig writes:
“Before the electrodes were attached to his head he’d lost everything tangible: money, property, children; even his rights as a citizen had been taken away from him by order of the court….I will never know all that was in his head at that time, nor will anyone else. What’s left now is just fragments: debris, scattered notes, which can be pieced together but which leave huge areas unexplained.”
The electroshock treatment was done without Pirsig’s consent. That would be illegal nowadays.

In the aftermath, Pirsig felt so disconnected from his past that he included his pre-treatment self as a separate character in the novel. He calls that abandoned part of himself Phaedrus, a name drawn from Plato’s dialogues.

So you can read Zen and the Art of Motorcycle Maintenance as a dialogue between a man and his past self. Or you can treat it as a travel story or as a philosophical discussion (what Pirsig describes as a chautauqua, a name drawn from a populist adult education movement of the late 1800s). And, yes, it’s also a guide to motorcycle maintenance.

The text actually moves back and forth between all of these. Few novels pay less attention to the rules of fiction than Zen and the Art of Motorcycle Maintenance. For that reason, it just might be the strangest travel book ever written—because most of the journey happens inside the narrator’s head.

But maybe that’s part of the story too. Pirsig worked as a college writing teacher, and was frustrated by the rules he was expected to impart to his students. He felt that good writing was indefinable. It violated accepted rules, and created its own. The whole process was mysterious.

Solving that mystery of Quality—also called goodness, excellence, or worth—is the main theme of the novel. Indeed, it’s the overarching theme of Pirsig’s entire life’s work. He wrote one more novel after Zen and the Art of Motorcycle Maintenance, the seldom read Lila, and it continues the discussion on quality. And the same topic takes center stage in the posthumous collection of writings published under the title On Quality: An Inquiry into Excellence. [...]

But let’s be honest: Pirsig was a better mystic than philosopher, and the deeper Pirsig digs into his personal notion of Quality, the more interesting—and metaphysical—his thinking becomes. Quality, he insists, can never be defined. He eventually embraces it as a kind of Tao, a force underlying all our experiences—hence resisting empirical analysis. He is now leaving philosophy behind, and perhaps for the better.

So he eventually aligns himself with a profound idea drawn from the ancient Greeks—but not the philosophers. Instead he goes back to the Homeric mythos, five hundred years older than rational philosophy, and discoveres the source of his Quality in the Greek concept of aretḗ, or excellence (sometimes translated as virtue). Aretḗ, Pirsig believes, is more powerful than Aristotelian logic, and closer in spirit to the Hindu dharma.

He quotes a passage from classicist H.D.F. Kitto, which I want to share in its entirety—not only because it is essential to Pirsig’s worldview, but because it’s invaluable to us today. Many are struggling to understand a place for humans in a world of AI and super-smart machines. From a purely rational perspective, the robots can beat us in terms of data generation and analysis. But in a world of aretḗ (or Quality), they fall far short.

This is where Pirsig earns my admiration and loyalty. Some things really are more powerful than logic.

Back in 1952 Kitto anticipated Zen and the Art of Motorcycle Maintenance—and provided the missing piece to Pirsig’s worldview—when he wrote:
[If aretḗ refers to a person] it will connote excellence in the ways in which a man can be excellent—morally, intellectually, physically, practically. Thus the hero of the Odyssey is a great fighter, a wily schemer, a ready speaker, a man of stout heart and broad wisdom who knows that he must endure without too much complaining what the gods send; and he can both build and sail a boat, drive a furrow as straight as anyone, beat a young braggart at throwing the discus, challenge the Phaeacian youth at boxing, wrestling or running; flay, skin, cut up and cook an ox, and be moved to tears by a song. He is in fact an excellent all-rounder; he has surpassing arête.
Aretḗ implies a respect for the wholeness or oneness of life, and a consequent dislike of specialization. It implies a contempt for efficiency...or rather a much higher idea of efficiency, an efficiency which exists not in one department of life but in life itself.
We are now at the heart of Zen and the Art of Motorcycle Maintenance. If you read Kitto, you are already prepared for Pirsig—maybe you can even skip the novel. But, much better, you have a game plan for living a human life in the face of encroaching machines.

Pirsig understood this more than fifty years ago. He saw that we made a Faustian bargain when we put rationality ahead of the Good, and data ahead of human excellence. He grasped that science should be subservient to human needs, not the other way around. And the price we’re paying now is much higher than it was back then.

In an extraordinary passage, the narrator of Pirsig’s novel picks up a copy the Tao Te Ching, and recites it aloud—but substituting the word Quality for Tao. This is strange and unprecedented, but hits at the heart of this mystic work from the fourth century BC:
The quality that can be defined is not the Absolute Quality….
The names that can be given it are not Absolute names.
It is the origin of heaven and earth.
When named it is the mother of all things….
He declares: “Quality is the Buddha. Quality is scientific reality. Quality is the goal of Art.”

I worked with many quality control engineers in the business world and often walked with them on the factory floor. I’m sure they would be shocked by Pirsig’s statement that “Quality is the Buddha.” But that’s exactly the kind of journey we’re on in this book.

by Ted Gioia, The Honest Broker |  Read more:
Image: Heritage Preservation Department - MNHS; uncredited book cover

Tuesday, February 24, 2026

Child’s Play

Tech’s new generation and the end of thinking

The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.

This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: TODAY, SOC 2 IS DONE BEFORE YOUR GIRLFRIEND BREAKS UP WITH YOU. IT'S DONE IN DELVE. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: NO ONE CARES ABOUT YOUR PRODUCT. MAKE THEM. UNIFY: TRANSFORM GROWTH INTO A SCIENCE. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read WEARABLE TECH SHAREABLE INSIGHTS did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to PROMPT IT. THEN PUSH IT. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?

Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:
HI MY NAME IS ROY
I GOT KICKED OUT OF SCHOOL FOR CHEATING 
BUY MY CHEATING TOOL
CLUELY.COM
Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.

What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.

by Sam Kriss, Harper's |  Read more:
Image: Max Guther
[ed. Seems like we're already creating artificial humans. That said, I have only the highest regard for Scott Alexander, one of the people profiled here. The article makes him sound like some kind of cult leader or something (he's a psychologist), but he's really just a smart guy with a wide range of interests that intelligent people gravitate to (also a great writer). Here's his response  on his website ACX:]
***
I agreed to be included, it’s basically fine, I’m not objecting to it, but a few small issues, mostly quibbles with emphasis rather than fact:
1. The piece says rationalists believe “that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch”. The Harper’s fact-checker asked me if this was true and I emphatically said it wasn’t, so I’m not sure what’s going on here.

2. The article describes me having dinner with my “acolytes”. I would have used the word “friends”, or, in one case, “wife”.

3. The article says that “When there weren’t enough crackers to go with the cheese spread, [Scott] fetched some, murmuring to himself, “I will open the crackers so you will have crackers and be happy.”” As written, this makes me sound like a crazy person; I don’t remember this incident but, given the description, I’m almost sure I was saying it to my two year old child, which would have been helpful context in reassuring readers about my mental state. (UPDATE: Sam says this isn’t his memory of the incident, ¯\_(ツ)_/¯ )

4. The article assessed that AI was hitting a wall at the time of writing (September 2025). I explained some of the difficulties with AI agents, but I’m worried that as written it might suggest to readers think that I agreed with its assessment. I did not.

5. In the article, I say that I “never once actually made a decision [in my life]”. I don’t remember this conversation perfectly and he’s the one with the tape recorder, but I would have preferred to frame this as life mostly not presenting as a series of explicit decisions, although they do occasionally come up.

6. Everything else is in principle a fair representation of what I said, but it’s impossible to communicate clearly through a few sentences that get quoted in disjointed fragments, so a lot of things came off as unsubtle or not exactly how I meant them. If you have any questions, I can explain further in the comments.

Sunday, February 22, 2026

Embryo Selection Company Herasight Goes All In On Eugenics

Multiple commercial companies are now offering polygenic embryo selection on a wide range of traits, including genetic predictors of behavior and IQ. I’ve previously written about the methodological unknowns around this technology but I haven’t commented on the ethics. I think having a child is a very personal decision and it’s not my place to tell people how to do it. But the new embryo selection company, Herasight, has started advocating for eugenic societal norms that I find disturbing and worth raising alarm over. Because this is a fraught topic, I’ll start with some basic definitions.

What is eugenics?

Eugenics is an ideology that advocates for conditioning reproductive rights on the perceived genetic quality of the parents. Francis Galton, the father of eugenics, declared that eugenics’ “first object is to check the birth-rate of the Unfit, instead of allowing them to come into being”. This goal was to be achieved through social stigma and, if necessary, by force. The Eugenics Education Society, for instance, advocated for education, segregation, and — “perhaps” — compulsory sterilization to prevent the “unfit and degenerate” from reproducing:

A core component of defining “the unfit” was heredity. Eugenicists are not just interested in improving people’s phenotypes — a goal that is widely shared by modern society — but the future genotypic distribution. The genetic stock. This is why eugenic policies historically focus on sterilization, including the sterilization of unaffected relatives who harbor genotype but not phenotype. If someone commits a crime, they face time in prison for their actions, but under eugenic reasoning their law-abiding sibling or child is also suspect and should be stigmatized (or forcefully prevented) from passing on deficient genetic material.

A simple two-part test for eugenics is then: (1) Is it concerned with the future genetic stock? (2) Is it advocating for restricted reproduction, either through stigma or force, for those deemed genetically inferior?

Is embryo selection eugenics?

I have publicly resisted applying the “eugenics” label to embryo selection writ large and I continue to do so. Embryo selection is a tool and its use is morally complex. A couple can choose to have embryo screening for a variety of reasons ranging from frivolous (“we want to have a blue eyed baby”) to widely supported (“we carry a recessive mutation that would be fatal in our baby”), none of which have eugenic intent. Embryo selection can even be an anti-eugenic tool, as in the case of high-risk couples who have already decided against having children. If embryo selection technology allows them to lower the risk to a comfortable level and have a child they would otherwise have avoided, then the outcome is literally the opposite of eugenic selection: “unfit” individuals (at least as they see themselves) now have an incentive to produce more offspring than they would have. In practice, IVF remains a physically and emotionally demanding procedure, and my guess is that individual eugenic intentions — the desire to select out unfit embryos with the specific motivation of improving the “genetic stock” of the population — are exceedingly rare.

Is Herasight advocating for eugenics?


While I do not think embryo selection is eugenic in itself, like any reproductive technology, it can be wielded for eugenic purposes. The new embryo selection company Herasight, in my opinion, is advocating for exactly that. To understand why, it is useful to first understand the theories put forth by Herasight’s director of scientific research and communication Jonathan Anomaly (in case you’re wondering, that is a chosen last name). Anomaly is a self-proclaimed eugenicist [Update: Anomaly has clarified that this description was not provided by him and he requested that it be removed]:

Prior to joining Herasight, Anomaly wrote extensively on the ethics of embryo selection, notably in a 2018 article titled “Defending eugenics”. How does Anomaly defend eugenics? First, he reiterates the classic position that eugenics is a resistance to the uncontrolled reproduction of the “unfit” (emphasis mine, throughout):
Darwin argued that social welfare programs for the poor and sick are a natural expression of our sympathy, but also a danger to future populations if they encourage people with serious congenital diseases and heritable traits like low levels of impulse control, intelligence, or empathy to reproduce at higher rates than other people in the population. Darwin feared that in developed nations “the reckless, degraded, and often vicious members of society, tend to increase at a quicker rate than the provident and generally virtuous members”
Anomaly goes on to sympathize with Darwin’s position and that of the classic eugenicists, arguing that “While Darwin’s language is shocking to contemporary readers, we should take him seriously”, later that “there is increasingly good evidence that Darwin was right to worry about demographic trends in developed countries”, and that we should “stop allowing [the Holocaust] to silence any discussion of the merits of eugenic thinking”.

Anomaly then proposes several potential eugenic interventions, one of which is a “parental licensing” scheme that prevents unfit parents from having children:
The typical response is for the state to step in and pay for all of these things, and in extreme cases to remove children from their parents and put them in foster care. But it would be more cost-effective to prevent unwanted pregnancies than treating their consequences, especially if we could achieve this goal by subsidizing the voluntary use of contraception. It may also be more desirable from the standpoint of future people.
The phrase “future people” figures repeatedly in Anomaly’s writing as a euphemism for the more conventional eugenic concept of genetic stock. This connection is made explicit when he explains the most compelling reason for supporting parental licensing:
The most compelling reason (though certainly not a decisive reason) for supporting parental licensing is that traits like impulse control, health, intelligence, and empathy have significant genetic components. What matters is not just that some parents are unwilling or unable to take care of their children; but that in many cases they are passing along an undesirable genetic endowment.
What are we really talking about here? Anomaly has proposed a technocratic rebranding of eugenic sterilization: instead of taking away your reproductive rights clinically, the state will take away your reproductive license and, if you still have children, impose “fines or other costs” (though Anomaly does not make the “other costs” explicit, eugenic sterilization is mentioned as an example in the very next sentence). How would the state decide who should lose their license? Anomaly explains:
For a parental licensing scheme to be fair, we would need to devise criteria that are effective at screening out only parents who impose significant risks of harm on their children or (through their children) on other people.
A fundamental normative principle of our society is that all members are created equal and endowed with unalienable rights. What Anomaly envisions instead is a society where the state can seize one of the most intimate of human freedoms — the right to become a parent — based on innate factors. How does the state determine whether a future child imposes significant risk on future people? By inspecting the biological makeup of the parents and identifying “undesirable genetic endowments” that will harm others “through their children”. This is a policy built explicitly on genetic desirability and undesirability, where those deemed genetically unfit are stripped of their rights to have children and/or fined for doing so — aka bog-standard coercive eugenics.

Today, Anomaly is the spokesperson for a company that screens parents for “undesirable genetic endowments” and, for a price, promises to boost their genetic desirability and their value to future people. It is easy to see how Herasight fits directly into the eugenic parental licensing scheme Anomaly proposed. Having an open eugenicist as the spokesperson for an embryo selection company seems, to me, akin to hiring Hannibal Lecter to do PR for a hospital, but perhaps Anomaly has radically changed his views since billing himself as a eugenicist in 2023?

Herasight (with Anomaly as first author) recently published a perspective white-paper on the ethics polygenic selection, from which we can glean their corporate position. The perspective outlines the potential benefits and harms of embryo selection. The very first positive benefit listed? The “benefits to future people”. While this section starts with a focus the welfare of individual children, it ends with the same societal motivations as classical eugenics: the social costs of the unfit on communities and the benefits of the fit to scientific innovation and the public good: [...]

When eugenics goes mainstream

Let’s review: eugenics has as a goal of limiting the birthrate of the “unfit” or “undesirable” for the benefit of the group. Anomaly describes himself as a eugenicist and explicitly echoes this goal through, among other policies, a parental licensing proposal. Anomaly now runs a genetic screening company. The company recently published a perspective paper advocating for the stigmatization of “unfit” parents who do not screen. Anomaly, as spokesperson, reiterates that their goal is indeed eugenics — “Yes, and it’s great!”. With any other person one could argue that they were clueless or trolling; but if anyone knows what eugenics means, it is a person who has spent the past decade defending it.

I have to say I am floored by how strange this all is. My personal take on embryo selection has been decidedly neutral. I think the expected gains are limited by the genetic architecture of the traits being scored and the companies are mostly fudging the numbers to look good. As noted above, I also think a common use of this technology will be to calm the nerves of parents who otherwise would have gone childless. So I have no actual concerns about changes to the genetic make-up of the population or genetic inequality or any of the other utopian/dystopian predictions. But I am concerned that the marketing around the technology revives and normalizes classic eugenic arguments: that society is divided into the genetically fit and the genetically unfit, and the latter need to be stigmatized away from parenthood for the benefit of the former. I am particularly disturbed by the giddiness with which Anomaly and Herasight have repeatedly courted eugenics-related controversy as part of their launch campaign.

Even stranger has been the response, or rather non-response, from the genetics community. Social science geneticists and organizations spent the past decade writing FAQs warning against the use of their methods and data for individual prediction and against genetic essentialism. Many conference presentations and seminars start with a section on the sordid history of eugenics and the sterilization programs in the US and Nazi Germany, vowing not to repeat the mistakes of the past. Now, a company is openly advocating for eugenics (in fact, a company with direct connections to these social science organizations) and these organizations are silent. It is hard not to conclude that the FAQs and warnings were just lip service. And if the experts aren’t raising alarms, why would the public be alarmed?

by Sasha Gusev, The Infinitesimal |  Read more:
Image: Anselm Kiefer, Die Ungeborenen (The Unborn), 2002
[ed. With neophyte Nazis seemingly everywhere these days, CRISPR advances, and technocrats who want to live forever, it's perhaps not surprising that eugenics would be making a comeback. Update: Jonathan Anomaly, director of scientific research and communication for Herasight and whose articles I criticize here, responds in a detailed comment. I recommend reading his response together with this post. Anomaly’s role in the company has also been clarified. See also: Have we leapt into commercial genetic testing without understanding it? (Ars Technica).]

Tuesday, February 17, 2026

The Crisis, No. 5: On the Hollowing of Apple

[ed. No.5 of 17 Crisis Papers.]

I never met Steve Jobs. But I know him—or I know him as well as anyone can know a man through the historical record. I have read every book written about him. I have read everything the man said publicly. I have spoken to people who knew him, who worked with him, who loved him and were hurt by him.

And I think Steve would be disgusted by what has become of his company.

This is not hagiography. Jobs was not a saint. He was cruel to people who loved him. He denied paternity of his daughter for years. He drove employees to breakdowns. He was vain, tyrannical, and capable of extraordinary pettiness. I am not unaware of his failings, of the terrible way he treated people needlessly along the way.

But he had a conscience. He moved, later in life, to repair the damage he had done. The reconciliation with his daughter Lisa was part of a broader moral development—a man who had hurt people learning, slowly, how to stop. He examined himself. He made changes. He was not a perfect man. But he had heart. He had morals. And he was willing to admit when he was wrong.

That is a lot more than can be said for this lot of corporate leaders.

It is this Steve Jobs—the morally serious man underneath the mythology—who would be so angry at what Tim Cook has made of Apple.

Steve Jobs understood money as instrumental.

I know this sounds like a distinction without a difference. The man built the most valuable company in the world. He died a billionaire many times over. He negotiated hard, fought for his compensation, wanted Apple to be profitable. He was not indifferent to money.

But he never treated money as the goal. Money was what let him make the things he wanted to make. It was freedom—the freedom to say no to investors, to kill products that weren’t good enough, to spend years on details that no spreadsheet could justify. Money was the instrument. The thing it purchased was the ability to do what he believed was right.

This is how he acted.

Jobs got fired from his own company because he refused to compromise his vision for what the board considered financial prudence. He spent years in the wilderness, building NeXT—a company that made beautiful machines almost no one bought—because he believed in what he was making. He acquired Pixar when it was bleeding cash and kept it alive through sheer stubbornness until it revolutionized animation.

When he returned to Apple, he killed products that were profitable because they were mediocre. He could have milked the existing lines, played it safe, optimized for margin. Instead, he burned it down and rebuilt from scratch. The iMac. The iPod. The iPhone. Each one a bet that could have destroyed the company. Each one made because he believed it was right, not because a spreadsheet said it was safe...

This essay is not really about Steve Jobs or Tim Cook. It is about what happens when efficiency becomes a substitute for freedom. Jobs and Cook are case studies in a larger question: can a company—can an economy—optimize its way out of moral responsibility? The answer, I will argue, is yes. And we are living with the consequences.

Jobs understood something that most technology executives do not: culture matters more than politics.

He did not tweet. He did not issue press releases about social issues. He did not perform his values for an audience. He was not interested in shibboleths of the left or the right. [...]

This is how Jobs approached politics: through art, film, music, and design. Through the quiet curation of what got made. Through the understanding that the products we live with shape who we become.

If Jobs were alive today, I do not believe he would be posting on Twitter about fascism. That was never his mode. [...]

Tim Cook is a supply chain manager.

I do not say this as an insult. It is simply what he is. It is what he was hired to be. When Jobs brought Cook to Apple in 1998, he brought him to fix operations—to make the trains run on time, to optimize inventory, to build the manufacturing relationships that would let Apple scale.

Cook was extraordinary at this job. He is, by all accounts, one of the greatest operations executives in the history of American business. The margins, the logistics, the global supply chain that can produce millions of iPhones in weeks—that is Cook’s cathedral. He built it.

But operations is not vision. Optimization is not creation. And a supply chain manager who inherits a visionary’s company is not thereby transformed into a visionary.

Under Cook, Apple has become very good at making more of what Jobs created. The iPhone gets better cameras, faster chips, new colors. The ecosystem tightens. The services revenue grows. The stock price rises. By every metric that Wall Street cares about, Cook has been a success.

But what has Apple created under Cook that Jobs did not originate? What new thing has emerged from Cupertino that reflects a vision of the future, rather than an optimization of the past?

The Vision Pro is an expensive curiosity. The car project was canceled after a decade of drift. The television set never materialized. Apple under Cook has become a company that perfects what exists rather than inventing what doesn’t.

This is what happens when an optimizer inherits a creator’s legacy. The cathedral still stands. But no one is building new rooms.

There is a deeper problem than the absence of vision. Tim Cook has built an Apple that cannot act with moral freedom.

The supply chain that Cook constructed—his great achievement, his life’s work—runs through China. Not partially. Not incidentally. Fundamentally. The factories that build Apple‘s products are in China. The engineers who refine the manufacturing processes are in China. The workers who assemble the devices, who test the components, who pack the boxes—they are in Shenzhen and Zhengzhou and a dozen other cities that most Americans cannot find on a map.

This was a choice. It was Cook’s choice. And once made, it ceased to be a choice at all. Supply chains, like empires, do not forgive hesitation. For twenty years, it looked like genius. Chinese manufacturing was cheap, fast, and scalable. Apple could design in California and build in China, and the margins were extraordinary.

But dependency is not partnership. And Cook built a dependency so complete that Apple cannot escape it.

When Hong Kong’s democracy movement rose, Apple was silent. When the Uyghur genocide became undeniable, Apple was silent. When Beijing pressured Apple to remove apps, to store Chinese user data on Chinese servers, to make the iPhone a tool of state surveillance for Chinese citizens—Apple complied. Silently. Efficiently. As Cook’s supply chain required.

This is not a company that can stand up to authoritarianism. This is a company that has made itself a instrument of authoritarianism, because the alternative is losing access to the factories that build its products.

There is something worse than the dependency. There is what Cook gave away.

Apple did not merely use Chinese manufacturing. Apple trained it. Cook’s operations team—the best in the world—went to China and taught Chinese companies how to do what Apple does. The manufacturing techniques. The materials science. The logistics systems. The quality control processes.

This was the price of access. This was what China demanded in exchange for letting Apple build its empire in Shenzhen. And Cook paid it.

Now look at the result.

BYD, the Chinese electric vehicle company, learned battery manufacturing and supply chain management from its work with Apple. It is now the largest EV manufacturer in the world, threatening Tesla and every Western automaker.

DJI dominates the global drone market with technology and manufacturing processes refined through the Apple relationship.

Dozens of other Chinese companies—in components, in assembly, in materials—were trained by Apple‘s experts and now compete against Western firms with the skills Apple taught them.

Cook built a supply chain. And in building it, he handed the Chinese Communist Party the industrial capabilities it needed to challenge American technological supremacy. [...]

So when I see Tim Cook at Donald Trump’s inauguration, I understand what I am seeing.

When I see him at the White House on January 25th, 2026—attending a private screening of Melania, a vanity documentary about the First Lady, directed by Brett Ratner, a man credibly accused of sexual misconduct by multiple women—I understand what I am seeing.

I understand what I am seeing when I learn that this screening took place on the same night that federal agents shot Alex Pretti ten times in the back in Minneapolis. That while a nurse lay dying in the street for the crime of trying to help a woman being pepper-sprayed, Tim Cook was eating canapés and watching a film about the president’s wife.

Tim Cook’s Twitter bio contains a quote from Martin Luther King Jr.: “Life’s most persistent and urgent question is, ‘What are you doing for others?’”

What was Tim Cook doing for others on the night of January 25th?

He was doing what efficiency requires. He was maintaining relationships with power. He was protecting the supply chain, the margins, the tariff exemptions. He was being a good middleman.

I am seeing a man who cannot say no.

This is what efficiency looks like when it runs out of room to hide.

He cannot say no to Beijing, because his supply chain depends on Beijing’s favor. He cannot say no to Trump, because his company needs regulatory forbearance and tariff exemptions. He is trapped between two authoritarian powers, serving both, challenging neither.

This is not leadership. This is middleman management. This is a man whose great achievement—the supply chain, the operations excellence, the margins—has become the very thing that prevents him from acting with moral courage.

Cook has more money than Jobs ever had. Apple has more cash, more leverage, more market power than at any point in its history. If anyone in American business could afford to say no—to Trump, to Xi, to anyone—it is Tim Cook.

And he says yes. To everyone. To anything. Because he built a company that cannot afford to say no. [...]

I believe that Steve Jobs built Apple to be something more than a company. He built it to be a statement about what technology could be—beautiful, humane, built for people rather than against them. He believed that the things we make reflect who we are. He believed that how we make them matters.

Tim Cook has betrayed that vision—not through malice, but by excelling in a system that rewards efficiency over freedom and calls it leadership. Through the replacement of values with optimization. Through the construction of a machine so efficient that it cannot afford to be moral.

Apple is not unique in this. It is exemplary.

This is what happens to institutions that mistake scale for strength, efficiency for freedom, optimization for wisdom. They become powerful enough to dominate markets—and too constrained to resist power. Look at Google, training AI for Beijing while preaching openness. Look at Amazon, building surveillance infrastructure for any government that pays. Look at every Fortune 500 company that issued statements about democracy while writing checks to the politicians dismantling it.

Apple is simply the cleanest case, because it once knew the difference. Because Jobs built it to know the difference. And because we can see, with unusual clarity, the precise moment when knowing the difference stopped mattering.

by Mike Brock, Notes From the Circus |  Read more:
Image: Steve Jobs/uncredited
[ed. Part seventeen of a series titled The Crisis Papers. Check them all out and jump in anywhere. A+ effort.]