Friday, February 13, 2026

The Anthropic Hive Mind

As you’ve probably noticed, something is happening over at Anthropic. They are a spaceship that is beginning to take off.

This whole post is just spidey-sense stuff. Don’t read too much into it. Just hunches. Vibes, really.

If you run some back-of-the-envelope math on how hard it is to get into Anthropic as an industry professional, and compare it to a high school or college player’s odds of making it into the National Football League, you’ll find the odds are comparable. Everyone I’ve met from Anthropic is the best of the best of the best, to an even crazier degree than Google was at its peak. (Evidence: Google hired me. I was the scrapest of the byest.)

Everyone is gravitating there, and I’ve seen this movie before, a few times.

I’ve been privileged to have some long, relatively frank conversations with nearly 40 people at Anthropic in the past four months, from cofounders and execs, to whole teams, to individuals from departments across the company: AI research, Engineering, GTM, Sales, Editorial, Product and more. And I’ve also got a fair number of friends there, from past gigs together.

Anthropic is unusually impenetrable as a company. Employees there all know they just need to keep their mouths shut and heads down and they’ll be billionaires and beyond, so they have lots of incentive to do exactly that. It’s tricky to get them to open up, even when they do chat with you.

But I managed. People usually figure out I’m harmless within about 14 seconds of meeting me. I have developed, in my wizened old age, a curious ability to make people feel good, no matter who they are, with just a little conversation, making us both feel good in the process. (You probably have this ability too, and just don’t know how to use it yet.)

By talking to enough of them, and getting their perspectives in long conversations, I have begun to suspect that the future of software development is the Hive Mind.

Happy But Sad

To get a proper picture of Anthropic at this moment, you have to be Claude Monet, and paint it impressionistically, a big broad stroke at a time. Each section in this post is a stroke, and this one is all about the mood.

To me it seems that almost everyone there is vibrantly happy. It has the same crackle of electricity in the air that Amazon had back in 1998. But that was back in the days before Upton Sinclair and, quote, “HR”, so the crackle was mostly from faulty wiring in the bar on the first floor of the building.

But at both early Amazon and Anthropic, everyone knew something amazing was about to happen that would change society forever. (And also that whatever was coming would be extremely Aladeen for society.)

At Anthropic every single person and team I met, without exception, feels kind of sweetly but sadly transcendent. They have a distinct feel of a group of people who are tasked with shepherding something of civilization-level importance into existence, and while they’re excited, they all also have a solemn kind of elvish old-world-fading-away gravity. I can’t quite put my finger on it.

But I am starting to suspect they feel genuinely sorry for a lot of companies. Because we’re not taking this stuff seriously enough. 2026 is going to be a year that just about breaks a lot of companies, and many don’t see it coming. Anthropic is trying to warn everyone, and it’s like yelling about an offshore earthquake to villages that haven’t seen a tidal wave in a century.

by Steve Yegge, Medium |  Read more:
Image: uncredited
[ed. See also: Anthropic’s Chief on A.I.: ‘We Don’t Know if the Models Are Conscious’ (NYT); and Machines of Loving Grace (Anthropic - Dario Amodei)]
***
Amodei: I actually think this whole idea of constitutional rights and liberty along many different dimensions can be undermined by A.I. if we don’t update these protections appropriately.

Think about the Fourth Amendment. It is not illegal to put cameras around everywhere in public space and record every conversation. It’s a public space — you don’t have a right to privacy in a public space. But until today, the government couldn’t record all that and make sense of it.

With A.I., the ability to transcribe speech, to look through it, correlate it all, you could say: This person is a member of the opposition. This person is expressing this view — and make a map of all 100 million. And so are you going to make a mockery of the Fourth Amendment by the technology finding technical ways around it?

Again, if we have the time — and we should try to do this even if we don’t have the time — is there some way of reconceptualizing constitutional rights and liberties in the age of A.I.? Maybe we don’t need to write a new Constitution, but ——

Douthat: But you have to do this very fast.

Amodei: Do we expand the meaning of the Fourth Amendment? Do we expand the meaning of the First Amendment?

Douthat: And just as the legal profession or software engineers have to update in a rapid amount of time, politics has to update in a rapid amount of time. That seems hard.

Amodei: That’s the dilemma of all of this.