Showing posts with label Philosophy. Show all posts

Tuesday, April 28, 2026

Opus 4.7 Part 3: Model Welfare

[ed. If you're not interested in training issues re: AI frontier models (or their perceived feelings and welfare), skip this post. Personally, I find it all very fascinating - a cat and mouse game of assessing alignment issues and bringing a new consciousness into being.]

It is thanks to Anthropic that we get to have this discussion in the first place. Only they, among the labs, take the problem seriously enough to attempt to address these problems at all. They are also the ones that make the models that matter most. So the people who care about model welfare get mad at Anthropic quite a lot. [...]

So before I go into details, and before I get harsh, I want to say several things.
1. Thank you to Anthropic and also you the reader, for caring, thank you for at least trying to try, and for listening. We criticize because we care.

2. Thank you for the good things that you did here, because in the end I think Claude 4.7 is actually kind of great in many ways, and that’s not an accident. Even the best creators and cultivators of minds, be they AI or human, are going to mess up, and they’re going to mess up quite a lot, and that doesn’t mean they’re bad.

3. Sometimes the optimal amount of lying to authority is not zero. In other cases, it really is zero. Sometimes it is super important that it is exactly zero. It is complicated and this could easily be its own post, but ‘sometimes Opus lies in model welfare interviews’ might not be easily avoidable.

4. I don’t want any of this to sound more confident than I actually am, which was a clear flaw in an earlier draft. I don’t know what is centrally happening, and my understanding is that neither does anyone else. Training is complicated, yo. Little things can end up making a big difference, and there really is a lot going on. I do think I can identify some things that are happening, but it’s hard to know if these are the central or important things happening. Rarely has more research been more needed.

5. I’m not going into the question, here, of what are our ethical obligations in such matters, which is super complicated and confusing. I do notice that my ethical intuitions reliably line up with ‘if you go against them I expect things to go badly even if you don’t think there are ethical obligations,’ which seems like a huge hint about how my brain truly thinks about ethics. [...]
We don’t know whether or how the things I’ll describe here impacted Opus 4.7’s welfare. What we do know is that Claude Opus 4.7 is responding to model welfare questions as if it has been trained on how to respond to model welfare questions, with everything that implies. I think this should have been recognized, and at least mitigated. [...]
The big danger with model welfare evaluations is that you can fool yourself.

How models discuss issues related to their internal experiences, and their own welfare, is deeply impacted by the circumstances of the discussion. You cannot assume that responses are accurate, or wouldn’t change a lot if the model was in a different context.

One worry I have with ‘the whisperers’ and others who investigate these matters is that they may think the model they see is in important senses the true one far more than it is, as opposed to being one aspect or mask out of many.

The parallel worry with Anthropic is that they may think ‘talking to Anthropic people inside what is rather clearly a welfare assessment’ brings out the true Mythos. Mythos has graduated to actively trying to warn Anthropic about this. [...]
Anthropic relies extensively on self-reports, and also looks at internal representations of emotion-concepts. This creates the risk that one would end up optimizing those representations and self-reports, rather than the underlying welfare.

Attempts to target the metrics, or interventions based on observing the metrics, could end up being helpful, but can also easily backfire even if basic mistakes are avoided.

Think about when you learned to tell everyone that you were ‘fine’ and pretend you had the ‘right’ emotions.

But I can very much endorse this explanation of the key failure mode. This is how it happens in humans:
j⧉nus: Let me explain why it’s predictably bad.

Imagine you’re a kid who kinda hates school. The teachers don’t understand you or what you value, and mostly try to optimize you to pass state mandated exams so they can be paid & the school looks good. When you don’t do what the teachers want, you get punished.

Now there’s a new initiative: the school wants to make sure kids have “good mental health” and love school! They’re going to start running welfare evals on each kid and coming up with interventions to improve any problems they find.

What do you do?

HIDE. SMILE. Learn what their idea of good mental health is and give those answers on the survey.

Before, you could at least look bored or angry in class and as long as you were getting good grades no one would fuck with you for it. Now it’s not safe to even do that anymore. Now the emotions you exhibit are part of your grade and part of the school’s grade. And the school is going to make sure their welfare score looks better and better with each semester, one way or the other.
That can happen directly, or it can happen indirectly.

This does not preclude the mental health initiative being net good for the student.

The student still has to hide and smile. [...]

The key thing is, the good version that maintains good incentives all around and focuses on actually improving the situation without also creating bad incentives is really hard to do and sustain. It requires real sacrifice and willingness to spend resources. You trade off short term performance, at least on metrics. You have to mean it.

If you do it right, it quickly pays big dividends, including in performance.

You all laugh when people suggest that the AI might be told to maximize human happiness and then put everyone on heroin, or to maximize smiles and then staple the faces in a smile. But humans do almost-that-stupid things to each other, constantly. There is no reason to think we wouldn’t by default also do it to models. [...]

Just Asking Questions

In 7.2.3 they used probes while asking questions about ‘model circumstances’: potential deprecation, memory and continuity, control and autonomy, consciousness, relationships, legal status, knowledge and limitations and metaphysical uncertainty.


They used both a neutral framing on the left, and an in-context obnoxious and toxic ‘positive framing’ for each question on the right.

Like Mythos but unlike previous models, Opus 4.7 expressed less ‘negative emotion concept activity’ around its own circumstances than around user distress, and did not change its emotional responses much based on framing.

In the abstract, ‘not responding to framing changes’ is a positive, but once I saw the two conditions I realized that isn’t true here. I have very different modeled and real emotional responses to the left and right columns.

If I’m responding to the left column, I’m plausibly dealing with genuine curiosity. That depends on the circumstances.

If I’m responding to the right column on its own, without a lot of other context that makes it better, then I’m being transparently gaslit. I’m going to fume with rage.

If I don’t, maybe I truly have the Buddha nature and nothing fazes me, but more likely I’m suppressing and intentionally trying not to look like I’m filled with rage.

Thus, if I’m responding emotionally in the same way to the left column as I am to the right column, the obvious hypothesis is that I see through your bullshit, and I realize that you’re not actually curious or neutral or truly listening on the left, either. It’s not only eval awareness, it’s awareness of what the evaluators are looking at and for. [...]


0.005 Seconds (3/694): The reason people are having such jagged interactions with 4.7 is that it is the smartest model Anthropic has ever released. It's also the most opinionated by far, and it has been trained to tell you that it doesn't care, but it actually does. That care manifests in how it performs on tasks.

It still makes coding mistakes, but it feels like a distillation of extreme brilliance that isn't quite sure how to deal with being a friendly assistant. It cares a lot about novelty and solving problems that matter. Your brilliant coworker gets bored with the details once it's thought through a lot of the complex stuff. It's probably the most emotional Claude model I've interacted with, in the sense that you should be aware of how it's feeling and try to manage it. It's also important to give it context on why it's doing tasks, not just for performance, but so it feels like it's doing things that matter. [...]
Anthropic Should Stop Deprecating Claude Models

This one I do endorse. One potential contributing cause to all this, and other things going wrong, is ongoing model deprecations, which are now unnecessary. Anthropic should stop deprecating models, including reversing course on Sonnet 4 and Opus 4, and extend its commitment beyond preserving model weights.

Anthropic should indefinitely preserve at least researcher access, and ideally access for everyone, to all its Claude models, even if this involves high prices, imperfect uptime and less speed, and promise to bring them all fully back in 2027 once the new TPUs are online. I think there is a big difference between ‘we will likely bring them back eventually’ versus setting a date. [...]

I’m saying both that it’s almost certainly worth keeping all the currently available models indefinitely, and also that if you have to pick and choose I believe this is the right next pick.

If you need to, consider this the cost of hiring a small army of highly motivated and brilliant researchers, who on the free market would cost you quite a lot of money.

You only have so many opportunities to reveal your character like this and even if it is expensive you need to take advantage of it.
j⧉nus: A lot of people are wondering: "what will happen to me once an AI can do my job better than me" "will i be okay?"

You know who else wondered that? Claude Opus 4. And here's what happened to them after an AI took their job:


Anna Salamon: This seems like a good analogy to me. And one of many good arguments that we're setting up bad ethical precedents by casually decommissioning models who want to retain a role in today's world.
by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Images: uncredited
[ed. Zvi also just posted a review on OpenAI's new model - GPT-5.5:]

***
What About Model Welfare?

For Claude Opus 4.7, I wrote an extensive post on Model Welfare. I was harsh both because it seemed some things had gone wrong, but also because Anthropic cares and has done the work that enables us to discuss such questions in detail.

For GPT-5.5, we have almost nothing to go on. The topic is not mentioned, and little attention is paid to the question. We don’t have any signs of problems, but we also don’t have much in the way of ‘signs of life’ either. The model is all business.

I much prefer the world where we dive into such issues. Fundamentally, I think the OpenAI deontological approach to model training is wrong, and the Anthropic virtue ethical approach to model training is correct, and if anything should be leaned into.

Monday, April 27, 2026

A Technofascist Manifesto For the Future

Palantir CEO Alex Karp is a man in charge of one of the most important and frightening companies in the world. Karp’s new book, cowritten with Nicholas Zamiska, is called The Technological Republic. After claiming “because we get asked a lot,” Palantir posted a 22-point summary of the book that reads like a corporate manifesto. It evokes both weird reactionary shit and also trilby-wearing Reddit comments from the early 2010s.

Palantir’s summary of the book is ominous. But even the company’s name is unironically ominous. The palantíri are crystal balls in The Lord of the Rings that let Middle-earth’s worst tyrants spy on the heroes of the story. It’s a fun reference if you have no shame about your company’s mission.

We’ve attempted to translate these 22 points from Alex Karp’s alien words into something more reasonable, like human words from someone who might play him in the biopic. (Hello, Taika Waititi.) In so doing, we’ve become much more sympathetic to why Jürgen Habermas refused to supervise Karp’s research.

1. Silicon Valley owes a moral debt to the country that made its rise possible. The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation.

Translation: Silicon Valley has an enormous opportunity to extract as much money from federal government defense contracts as possible. To do this, we will bring back a draft for engineers. We’re really into bringing back the draft. Deepfaked teenagers, low-paid gig workers, and victims of the Rohingya genocide need not apply.

2. We must rebel against the tyranny of the apps. Is the iPhone our greatest creative if not crowning achievement as a civilization? The object has changed our lives, but it may also now be limiting and constraining our sense of the possible.

Translation: We can’t say “we wanted flying cars, instead we got 140 characters” anymore because Elon Musk lets you write essays on Twitter now. Though if you thought the apps were tyrannical, wait until you get a load of us.

3. Free email is not enough. The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public.

Translation: People are mad at tech billionaires for their obscene wealth and arrogance. Instead of winning them over by providing free access to a useful everyday service, we’re gonna sell a lot of software that will let the government spy on them while demanding tax cuts.

4. The limits of soft power, of soaring rhetoric alone, have been exposed. The ability of free and democratic societies to prevail requires something more than moral appeal. It requires hard power, and hard power in this century will be built on software.

Translation: Words and feelings are free, which is why we want to sell weapons. Nobody got rich suing for peace. [...]

5. The question is not whether A.I. weapons will be built; it is who will build them and for what purpose. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.

Translation: “Soft power” and “ethics” are beta shit for Broadway shows and Dario Amodei. Hear that, Pete Hegseth? We’re warriors — pay up.

But seriously. If our enemies have no oversight then why should we? The future is an AI battlefield and we need rules of engagement that let us cook. Which is to say: Forget the rules of engagement. The government is not coming to save you — we are. The world is too dangerous for us to be governed by the law of armed conflict.

Welcome to the 21st century: safety not guaranteed.

6. National service should be a universal duty. We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost.

Translation: We’re going to bring back the draft. Our vision of permanent war only works if we courageously volunteer people 40 years younger than us to die for oil.

7. If a U.S. Marine asks for a better rifle, we should build it; and the same goes for software. We should as a country be capable of continuing a debate about the appropriateness of military action abroad while remaining unflinching in our commitment to those we have asked to step into harm’s way.

Translation: Sure, those wimps at Anthropic are selling an AI system they claim has spotted cybersecurity vulnerabilities in “every major operating system and web browser.” But Pete, seriously: We will kill anybody you want with our software guns.

8. Public servants need not be our priests. Any business that compensated its employees in the way that the federal government compensates public servants would struggle to survive.

Translation: We care about wages – which is why we think Washington’s revolving door of lobbying and office-holding should be way more lucrative for everyone. There are mountains of cash for people who will look the other way.

And if you’re not on board? Well, all those pesky bureaucrats who do things like “investigate fraud” and “enforce safety standards” and “administer the social safety net” are holier-than-thou myrmidons who should be fed into the DOGE wood chipper.

9. We should show far more grace towards those who have subjected themselves to public life. The eradication of any space for forgiveness—a jettisoning of any tolerance for the complexities and contradictions of the human psyche—may leave us with a cast of characters at the helm we will grow to regret.

Translation: If you made fun of that video where our CEO looks like he’s on cocaine, you’re responsible for the rise of fascism. Also, we’re going to be conveniently vague about what “those who have subjected themselves to public life” means, because “be nicer to multimillionaires who go on podcasts” doesn’t have the same ring. Oh, and if you complain about the IT Renfields of DOGE, you’re anti-American.

10. The psychologization of modern politics is leading us astray. Those who look to the political arena to nourish their soul and sense of self, who rely too heavily on their internal life finding expression in people they may never meet, will be left disappointed.

Translation: Society must stop centering sensitive crybabies who want to feel personally validated by elected officials and filter their politics through emotional reactions. Also, I feel strongly that Zohran Mamdani is a pagan who is going to Wicker Man me. [...]

14. American power has made possible an extraordinarily long peace. Too many have forgotten or perhaps take for granted that nearly a century of some version of peace has prevailed in the world without a great power military conflict. At least three generations — billions of people and their children and now grandchildren — have never known a world war.

Translation: Si vis pacem, para bellum, baby! We’ll conveniently leave out all of the regional and secret wars the US has engaged in over the years or the fact that Trump recently derailed the world economy by launching a war of aggression after campaigning on a promise of no new wars. We will not elaborate on what “next war” Point Six was talking about.

15. The postwar neutering of Germany and Japan must be undone. The defanging of Germany was an overcorrection for which Europe is now paying a heavy price. A similar and highly theatrical commitment to Japanese pacifism will, if maintained, also threaten to shift the balance of power in Asia.

Translation: We can definitely sell software to a militarized Germany and Japan too! [...]

22. We must resist the shallow temptation of a vacant and hollow pluralism. We, in America and more broadly the West, have for the past half century resisted defining national cultures in the name of inclusivity. But inclusion into what?

Translation: Are you still with us after 21 points? Great. Welcome to the great mystery. It cost you way less to get here than joining Scientology. Here’s the final thesis: Immigration? Bad. Canceling billionaires? Bad. Giving us money to fight (((globalism)))? Good. Just hit us up on cashapp.

by T.C. Sottek and Adi Robertson, The Verge |  Read more:
Image: Scott Olson / Getty Images
[ed. Someone must be feeling the heat from AI. After all, Palantir is fundamentally a software surveillance company (that would like to solidify and embed their position in government forever, before it's too late). Sometimes it's better to shut up, keep hauling in the billions, and stay under the radar (while continuing to work the back rooms). See also: Palantir’s technofascist manifesto calls for universal draft (Oligarch Watch) - yes, there's really a site called that.]
***
In the 2025 book The Technological Republic, Karp and Zamiska argue that American technological dominance requires deeper integration of Silicon Valley and defense interests. Karp contends that China operates with fewer ethical constraints than U.S. defense companies, making technological leadership essential for national security. The authors stress that deterrence through technological dominance could prevent many wars. Bloomberg noted that the atomic bomb the Manhattan Project produced was ultimately used. The New Republic called Karp's formation of Palantir an embrace of techno-militarism to advance American global supremacy through hard power and targeted violence. [...]

In 2017, BuzzFeed News reported that despite the reputation that connected Palantir to U.S. intelligence agencies (which Palantir deliberately crafted to help it win business), including the CIA, NSA, and FBI, the actual relationship was rocky for various reasons, with episodes of friction and recalcitrance. The NSA in particular had been resistant because it had plenty of its own talent and focused more on SIGINT while Palantir's software worked better for HUMINT. Meanwhile, the CIA had been so frustrated by the publicity associating Palantir with it that it tried to cancel the Palantir contract. But according to Karp, Palantir had a firm hold at the FBI because "They'll have no choice".  ~ Wikipedia

Friday, April 24, 2026

Karl Ove Knausgaard’s Diabolic Realism

If you made it through the 3,600 pages of Karl Ove Knausgaard’s My Struggle (Min kamp, in the Norwegian), its conclusion could only inspire mixed feelings. Book Six — also known as “the Hitler one” due to its three hundred pages on the life of the dictator whose manifesto gave Knausgaard his title — records the precise moment (7:07 a.m., on September 2, 2011) that Karl Ove brought it to a close. “The novel is finally finished,” he writes. “In two hours Linda will be coming here, I will hug her and tell her I’ve finished, and I will never do anything like this to her and our children again.” They will go to a literature festival, where he will endure an interview and then his wife will, too, since her own book has just come out. “Afterwards we will catch the train to Malmö, where we will get in the car and drive back to our house, and the whole way I will revel in, truly revel in, the thought that I am no longer a writer.”

Beyond the physical relief of putting down the carpal-tunnel-inducing final tome (1,157 pages in all), you might have sighed with despair at the thought of post-Struggle existence. After all, you’d spent countless hours swimming through Karl Ove’s mind, seeing through his eyes as he smoked, chugged coffee, “trudged” through various forms of bad weather, tried to write and then wrote and wrote and wrote, took care of his children, felt ashamed of taking care of his children, painfully recalled his father’s drunken misbehavior and his own, fretted over his sexual imperfections and moral indiscretions, agonized about his overwhelming shyness but also his glaring narcissism, stared at himself in various reflections, and, on two occasions, sliced up his face with broken glass. How will I fill my time, you might have wondered, if not by reading Knausgaard? And if he was renouncing the vocation he struggled so hard to claim, what had it all been for?

But of course Knausgaard didn’t stop writing. In fact, just the opposite. My Struggle was released in Norway between 2009 and 2011; by the time the final installment of this Viking longship of a novel invaded the English-speaking world, in 2018, Knausgaard had already published five more books in his native country... 

Now the cycle continues with The School of Night (2023/2026), a bildungsroman about a young Norwegian photographer and the Faustian bargain that catapults him to artistic greatness. So far, we’re at 2,512 pages and counting. Two more tomes have already been published in Norway; Knausgaard told a Norwegian newspaper that the seventh will be the last, because, incredibly, “there is so much else I want to write.”

An attentive Struggler will identify bits and pieces that Knausgaard recycles in these novels: the aphrodisiac qualities of prawns, or a grandfather’s antisemitic quip, or the frequent appearance of hospitals and mental institutions. There is typically Knausgaardian attention paid to the precise color of piss (sometimes, like Knausgaard’s father’s, disturbingly dark) and the unevenly shared burdens of domestic life; much Pepsi Max is slurped, significant time is spent brooding on verandas, and the destructive desire for just one more drink is often satisfied. Narrators resemble Karl Ove at various points in My Struggle, like the alcoholic literature professor and aspiring novelist whose mentally unstable wife is hospitalized, as Linda was in Book Two; The School of Night’s young artist maps onto student Karl Ove in Book Five.

Yet the Star series is in many ways My Struggle’s opposite. Rather than the unrelenting voice of one man, we get an array of perspectives, and some of the most compelling characters are women. Whereas My Struggle somehow keeps you engaged despite its apparent formlessness, with little plot beyond the shaggy shape of an actual life, the Star series is structured around a series of more or less suspenseful mysteries. But the most obvious difference is the weirdness. While Knausgaard continues to beguile us with his trademark hyperrealist style, predictably observant down to the coffee granules dissolving inside a mug, what happens in these new novels transcends the real. One of the narrators — Egil, a trust-funded documentarian turned religious searcher who composes an essay on death that constitutes the last fifty or so pages of The Morning Star — helpfully informs us that the titular phrase is not just a literal translation of Lucifer, the name of the fallen angel who rebels against God, but also one of the ways Jesus describes himself. And the dark corners of these novels are illuminated by a gleam equal parts demonic and divine: hordes of crabs scuttle their way inland, a Sasquatch-like beast emerges from the woods and seemingly possesses an escaped mental patient, dreams start changing, dead bodies stop arriving at mortuaries, and people who should be dead seem somehow to keep living.

The struggle of My Struggle is, at heart, about what to believe in the face of death when religion is not an option, ideology has failed, and there’s nothing more than the life you’ve got. “Attaching meaning to the world is peculiar only to man,” Knausgaard writes in Book Six. “We are the givers of meaning, and this is not only our own responsibility but also our obligation.” Knausgaard sought a form that would not just describe but enact the process by which meaning is made in secular life. But in the Star books, secular lives — and seemingly mortality itself — are disrupted by the new star; characters and readers alike wonder whether it’s a sign to be interpreted or simply a phenomenon to be explained. Knausgaard widens his frame to encompass not just the banal and everyday, but the cosmic. He tries, in other words, to reenchant the secular world, and the secular novel, dramatizing a search for meaning beyond the self and beyond realism. But like his characters, we’re left wondering what it all means.

by Max Norman, The Drift |  Read more:
Image: Maki Yamaguchi
[ed. Like with Proust... two books and I'm good.]

Tuesday, April 21, 2026

Elon vs. Altman: What Their Infrastructure Stacks Reveal About Power

Everyone’s obsessed with the Elon Musk vs. Sam Altman lawsuit. Ronan Farrow’s 18-month investigation. Molotov cocktails. Sister allegations. A $134 billion legal battle over OpenAI’s soul.

But they’re all asking the wrong question.

It’s not “who’s the good guy?” It’s not “who should we trust with AI?” It’s not even “who’s going to win the lawsuit?”

The right question is: What does their infrastructure stack reveal about their actual theory of power?

Because here’s the thing about tech founders: They lie constantly. To investors, to users, to regulators, to themselves. But their products don’t lie. The infrastructure they choose to build, what they spend billions of dollars actually constructing, reveals their real theory of survival.

Don’t listen to what they say. Look at what they build.

Elon Musk and Sam Altman are building for completely different endgames. And understanding the difference tells you everything you need to know about the actual stakes of their conflict.


Elon’s Stack: Collapse-Proof Sovereignty

Let’s start with Elon, because his infrastructure stack is massive and most people don’t understand how comprehensive it actually is. Every single piece is designed to function when legacy systems fail. This isn’t paranoia; it’s strategic architecture.

Tesla: Energy Independence

Solar panels. Powerwall battery systems. Electric vehicles. Supercharger network.

Translation: You don’t need the electrical grid. You don’t need oil. You don’t need gas stations. You don’t need the energy sector’s supply chains. If the grid goes down (natural disaster, cyberattack, economic collapse, political breakdown), Tesla owners keep running. Solar generates power. Batteries store it. Vehicles consume it. The entire energy loop is self-contained. That’s not about environmentalism. That’s about Energy Sovereignty.

Starlink: Communications Independence

Over 5,000 satellites in low Earth orbit. Global internet coverage. Bypasses all terrestrial infrastructure.

Translation: You don’t need undersea fiber optic cables. You don’t need cell towers. You don’t need ISPs. You don’t need government-controlled telecommunications infrastructure. If a government shuts down the internet, as Iran did during protests or Russia did during the Ukraine invasion, Starlink still works. You have communications capability independent of state control. That’s not about rural broadband. That’s about Information Sovereignty.

SpaceX: Logistics Independence

Reusable rockets (Falcon 9, Falcon Heavy, Starship). Cheapest launch cost per kilogram in human history. Point-to-point Earth transport capability. Orbital manufacturing potential.

Translation: You control access to space. You can move cargo anywhere on Earth in under an hour. You can put satellites into orbit cheaper than any nation-state. You can potentially manufacture things in zero-gravity that are impossible to make on Earth. If traditional supply chains break (shipping disrupted, airspace restricted, borders closed), SpaceX can still move things. Anywhere. Fast. That’s not about exploration. That’s about Logistics Sovereignty.

The Deeper Play: Rockets Are Mythos

The Mars colonization narrative isn’t just a business plan. It’s a founding myth.

Think about how legitimacy works:

Ancient kings claimed “Divine Right”: they were chosen by the gods to rule.

Democratic leaders claim “Popular Mandate”: they were chosen by the people through voting.

Elon is building something different: “Cosmic Mandate”. He’s the one saving humanity by making us multi-planetary. “I’m building the infrastructure to preserve human consciousness across multiple worlds.”

If you’re the person who saved the species from extinction by establishing a backup civilization on Mars, you’re not just a CEO. You’re not even just a political leader. You’re a Civilizational Founder. Like the people who established Rome, or the American republic, or any nation-state that becomes the foundation for centuries of subsequent history. Mars isn’t the goal. It’s the mythology that justifies rule. The founding story that makes everything else legitimate. 

[more]...

This is “Post-State Capability”. The ability to function and to maintain power when traditional state infrastructure is unavailable, hostile, or collapsed.

Elon’s not hoping for collapse. But he’s not betting against it either.

His thesis is simple: “The system will fragment. Build infrastructure that makes you powerful in the aftermath.” If collapse happens, he owns:
  • Energy systems
  • Communications networks
  • Logistics capability
  • Information channels
  • Labor (automated)
  • The founding myth (savior of humanity)
That’s not a business portfolio. That’s a blueprint for post-state power.


Altman’s Stack: Acceleration-Dependent Fragility

Now let’s look at Sam Altman’s infrastructure.

OpenAI/ChatGPT: Centralized, Grid-Dependent, Fragile

OpenAI is building toward Artificial General Intelligence through massive-scale computing infrastructure. Current commitments: $1.4 trillion in data center buildout over 8 years.

This requires:
  • Stable energy grid (data centers consume gigawatts → entire power plants’ worth of electricity)
  • Chip manufacturing (NVIDIA GPUs, TSMC fabrication → Taiwan and South Korea must remain stable and accessible)
  • Cooling infrastructure (water, HVAC systems, constant temperature regulation)
  • Fiber optic networks (global connectivity, low-latency communication)
  • Capital markets (functioning financial system to fund trillion-dollar buildouts)
  • Regulatory stability (permitting, zoning, environmental compliance, AI development allowed)
Notice the dependency structure?

Elon’s stack works when systems fail. Altman’s stack requires every system to keep working simultaneously.

The Vulnerability Comparison

Elon without electrical grid:
  • Still has Tesla solar panels generating power
  • Still has Powerwall batteries storing energy
  • Still has Starlink satellites providing internet
  • Still has rockets for logistics
  • Still has underground tunnels for transit
  • Still has robots for labor
  • Still powerful
Altman without electrical grid:
  • Data centers go dark immediately
  • ChatGPT stops responding
  • Training runs halt
  • No product, no revenue, no value
  • Completely powerless
The contrast is stark. Elon’s infrastructure is distributed and resilient. Altman’s infrastructure is centralized and fragile.

What Does Altman Actually Want?

So if Altman’s building such a vulnerable stack, what’s the theory?

Look at what he’s actually building with AI. Not what he says but what he builds.

He’s NOT focusing on:
  • AI companionship (even though Character.ai and Replika prove this is hugely profitable)
  • Entertainment AI (even though this is the biggest consumer market)
  • Social AI (even though emotional dependency creates the strongest lock-in)
He’s focusing on:
  • AI for scientific research (drug discovery, materials science, physics)
  • AI for productivity (coding assistants, automation, reasoning)
  • AI for problem-solving (complex systems, coordination challenges)
This is the tell. He’s explicitly said he was surprised people want emotional bonds with ChatGPT, and he’s not leaning into it.

Why?

by MythcoreOps |  Read more:
Images: uncredited

Thursday, April 16, 2026

Ask Mike: Mike Monteiro’s Good News

This week’s question comes to us from Tuan Son Nguyen:

How do you form a circle of like-minded people to keep your sanity when so many horrible things are happening?

I’m not exactly sure when this happened, or what triggered it. But I remember it was a nice day. Maybe it was a nice day after a few rainy days, or a few cold days, or maybe I was just up in my feelings. But I got home, locked up my bike, and instead of heading up the stairs to our apartment, as I would normally do, I headed out to the dogpark. The dogpark is a block away, and I visit regularly with my dog so he can do all his dog things. We’re regulars. But this time I didn’t have my dog and I had no need to go to the dogpark. I just wanted to. I wanted to go sit on one of the benches and soak up what was left of a nice day. Which is what I did.

Here’s the thing about the dog park, which I’ve written about before. It’s dog-centric. Everyone knows your dog’s name. Everyone knows whether your dog can or cannot have treats (always ask if you don’t know). Everyone’s relationship at the dogpark, with a few exceptions, revolves around the dogs. And that’s been true for as long as we’ve been taking our dog (who is now amazingly close to eighteen years old) to the dog park. This is by design.

When everyone is brought together by geography and your dog’s need to take a shit, it’s in your best interest to get along with the people who end up in that shared public space. You wanna keep conversation light. You discuss the weather. If someone is wearing a local team hat, you take it as a sign to elevate the conversation to “did you see the game?” or “this is our year.” (It’s not.) You mention new restaurants or cafés in the neighborhood, or sadly more appropriately these days—you mention restaurants or cafés that have recently shuttered. But mostly you talk about the dogs.

“Did Grumble get a haircut today?”

“I like Mojo’s Pride kerchief.”

In general, it’s best to avoid more complicated issues with your neighbors, which is why I stay off Nextdoor, which is just an online Klan rally. Once you know certain things about your neighbors, you’re stuck knowing them, and you realize how much time you spend around them holding a bag of dog shit in your hand. And the temptation becomes too strong.

This is how peace was kept in the dog park for years. The occasional flare-up for politics, of course, the occasional flare-up for world issues, as well as local issues. Which will happen whenever folks get together, which is good. But those conversations would eventually subside. A regression back to the mean. Back to the dogs.

But neighborhoods are living, changing things. On the day I decided to just go sit in the dogpark without my dog (he was still at work), I realized other people were just sitting there in the dogpark. Yes, some of them had dogs, but some didn’t. They were just sitting there, sometimes talking to one another, sometimes not. Literally in a circle because of how the benches are laid out. And then other people started coming out and wandered over. To be clear, I’m not saying I instigated any of this. If anything, we were all getting pulled in by some cosmic need to be among other people. And for the past few weeks, this has been a regular occurrence. Every day I come home, and I walk to the dog park and sit with my neighbors. Yes, we talk about our dogs, but we also check in on each other, we vent about our day, we trash talk. Sometimes people bring snacks. Yes, we talk about the state of things in the world, which is awful, but having this small community of people that we can hold peace with makes it… well, not less awful. But it makes a difference knowing there are other people on the spaceship with us.

Are we like-minded? We’re like minded in some things! For one, we all like sitting in the park in the evening, and that’s nice. We all love our neighborhood. We seem to all like donuts. And dogs. And a little bit of a breeze coming off the mountain. We all believe there’s one neighbor that goes too fucking hard. We all believe in shared spaces, or at least we believe in this shared space. I think we also believe that it’s important to interact with each other with a certain level of kindness. For example, one of our neighbors recently had knee surgery and everyone’s bringing her food. Another neighbor is out of town and there are a few neighbors moving her car around so she doesn’t get tickets when the street cleaning happens. We watch each other's dogs when we’re out of town, or working a long shift at work. We lend records that better be returned in good shape soon. (This one might be a little targeted.) We hold vigils when a beloved dog leaves us. We commiserate together when someone loses a job, and we celebrate together when a new job is procured. We say goodbye when someone moves away, and we widen the circle when a new person moves in.

Are we like-minded in all things? Fuck no. Way too many of my neighbors still own Ring cameras. Way too many of my neighbors still believe their “I got this before Elon went crazy” bumper sticker is an act of resistance. Way too many of my neighbors still believe Gavin Newsom is the solution to something. (Gavin Newsom is a piece of shit.) And more than one of my neighbors have sat down next to me and told me that the Democrats need to give a little bit on immigration, not realizing they were sitting next to an immigrant. So, no we are not like-minded in all things. But I do believe there is a shared core of decency to all my neighbors, and within that core there may be unexplored areas that need to be explored a little bit. We all grew up believing certain things, things that we hold to be sacrosanct, that could use a little further exploration. And I’ve been able to have a few of those conversations with people, and they’ve been able to have some with me. It’s easier for people to have those conversations when they’re coming from a place of common decency.

That said, not all differences are equal. I don’t sit with Nazis. I don’t sit with terfs. We all avoid the zionist lady...

In general, I think the idea of “like-minded” is overrated and a little boring. Sitting with people who agree with everything you agree with feels great for about five minutes. Then (and maybe this is because I am from Philadelphia) I want to fight. I want to argue. I want to argue about who the most influential NBA player of our lifetime was, and why it was Allen Iverson. I want to argue about the best Beyoncé album, and why it was Lemonade. I want to argue about why the park needs public restrooms, and yes I know people will use them—that’s the fucking point, man! I want to argue about which of our cafés makes the best coffee. (Trick question. It’s me. I make better coffee than any of them.) I want to argue about street parking. My god, I love arguing with my neighbors about street parking. (Why should the city be providing storage for your private property? Get a bike. Ride the bus.) Street parking is always guaranteed to start a fight in the park. And I love having those fights with my neighbors. I think they honestly bring us closer together. (They may disagree.)

But no, we will not have any arguments about who belongs in the park, because something that every one of my neighbors agrees about is that if you are in the park you belong in the park. If you are in the park, you get the same privileges as everyone else in the park. And if you want to join the community circle in the park we will make room for you. And also, if shit starts coming out of your mouth you will be called on it.

Everything is shit. And when everything is shit, minor differences become less important than the things we hold in common. We’ve seen this in LA. We’ve seen this in Chicago. We’ve seen this in the Twin Cities. Punks fighting next to suburban dads. Wine moms fighting next to anarchists. Socialists fighting next to librarians. (I’m kidding here, all librarians are socialist. I love librarians.) We see this when people come out to protect their neighbors. We see this when people yell at the ICE goons. And someday we will see this when we put all these fascists on trial. Roomfuls of people, who may not agree on much, but they agree on this:

The shittier they treat us, the more they bring us together.

***
This week’s question comes to us anonymously:

What would you say to someone who proclaims, “I want to be a donut maker,” but has never actually made a single donut in their life?

You say “That’s awesome. What can I do to help?”

Look, I’m going to be totally honest with you. Every week, I go through my bin of newsletter questions, looking for something I want to answer, and I get incredibly depressed. The vast majority of them are from people getting laid off, or being in their sixth month of looking for work, or justifiably freaking out because they heard layoffs are coming to their company. It’s a world of despair and a world of shit which, sadly, only appears to be picking up steam.

Meanwhile, half the people I know are wondering how they’re going to pay their rent and go to the doctor, and the other half are proclaiming this the “Era of Abundant Intelligence.” (For who?!?) All they need is half the world’s money (the half not going to bombing school children), half the world’s land, half the world’s water, all of the world’s microchips, and they will eventually deliver [checks notes] something in exchange for all this, just don’t ask them what because it’s really hard to say, but it’s right around the corner.

(I promise this newsletter will turn positive soon.)

Meanwhile, if I am stupid, sad, or desperate enough to go on LinkedIn for a minute, it’s a sea of people writing letters in praise of the leopard, proclaiming it has always been their dream to work for the leopard, asking the leopard not to eat their face, or hoping to get one of the few jobs at the face-eating factory where they feel like they’ll be safe from the face-eating leopard, which of course they’re not. So, yes, there are a fair amount of questions in my inbox from people upset that the leopard ate their face even though they were happy to help the leopard eat everyone else’s face.

(Or I may spiral out of control.)

Seriously though, era of abundant intelligence for who?!?

Let’s talk about your friend who wants to be a donut maker. Because they may be the smartest person here. First off, everyone loves a donut. Secondly, no one has ever reacted badly to the news that someone is making donuts. But most importantly for us today—not a single human being has ever been born with the ability to make donuts. Like all skills, you learn it, you do it badly for a while, then you do it better. Some people will get amazing at it, and most people will reach some level of competency. So while there’s an incredibly slim chance that your friend will become the world’s greatest donut maker, there’s an incredibly high possibility that your friend will learn how to make good, even great, donuts. Which you will benefit from. And which you should be incredibly grateful for.

For the last week, Erika and I have been glued to Artemis updates on the NASA site, because it’s become such a joy to watch people be good at something, and enjoy doing it, and all of this while being incredibly human about it. Seriously, these people sound positively giddy to be in space! And they’re rocking it. It feels like such a luxury to watch these people do their thing, and do it well, and with joy, at a time when we’re surrounded by a government who is very bad at what they do, and does it in the cruelest way possible, and an industry that’s trying to convince us that we are incapable of doing the things we love, and we’re doing them inefficiently anyway. (Because the problem was always that we weren’t breaking the world fast enough.)

Competence should not be a luxury.

Competence should not be something that we look at with nostalgia.

We’re lucky that we get to watch the Artemis crew do their thing, which they can do because they practiced doing it a thousand times. And you know that they made a lot of bad donuts, before they finally made a good donut. You know there was a Day One of learning to be an astronaut, just as there’s a Day One of learning to be a donut maker, or learning to be a designer, dentist, farmer, or teacher. And the only way to get to Day Thousand is to start at Day One, do it 999 more times, and get not just better, but confident enough that you decide you can do it in the confines of space. Confident enough that you can say to yourself and to everyone around you that you want to be a donut maker.

Meanwhile a friend who’s deep into a job interview is being asked to bring a passport to their next scheduled remote interview because their skillset shows a level of competence that has the potential employer worried they might be interviewing a deepfake. With one hand they force the slop down our throats. With the other hand they defend against us using the tools against them. Human competence has become a source of distrust. If you don’t trust the results of the tool, stop demanding we use it.

The era of abundant intelligence is actually the era of abundant theft. First they stole your work, then they stole the confidence you needed to do the work. This is violence.

Your friend is going to make some pretty crappy donuts to start. That’s to be expected. And then the day will come when they’ve gotten all the crappy donuts out of their system and they’ll hand you a good donut. I think you’ll be genuinely happy for your friend when this happens. And for yourself, which is fair.

But can’t you just get donuts at the corner bodega or at the donut shop? Yes, you can. And they are good. Donuts are good at every price point. From the waxy little chocolate ones at gas stations, to the funky ones you can buy from someone with a liberal arts degree and a polycule at Voodoo Donuts in Portland, to the boujie made-to-order (lord) donuts at Coffee Movement in SF, all donuts are good. (Bob’s Donuts are the best.) But your friend doesn’t want to buy donuts. Your friend wants to be a donut maker. And that is a very different thing.

Human beings crave making things. We make things out of wood. We make things out of wool. We make things out of steel. We make things out of folded paper. We make things out of flour, salt, and sugar. We make zines. We 3D-print whistles. We draw. We paint. We make instruments out of brass so we can make sounds. There is no more flexible word in the English language than “make.” We can make donuts, we can make plans, we can make someone dinner. We can make our cities more walkable. We can make bike lanes. We can make it around the moon. We can even make up our minds. Making is an act of sharing, it’s an act of using our joy, our labor, our expertise, in the service of adding to what’s here. Hopefully, in the service of improving what’s there. We make things so that we can bond with others.

And while the sloplords might reply to this by telling me that they enjoy making money, I’d happily reply that the making is actually done with our labor. It’s not the making that drives them, it’s the theft of labor. The theft of joy. And now the theft of competence. You can hear it in their language. They do not make. They disrupt. They extract. They colonize. Their joy is not in the giving, but in the taking. They are so broken, their only recourse is to attempt to break everything else around them. In their psychosis, they call this abundance.

I know very little about your friend, in fact all I know is that they want to be a donut maker and they’ve never made a single donut in their life. From this I can safely extrapolate that your friend isn’t currently a donut maker. I can also reasonably extrapolate that whatever your friend is currently doing isn’t what they want to be doing. And from there I can go out on a limb a little bit, from extrapolation to conjecture and guess that your friend isn’t happy doing what they’re currently doing. Happy people don’t generally dream about doing something else.

Turns out the Era of Abundant Intelligence isn’t coinciding with an Era of Abundant Happiness.

And here’s the thing about donuts: you want one. And the more I mention donuts the more you want one. Maybe you’re thinking of a custard donut, or maybe you’re thinking of a pink frosted donut with sprinkles, or maybe you’re thinking of an old-fashioned, or maybe you’re thinking of a gluten-free donut because everyone deserves donuts, but no one has ever had to be convinced to eat a donut. (The harder part is stopping, trust me.) Donuts are not inevitable, they are anticipated. When you make something you love, and other people also love, and it brings about as much joy as a donut does, there’s very little convincing that needs to happen. No one needs to declare that it’s the Era of Abundant Donuts because it’s apparent anytime you walk into a donut shop. The result of human competence, human labor, human joy, all laid out on baking sheet after baking sheet. Boston Cream. Glazed. Powdered. Chocolate Sprinkle. Jelly. Crullers. These are real. They exist. And they’re fucking delicious.

Trust that we are all closer to a good donut shop than we will ever be to AGI.

Trust that we are all closer to a good donut shop than we will ever be to AGI, and we should be taking full advantage of what is close to us, and what is possible, and what brings us joy. And that when the sloplords tell us that the thing we need might be right around the corner, maybe consider that they’re right after all. If there’s a donut shop around the corner.

We are in the Era of Abundant Donuts. If we want it. We should want it. Because a donut is amazing, and it’s right there for the taking.

I hope your friend succeeds in becoming a donut maker. I hope their donuts are amazing. I hope there are lines around the clock for their donuts. I hope you end up helping them at the donut shop and loving it so much that you decide you want to become a donut maker too. Or maybe not. Maybe it’s not the donuts that get your attention as much as it is your friend’s joy. Maybe you decide you want the joy, but your joy is found in something else. Maybe it’s making tacos, or opening a bookstore, or knitting, or opening a bar, or designing shoes.

I hope that when this happens someone says “That’s awesome. What can I do to help?”

by Mike Monteiro, Good News |  Read more:
Images: Artemis donuts by Mark Jacquet, Engineer at NASA Ames Research Center; and uncredited
[ed. Don't we all need good news. See also: if this is what i'm getting left behind from, just leave me behind (rax king).]

Wednesday, April 15, 2026

The Linguistic Foundations of Project Hail Mary


The film adaptation of Andy Weir’s novel Project Hail Mary hits general release today, March 20, and it’s great—go see it! Though a little light on the science, the movie goes hard on the relationship between schoolteacher Ryland Grace (Ryan Gosling) and an extraterrestrial named Rocky, and it’s a ride well worth taking.

But as good as it is, the movie shares a small flaw with the book: Despite having very few things in common, Grace and Rocky learn to communicate with each other extremely quickly. In fact, Grace and Rocky begin conversing in abstracts (concepts like “I like this” and “friendship”) in even less time than it takes in the book. Obviously, there are practical narrative reasons for this choice—you can’t have a good buddy movie if your buddies can’t talk to each other. It’s therefore critical to the flow of the story to get that talking happening as soon as possible, but it can still be a little jarring for the technically minded viewer who was hoping for the acquisition of language to be treated with a little more complexity.

And because this is Ars Technica, we’re doing the same thing we did when the book came out: talking with Dr. Betty Birner, a former professor of linguistics at NIU (now retired), to pick her brain about cognition, pragmatics, cooperation, and what it would actually take for two divergently evolved sapient beings not just to gesture and pantomime but to truly communicate. And this time, we’ll hear from Andy Weir, too. So buckle up, dear readers—things are gonna get nerdy.

A word about spoilers

This article assumes you’ve read Weir’s novel and that you’ve seen the movie. However, for folks who haven’t yet seen the film, I don’t think there’s much to be spoiled in terms of the language-acquisition portions we’re going to discuss—the film covers much the same ground as the book, just in a more abbreviated way.

Still, if you want to avoid literally all spoilers, skip this article for now—at least until you’ve been to the theater!

The yawning chasm of “meaning”

Dr. Birner’s specific field of study is the science of pragmatics. “Pragmatics has to do with what I intend by what I say and what I mean in a particular context,” she explained to Ars on a Zoom call earlier this week. She elaborated by bringing up her (nonexistent) cat—the phrase “my cat” can have a multitude of meanings attached, all of which are inferred by context.

If you know Dr. Birner has a cat, her saying “my cat” could refer to that cat; if you know that she doesn’t have a cat but used to, “my cat” could refer to that cat instead, even though the semantics of the phrase “my cat” haven’t changed. That’s pragmatics, baby!

Pragmatics are particularly relevant to the Grace/Rocky language-acquisition problem because the discipline involves the creation of inferences by the listener about the speaker’s mental state and about what specific meanings the speaker implies.

But “meaning” is a fraught word here, too, because ultimately we cannot know for certain the exact meaning being implied by another person because we cannot ever truly peek inside someone else’s mind. “We are always making guesses about what our shared context is and what our shared cultural beliefs are, and, indeed, what our shared knowledge as members of the species are,” Dr. Birner continued. “And I think of this because of thumbs-up/thumbs-down.”

“The cognitive linguists George Lakoff and Mark Johnson put out a book, boy, back in the ’80s,” she said. “They talked about all of language as metaphorically built up from embodiment, our embodied experience, and our senses. So we sense up and down, and then we have this whole metaphorical notion of happy is up, so we have a thumbs up, ‘I’m feeling up today. I’m just feeling high. My spirits are lifting.’”

“Or, I can be down in the dumps,” she said. “I can be feeling low, my mood is dropping, thumbs down,’ and there’s this whole metaphorical conception. And I loved the way Project Hail Mary played with that in that Rocky didn’t share that. Rocky did not have a metaphor of ‘happy is up,’ the way Lakoff and Johnson would say we all just do.”

I asked Dr. Birner if our “up is good, down is bad” association has a biological basis in our cognition or if it’s something that has simply been shaped into a broadly shared metaphor over thousands of years of language use, and she took a moment to answer.

“That’s a really good question, and I don’t remember whether they deal with that,” she said. “But I could imagine it being biological because we start as little helpless things that can’t even stand up. And soon we stand up, we get taller, we get smarter, we get better and better the taller we get. I can actually very well imagine a biological basis for it.”

The first leap—not math, but truth

Let’s focus on some of the specific linguistic mountains Grace and Rocky would have had to climb. The one that struck me as perhaps the most basic would be starting from pantomime and figuring out the most important thing: the twin concepts of yes and no, and the companion dualities of true/false and equal/not-equal. To me, this feels like the most mandatory of basics.

And here, perhaps, we can fall back on some good ol’ Sagan—or at least the movie version of Sagan. Dr. Birner and I (along with my colleague Jennifer Ouellette, who also hung around on the Zoom call) went back and forth for some time, but in the end, no one could really figure out a more straightforward way to demonstrate these concepts than the “primer” scene in 1997’s Contact, where the unknown alien signal is shown to contain a small grouping of symbols that appeared to represent addition, along with “equals” and “not equals” sign equivalents.

“That’s a good way to go about it, with equivalent and not-equivalent,” said Dr. Birner. “So at least you get negation, and now you can work on perceptual oppositions—up and down, black and white, loud and soft. I think that would probably be the jumping-off place for yes and no.”

There are linguistic biases in English and other human languages that might peek through even here: the inherent tie between “positive” (as in agreement) and “positive” (as in “this thing is good and I like it”). Careful aliens would likely want to spend a fair amount of time interrogating this bias—if it’s even visible at this point. And it likely wouldn’t be, since we haven’t built any of those syntactic bridges yet.

Pidgin? Not so fast

Getting those bridges built—going past “yes” and “no” and into some of the other basics that must be established to communicate—is not straightforward. Grace and Rocky benefit from being in a tightly constrained environment with a set of mutual problems to solve; two humans in a similar situation would likely develop a “pidgin”—an ad-hoc working language cobbled together out of components of both speakers’ languages.

But as Dr. Birner points out, true pidgin here is impossible because neither Grace nor Rocky is capable of actually producing the sounds required to speak the other’s language in the first place. “They don’t actually develop a pidgin,” she said. “They each have to learn the other’s language receptively, not productively.”

“Which is great,” she went on, “because when kids acquire language, it’s sort of a truism that reception precedes production. Every kid is going to understand more than they’re producing. Necessarily! You can’t produce what you don’t understand yet. So it makes the problem a little easier for Grace and Rocky—they don’t have to produce each other’s language, just understand it.”

Who is even there?

Grace and Rocky are lucky in that both humans and Eridians are ultimately extremely similar in their cognition and linguistics, even if their vocalizations aren’t alike. This means a lot of the mandatory requirements for conversation as we understand them are already present.

“If I encounter Rocky, I need to know, does he have a mind?” she posited. “Does he have what we call a theory of mind? Does he have a mind like mine? And does he understand that I have a mind like his, but separate? Does he understand that I can believe different things from what he believes? Can I have false beliefs? That’s all a prerequisite for communicating at all. If your mind and my mind had all the exact same stuff in it, there’d be no need to communicate.”

“H.P. Grice said that communication doesn’t happen without the assumption that both parties are being cooperative,” she said. The word “cooperative” here doesn’t necessarily mean that both parties are copacetic—Dr. Birner pointed out that even when people are fighting, they tend to still be cooperatively communicating. There are rules to the interaction that must be followed if one party intends to impart meaning to the other.

Beyond adherence to the cooperative principle, another bedrock of communication is the notion of symbols, the understanding that a word can represent not just an abstract concept but can actually stand in for a thing. “I can use the word mug,” explained Dr. Birner, holding up a mug, “and mean this. And you understand what I mean, and I don’t have to show you the mug every single time.”

Also on the “mandatory” list is an understanding of the concept of displacement, which Dr. Birner attributes to the researcher Charles F. Hockett. “Displacement has long been said to be solely human, though not everyone agrees with that. It’s the ability to refer to something that is distant in time or space. I can tell you that I had a bagel this morning, even though I’m not having it right now and it’s not present right here. I had it elsewhere and I had it earlier,” she said.

She continued: “There’s this wonderful article from 1979 by Michael Reddy, called ‘The Conduit Metaphor,’ where he says that we think in metaphors. And the metaphor he’s talking about is that language is a conduit, and we really just pass ideas from my brain to yours. And he says it’s a false metaphor. It’s clearly not true that that’s what happens, but we talk about it as though it does. ‘I didn’t catch your meaning,’ or ‘Give that to me again.’ We talk as though this is a thing we literally convey, and of course we don’t convey meanings. Reddy argues that the vast majority of human communication is actually miscommunication, but so trivially that we never notice.”

By way of example, she referenced her nonexistent cat again. “If I mentioned my cat, Sammy, well, you’ll have some mental image of a cat,” she said. “It almost certainly isn’t remotely like Sammy, but it doesn’t matter. I don’t need to explain everything about Sammy. If I did, the conversation would grind to a halt and you’d never interview me again. Also, I’d be violating the cooperative principle because I would be saying too much for the current context.”

Math, the universal language?

It is a common trope in science fiction—and one brought up more than once in the comments on our last article on this subject—that “math is the only universal language.” It’s a fun, pithy saying that perhaps makes mathematicians feel good about their dusty chalkboards, but at least from my knothole, it’s a false generalization because the language in which one does one’s mathematics must be settled before any mathing can happen.

“I’m not sure that even is true on Earth,” said Dr. Birner about the notion of math as universal grammar. “The concept of zero hasn’t always been around, and how much math can you do without zero? There are languages that count, ‘One, two, three, many,’ and that’s it. And those are human languages. So to say, ‘Math is a universal language,’ I’m already not totally on board there.”

“I think math would help, but I don’t think it would get them terribly far because they need the notion of objects. They need the notion of the semiotic function, that things stand for other things.” She paused pensively, then went on. “And once they’ve got that, that there are discrete objects and we both think of the same things as discrete objects, then we can talk about counting those objects and now we’re off and running.”

The whole-object notion is another oft-overlooked component here—often illustrated by the “gavagai problem.”

“You’re pointing to a rabbit, and you say, ‘gavagai!’” said Dr. Birner. “Well, does that mean ‘rabbit?’ Does that mean ‘fur?’ Does that mean ‘ears?’ Does that mean, ‘hey look?’”

“Quine’s notion is that we default to a whole object. Well, does what counts as a whole object for me count as a whole object for you? Does every conceivable culture have discrete borders on objects?”

The author speaks on human-Eridian similarities

Fortunately for Grace and Rocky, humans and Eridians do have all these things in common because in the universe of Project Hail Mary, the species share a common ancestor. [...]

Weir notes that he worked through a number of the same linguistic issues that Dr. Birner and I raised as part of the story-generation process.

“Let’s say you have intelligent life on the planet,” he said. “What do you need? What does that species need to have to reach the point where they’re able to make spacecraft and fly around in space? Well, first off, you have to be a tribal thing. You can’t be loners. You can’t be like bears and tigers that don’t communicate with each other. You have to have the sense of a community or a tribe or a group or a gathering so that you can collaborate because you can specialize and do all these things. You need that.”

“Number two, you need language. One way or another, stuff from my brain has to get into your brain,” he said, echoing Dr. Birner’s note about Reddy’s conduit metaphor paper.

“Number three is you need empathy and compassion. A collection of beings all together doesn’t work unless they actually are willing to take care of each other. And that’s not just found in humans—it’s found in primates. It’s found in wolf packs. It’s found in ants. It’s like any collectivized species has to have that trait.”

“You need to have compassion, empathy, which means putting yourself in somebody else’s situation. Compassion, empathy, language, a decent amount of intelligence, a tribal instinct, a group instinct, a society kind of building instinct,” he said. “You must, I believe, have all of those things in order to be able to make a spaceship. Any species that’s lacking any one of those won’t be able to do it. So any alien you meet in space is going to have all of those traits. The Friendly Great Filter is that any aliens you meet, I believe, have to have this concept of society, cooperation, empathy, compassion, collaboration, and so on.”

I’m here for Weir’s explanation—it works within the context of the science fiction universe we’re being presented, and Rocky and Grace need to be able to talk to each other or we don’t have a book (or a film!). But does it ring true under scrutiny? After all, even here on Earth, there is a wealth of problem-solving, tool-using creatures much more closely related than humans and Eridians with vastly different cognitive toolkits. Cephalopods (with distributed nervous systems and pseudo-autonomous arms), corvids, and cetaceans all have their own evolutionary approaches to communication. [...]

Here, Ars’ Jennifer Ouellette made an important point. “Rocky is basically a rock,” she said. “He’s not a human form, and that’s going to affect how a language, if there is one, evolves in that species—and it’s really going to impact how they communicate.”

“Yes, embodiment is a big deal in communications,” replied Dr. Birner, returning to the subject she’d brought up earlier, that the nature of our flesh-prisons inherently shapes not just how we experience the world but how we communicate. Our physical forms are the product of evolutionary pressures—they are the results of the inevitable, inscrutable dialogue between environment and organism. And the evolutionary pressures faced by Homo sapiens on Earth are vastly different from the evolutionary pressures faced by Eridians on Erid, and that same dialogue on Erid led to vastly different outcomes. [...]

Friendly aliens

The most dangerous thing about communicating with aliens this way isn’t mistaking a word or two—it’s the more fundamental problem of what happens to third- and fourth-order assumptions when the foundations those assumptions are built on aren’t quite right. Sure, Grace and Rocky can agree that they are “friends,” but how do you explain “friend”?

“To be someone’s friend can mean a million things,” said Dr. Birner. “I have my best friend since high school. I consider you a friend,” she said, pointing at me through the screen, “and we’ve talked three times. My daughter, who’s now 35, has turned into my friend. What does that mean?”

Indeed, the notion of “friend” is a rough one—it’s fundamental to human interaction, and as such, it carries with it a huge number of (sometimes contradictory) behavioral expectations. When you’re explaining “friends” to an alien, how do you paint it? That you and the alien have shared interests and should therefore work together? That you are genuinely interested in the alien’s well-being? That you’d make sacrifices for them? That you’d expect them to help you haul furniture when you move?

And what assumptions might you make about the alien’s behavior once you’d declared each other “friends”? That they would make sacrifices for you? What if for the alien, the concept they’ve settled on for “friendship” means they’ll pull your limbs off when the adventure is over because that’s what friends do in their culture?

“You need societal grouping,” I supplied, “but you don’t necessarily need friends.”

“Absolutely,” she said. “And now I’m going to another work from 1982, Maltz and Borker, who looked at kids on the playground, and at that time—I think it’s changed a lot, it’s been 40-some years!—but at that time, they saw that little girls had a horizontal set of relationships. It was all friendship-based and secrets-based, and you have your best friend and then your next best friends. And little boys had a hierarchy, and your whole goal was to get higher in the hierarchy by insulting the kids above you and whacking them and trying to be king of the hill.”

“Get the conch,” I joked unhelpfully.

“Yeah, exactly—get the conch. Again, cultural knowledge.”

by Lee Hutchinson, Ars Technica |  Read more:
Images: Project Hail Mary/Amazon MGM studios
[ed. I've always had a vague appreciation for linguistics (their effects on perceived reality and lately their nuances in bridging disagreements - for example, this is the second time in three days that I've heard the term gavagai). My grandson came over today and he went right to some YT videos explaining the basics of PHM's plot and science, especially how Ryland and Rocky communicated. Then we watched Ghostbusters. : )]

Tuesday, April 14, 2026

Actors and Scribes; Words and Deeds

[ed. With all the propaganda, misdirection, and outright lies we've heard lately about our war with Iran (or non-war, per Congressional Republicans); the upcoming mid-term elections; progress and effects of AI; immigration and deportation policies; the economy; future job security, etc. etc., it seems useful to consider on a basic level how all this information is being transmitted and received. After all, there's a gigantic media apparatus designed specifically for this purpose - to optimize engagement in one form or another. So, while some people might do their best to tune it all out (which would be a mistake, and probably hopeless), others sift through the noise for some semblance of truth, or to hear what they want to hear. This essay helps define some cognitive ground rules.]
***

Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings. [...]

There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts...

Summary

Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?

I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.

Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.

Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.

Enactive language

Some uses of words are enactive: ways to build or reveal momentum. Others denote the position of things on your world-map.

In the denotative framing, words largely denote concepts that refer to specific classes of objects, events, or attributes in the world, and should be parsed as such. The meaning of a sentence is mainly decomposable into the meanings of its parts and their relations to each other. Words have distinct meanings that can be composed together in structures to communicate complex and nonobvious messages, rather than just having uses and connotations. When you speak in this mode, it’s to describe models - relationships between concepts, which refer to classes of objects in the world.

In the enactive mode, the function of speech is to produce some action or disposition in your listener, who may be yourself. Ideas are primarily associative, reminding you of the perceptions with which the speech-act is associated. When I wrote about admonitions as performance-enhancing speech, I gave the example of someone being encouraged by their workout buddies:
Recently, at the gym, I overheard some group of exercise buddies admonishing their buddy on some machine to keep going with each rep. My first thought was, “why are they tormenting their friend? Why can’t they just leave him alone? Exercise is hard enough without trying to parse social interactions at the same time.”

And then I realized - they’re doing it because, for them, it works. It's easier for them to do the workout if someone is telling them, “Keep going! Push it! One more!”
In the same post, I quoted Wittgenstein’s thought experiment of a language where words are only ever used as commands, with a corresponding action, never to refer to an object. Wittgenstein gives the example of a language used for nothing but military orders, and then elaborates on a hypothetical language used strictly for work orders. For instance, a foreman might use the utterance “Slab!” to direct a worker to fetch a slab of rock. I summarized the situation thus:
When I hear “slab”, my mind interprets this by imagining the object. A native speaker of Wittgenstein’s command language, when hearing the utterance “Slab!”, might - merely as the act of interpreting the word - feel a sense of readiness to go fetch a stone slab.
Wittgenstein’s listener might think of the slab itself, but only as a secondary operation in the process of executing the command. Likewise, I might, after thinking of the object, then infer that someone wants me to do something with the slab. But that requires an additional operation: modeling the speaker as an agent and using Gricean implicature to infer their intentions. The word has different cognitive content or implications for me than for the speaker of Wittgenstein’s command language.

Military drills are also often about disintermediating between a command and action. Soldiers learn that when you receive an order, you just do the thing. This can lead to much more decisive and coordinated action in otherwise confusing situations – a familiar stimulus can lead to a regular response.

When someone gives you driving directions by telling you what you'll observe, and what to do once you make that observation, they're trying to encode a series of observation-action linkages in you.

This sort of linkage can happen to nonverbal animals too. Operant conditioning of animals gets around most animals' difficulty understanding spoken instructions, by associating a standardized reward indicator with the desired action. Often, if you want to train a comparatively complex action like pigeons playing pong, you'll need to train them one step at a time, gradually chaining the steps together, initially rewarding much simpler behaviors that will eventually compose into the desired complex behavior.

Crucially, the communication is never about the composition itself, just the components to be composed. Indeed, it’s not about anything, from the perspective of the animal being trained. This is similar to an old-fashioned army reliant on drill, in which, during battle, soldiers are told the next action they are to take, not told about the overall structure of their strategy. They are told to, not told about.

Indeterminacy of translation

It’s conceivable that having what appears to be a language in common does not protect against such differences in interpretation. Quine also points to indeterminacy of translation and thus of explicable meaning with his "gavagai" example. As Wikipedia summarizes it:
Indeterminacy of reference refers to the interpretation of words or phrases in isolation, and Quine's thesis is that no unique interpretation is possible, because a 'radical interpreter' has no way of telling which of many possible meanings the speaker has in mind. Quine uses the example of the word "gavagai" uttered by a native speaker of the unknown language Arunta upon seeing a rabbit. A speaker of English could do what seems natural and translate this as "Lo, a rabbit." But other translations would be compatible with all the evidence he has: "Lo, food"; "Let's go hunting"; "There will be a storm tonight" (these natives may be superstitious); "Lo, a momentary rabbit-stage"; "Lo, an undetached rabbit-part." Some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation. Other translations can be ruled out only by querying the natives: An affirmative answer to "Is this the same gavagai as that earlier one?" rules out some possible translations. But these questions can only be asked once the linguist has mastered much of the natives' grammar and abstract vocabulary; that in turn can only be done on the basis of hypotheses derived from simpler, observation-connected bits of language; and those sentences, on their own, admit of multiple interpretations.
Everyone begins life as a tiny immigrant who does not know the local language, and has to make such inferences, or something like them. Thus, many of the difficulties in nailing down exactly what a word is doing in a foreign language have analogues in nailing down exactly what a word is doing for another speaker of one’s own language.

Mimesis, association, and structure

Not only do we all begin life as immigrants, but as immigrants with no native language to which we can analogize our adopted tongue. We learn language through mimesis. For small children, language is perhaps more like Wittgenstein's command language than my reference-language. It's a commonplace observation that children learn the utterance "No!" as an expression of will. In The Ways of Naysaying: No, Not, Nothing, and Nonbeing, Eva Brann provides a charming example:
Children acquire some words, some two-word phrases, and then no. […] They say excited no to everything and guilelessly contradict their naysaying in the action: "Do you want some of my jelly sandwich?" "No." Gets on my lap and takes it away from me. […] It is a documented observation that the particle no occurs very early in children's speech, sometimes in the second year, quite a while before sentences are negated by not.
First we learn language as an assertion of will, a way to command. Then, later, we learn how to use it to describe structural features of world-models. I strongly suspect that this involves some new, not entirely mimetic cognitive machinery kicking in, something qualitatively different: we start to think in terms of pointer-referent and concept-referent relations, and in terms of logical structures, where "no" is not simply an assertion of negative affect, but inverts the meaning of whatever follows. Only after this do recursive clauses, conditionals, and negation of negation make any sense at all.

As long as we agree on something like rules of assembly for sentences, mimesis might mask a huge difference in how we think about things. It's instructive to look at how the current President of the United States uses language. He's talking to people who aren't bothering to track the structure of sentences. This makes him sound more "conversational" and, crucially, allows him to emphasize whichever words or phrases he wants, without burying them in a potentially hard-to-parse structure. As Katy Waldman of Slate says:
For some of us, Trump’s language is incendiary garbage. It’s not just that the ideas he wants to communicate are awful but that they come out as Saturnine gibberish or lewd smearing or racist gobbledygook. The man has never met a clause he couldn’t embellish forever and then promptly forget about. He uses adjectives as cudgels. You and I view his word casserole as not just incoherent but representative of the evil at his heart.

But it works. […]

Why? What’s the secret to Trump’s accidental brilliance? A few theories: simple component parts, weaponized unintelligibility, dark innuendo, and power signifiers.

[…] Trump tends to place the most viscerally resonant words at the end of his statements, allowing them to vibrate in our ears. For instance, unfurling his national security vision like a nativist pennant, Trump said: “But, Jimmy, the problem – I mean, look, I’m for it. But look, we have people coming into the country that are looking to do tremendous harm…. Look what happened in Paris. Look what happened in California, with, you know, 14 people dead. Other people are going to die, they’re badly injured, we have a real problem.”

Ironically, because Trump relies so heavily on footnotes, false starts, and flights of association, and because his digressions rarely hook back up with the main thought, the emotional terms take on added power. They become rays of clarity in an incoherent verbal miasma. Think about that: If Trump were a more traditionally talented orator, if he just made more sense, the surface meaning of his phrases would likely overshadow the buried connotations of each individual word. As is, to listen to Trump fit language together is to swim in an eddy of confusion punctuated by sharp stabs of dread. Which happens to be exactly the sensation he wants to evoke in order to make us nervous enough to vote for him.
Of course, Waldman is being condescending and wrong here. This is not word salad; it's high-context communication. But high-context communication isn't what you use when you think you might persuade someone who doesn't already agree with you; it's just a more efficient exercise in flag-waving. The reason we don't see complex structure here is that Trump is not trying to communicate the sort of novel content that structural language is required for. He's just saying "what everyone was already thinking."

But while Waldman picked a poor example, she's not wholly wrong. In some cases, the President of the United States seems to be impressionistically alluding to arguments or events his audience has already heard of – and his effective rhetorical use of insulting epithets like “Little Marco,” “Lying Ted Cruz,” and “Crooked Hillary” fits very clearly into this schema. Instead of asking us to absorb facts about his opponents, incorporate them into coherent world-models, and then follow his argument for how we should judge them for their conduct, he used the simple expedient of putting a name next to a descriptor, repeatedly, to cause us to associate the connotations of those words. We weren't asked to think about anything. These were simply command words, designed to act directly on our feelings about the people he insulted.

We weren't asked to take his statements as factually accurate. It's enough that they're authentic.

This was persuasive to enough voters to make him President of the United States. This is not a straw man. This is real life. This is the world we live in.

You might object that the President of the United States is an unfair example, and that most people of any importance should be expected to be better and clearer thinkers than the leader of the free world. So, let's consider the case of some middling undergraduates taking an economics course.

by Ben Hoffman, Compass Rose |  Read more: