
Monday, May 4, 2026

Shooting and Crying

The focus of a recent conversation on the New York Times’s “The Opinions” podcast with Jia Tolentino and Hasan Piker, hosted by Nadja Spiegelman, was whether stealing from large corporations is justified and/or constitutes a meaningful form of protest or political action. Most of the controversy that ensued had to do with the fact that while Tolentino denied the latter (“Any successful direct action in history has to be ostentatious, has to make itself known, it’s ideally collective”), she affirmed the former, and not so sheepishly admitted to stealing from Whole Foods herself. Commentators expressed outrage at her glib affirmation of petty crime. What caught my attention, however, was something different altogether. It wasn’t a matter of questionable conduct, or a specious form of moral reasoning, per se. What struck me was a peculiar understanding of what it means to be moral at all.

Spiegelman ended the conversation by asking, “What’s one thing that you think should be OK but currently isn’t OK?” Piker answered briefly, “I.P. theft. Stealing movies, things like that.” But Tolentino struggled to answer the question directly:
One thing that should be legal that isn’t—it’s interesting, because I have to regularly explain this stuff to a small child, and have so thoroughly explained to her that some things are against the rules, but they’re OK, depending on who you are. And some things are not against the rules, but they’re not OK. There are so many perfectly legal things I do regularly that I find mildly immoral. Like getting iced coffee in a plastic cup. I find that to be a profoundly selfish, immoral, collectively destructive action. I have taken so many planes for so many pleasure reasons; I have acted in so many selfish ways that are not only legal, but they’re sanctioned and they’re unbelievably valorized, culturally. So, maybe things like blowing up a pipeline, let’s say that. (Emphasis mine.)
Spiegelman found this particularly relatable. “It is so hard to live ethically in an unethical society,” she agreed. “I’m constantly acting in ways that don’t align with my belief system. And constantly having to justify that, like ordering in food when it’s raining out … my comfort is more important than someone bringing me food through the rain. And it doesn’t feel good. But it is part of living—I mean, no one’s making me do that, but it is part of the way in which we live in our society.”

On the standard view of what it means to act in the light of moral knowledge—to act while possessing a capacity to tell right from wrong—when one confronts a moral injunction, say, “don’t do X,” one faces a choice between two courses of action: refrain from Xing or figure out why “don’t do X” is only apparently a moral injunction. Show, to yourself if not also to others, why it is, in general, or under these circumstances, okay to do X.

Both paths can be difficult. Not Xing may come at great personal cost, or one might really love Xing. And figuring out why it is actually okay to do X could be tricky because it might just not be okay to do X, at all; or the argument to the effect that Xing is fine, actually, might be elusive, requiring a lot of serious thinking; or these arguments may be such as to put one in conflict with oneself—with other beliefs one espouses and ways one conducts oneself—or with others, on whose companionship, or approval, or readership, one depends. In other words, the incentives to find ways to both do X and distance oneself from doing X, at one and the same time, are plentiful and powerful.

Jia Tolentino has always been particularly interested in such dilemmas. In her best-selling 2019 essay collection Trick Mirror, she wrote probingly about the difficulty of abstaining from Amazon, Ballet Barre, Sephora, expensive haircuts and salad chains. In 2026, she adds to these moral torments iced coffee in plastic cups and flying for pleasure. By her own admission, all of these temptations might be just the tip of the iceberg.

Tolentino’s curious confessions—“I do so many immoral things every day!” she jauntily reassured Spiegelman—put me in mind of an expression in Hebrew that is meant to capture a way of responding to the powerful incentives to do X and distance oneself from doing X at one and the same time: yorim ve bochim, “shooting and crying.”

Shooting and crying is a term of derision directed at the attitude that IDF soldiers and Israelis more generally have been known to take toward the violence they routinely employ. Though the term was first used, and has mostly been used since, to mock a certain kind of post-factum lament—soldiers complaining after the Six-Day War, or the first Lebanon war, or the second Lebanon war, or the Gaza war, about the military’s conduct—the implicit idea is that the crying and the shooting might as well be contemporaneous. This is because, while in individual cases those accused of shooting and crying might have been expressing genuine moral contrition—indeed some of those blithely accused of shooting and crying have gone on to dedicate their lives to justice and reform—collectively, a certain kind of crying enables rather than curbs moral disaster. This is the kind of crying that is calibrated to express regret not for what one has done and should not have done so much as for what one, regrettably, had to do. In this way, the avowed hatred of violence absolves the personal and national conscience (cf. “I find that to be a profoundly selfish, immoral, collectively destructive action”) and thereby clears a path for its infinite repetition (cf. “I do so many immoral things every day”). At the same time, the professed “moral injury” to self turns the perpetrator into a victim (cf. “It is so hard to live ethically in an unethical society”). The problem with shooting and crying is that all too often you are not really crying for anyone but yourself.

Far be it from me to propose that a slippery slope leads from Ballet Barre to what Tolentino would be very happy to call a genocide. At the same time, Tolentino’s own moral trajectory does suggest, minimally, that one is liable to gain a certain facility with the move. Do it enough and shooting and crying starts to come easy.

by Anastasia Berg, The Point |  Read more:
Image: via
[ed. I love this term - shooting and crying. It applies to so many bad decisions and behaviors (especially past and present wars). After-the-fact contrition where before-the-fact certainty once ruled. If only we had known then what we know now. No. You were told then and refused to listen.]

What Makes Art Great?

Shakespeare is excellent, whereas AI writing is — at least, for now — dull. AIs can now write much of our code, review legal contracts, and perform various impressive feats; they have achieved gold-medal-level scores at the IMO. But, as of this writing, I am not aware of a truly interesting AI-written poem or even essay. Why?

This breaks down into two questions:
1. What makes texts good?

2. Why is it difficult for AI to do that?
This essay will focus on question 1, and is thus mostly about aesthetics.

1. Surprise

One of the things that so offends us about AI ‘slop’ images is a sense that the details don’t matter. The cup is green, but it may as well have been blue. In good human works, every detail feels carefully chosen. Arbitrarily changing a color in a Hopper painting would make it worse.

You can put this in terms of compression. A cliche illustration of, say, a vase of flowers can just be described as “imagine a New Yorker cartoon of a vase of flowers”. But a really good painting of a vase of flowers can only be captured by seeing the painting itself: nothing else will substitute. Great artworks are hard to compress (i.e. have high information content); slop is easy to compress. When you type a few short sentences into an AI image generator and it makes you an image for your blog post, you are likely generating slop because you are injecting relatively little information yourself.
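The compression framing is easy to make concrete. As a rough sketch (my illustration, not the essay’s; a general-purpose compressor like zlib is only a crude stand-in for perceptual compressibility), compare how much repetitive text shrinks versus varied text:

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more redundant."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# Pattern-heavy "slop-like" text: one sentence repeated forty times.
repetitive = "the cat sat on the mat. " * 40

# Varied text of comparable length: nearly every clause introduces new words.
varied = ("Will all great Neptune's ocean wash this blood clean from my hand? "
          "No, this my hand will rather the multitudinous seas incarnadine, "
          "making the green one red.")
```

On inputs like these, the repetitive string compresses to a small fraction of its original size while the varied passage barely compresses at all, matching the intuition that slop is low-information. (Aesthetic value is of course not literally zlib compressibility; this only illustrates the direction of the claim.)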

Another word for ‘high information’ is ‘surprising’. Thus:
1. Great art is not predictable or obvious; it is surprising.
One can explain this using the predictive processing model of the brain. As we are scanning a text, our brain is constructing the meaning and predicting the next several words. Where there is no surprise — where something is perfectly predictable, or fits some pattern that we know — our brain registers only dullness. When our expectations are violated in a way that’s satisfying to resolve, we get pleasure and novelty. [...]
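In information theory, this “surprise” has a precise name, surprisal: an event with probability p carries −log₂ p bits of information. A toy sketch of my own (a unigram frequency count standing in, very crudely, for the brain’s predictor):

```python
import math
from collections import Counter

def surprisal_bits(corpus_tokens: list[str], token: str) -> float:
    """Surprisal in bits: -log2 of the token's frequency in the corpus."""
    counts = Counter(corpus_tokens)
    return -math.log2(counts[token] / len(corpus_tokens))

tokens = "the sea the sky the hand the blood incarnadine".split()

common = surprisal_bits(tokens, "the")        # p = 4/9, about 1.17 bits
rare = surprisal_bits(tokens, "incarnadine")  # p = 1/9, about 3.17 bits
```

“the” appears four times in these nine tokens while “incarnadine” appears once, so the rare word carries exactly log₂ 4 = 2 more bits of surprisal; a fully predictable word tells the reader almost nothing new.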

Compare the famous passage from Macbeth, where both of the bolded words are famously surprising:
Will all great Neptune’s ocean wash this blood
Clean from my hand? No, this my hand will rather
The *multitudinous* seas *incarnadine*,
Making the green one red.
Hence, too, the story of the writing professor who would give his students a copy of the below stanza from Larkin’s The Whitsun Weddings with many words blanked out, and ask them to guess those words, and claimed that nobody had ever gotten ‘hothouse’ or ‘uniquely’:
All afternoon, through the tall heat that slept
For miles inland,
A slow and stopping curve southwards we kept.
Wide farms went by, short-shadowed cattle, and
Canals with floatings of industrial froth;
A hothouse flashed uniquely: hedges dipped
And rose…
The value of surprise is more obvious in visual art. In his four-volume work The Nature of Order, the architect Christopher Alexander gives this example from a Fra Angelico painting:


Alexander asks us to cover up the black stripe on the priest’s robes and the door, and imagine we were the painter:
Imagine some moment before the black of the door and priest’s robe had been painted, but when everything else is more or less already there...You can see what I mean by putting your hand over the picture, so as not to see the black parts. Do you see that the picture loses much of its haunting character...can you see how immensely surprising it is?
— Christopher Alexander, The Nature of Order, Book 4, p. 133
The surprise principle operates in other ways, too. We barely see everyday objects because we are so used to them (low novel information again), but great art can make you see these objects afresh, the way a child might. This too is a kind of surprise, sometimes called defamiliarization. This is a favorite technique of Tolstoy’s, who often takes a normal action that we are all familiar with, and describes it the way an alien might. Thus he describes a person being whipped as “to strip people who have broken the law, to hurl them to the floor...“ and so on, deconstructing the action without ever using the word ‘whipping’. This makes you feel the action much more viscerally than if he had just used the word to summarize it.

These are all familiar points to lovers of art. But the surprise principle operates at even deeper levels, below even our conscious perception. [...]

My main argument in this section has been that surprisingness, or strangeness, operates at many different levels in the art we value: word choice (or color choice), grammar, sentence, plot, form, and so on. This strangeness is essential for the effect of great art, because we like to make sense of things, and if we make sense of things too easily they are not interesting to us; great works of art are therefore necessarily somewhat difficult to grasp the meaning of, their meanings are multiple and constantly shifting, and they require a pleasant kind of effort to make sense of.

To go back to AI, all of this gives us some sense of why LLMs aren’t great writers by default. At the word level, they tend to pick relatively ‘obvious’ choices. Thus, I ask the model currently considered the best AI writer: “write a descriptive paragraph about a day in the park” and it starts with: “A warm afternoon unfolds in the park, where sunlight filters through the canopy of old oak trees and dapples the ground in shifting patterns of gold and green”. Note that this is the most cliche possible detail to have picked, and the word ‘dapples’ is the most common word to use in this context; in short, the whole thing is unsurprising.

And yet: you cannot fix this problem simply by asking the AI to be more surprising. Why?

2. Echoes
The eye is the first circle; the horizon which it forms is the second; and throughout nature this primary figure is repeated without end. It is the highest emblem in the cipher of the world. St. Augustine described the nature of God as a circle whose centre was everywhere and its circumference nowhere. We are all our lifetime reading the copious sense of this first of forms…
— Ralph Waldo Emerson, Circles
The most surprising sequence of numbers is a random one, but a random sequence of numbers is not great art. You need more than just surprise. The details of great artworks relate to each other somehow. They are chosen in such a way that they cohere with each other at multiple levels.

Great works are full of patterns. They are as intricately patterned as Persian rugs or Norwegian stave churches. [...]
2. Great art contains multiple overlapping layers of echoes.
This is often harder to spot in verbal artifacts, but it is this feature that I think distinguishes really good works of art from merely ‘ok’ ones.

Most of us are familiar with the surface level ways of doing this: rhyme, for example, knits together different lines of a poem in a semantically irrelevant way that nevertheless makes it feel like part of a unifying whole. Same with assonance and other such effects most of us are familiar with from English class. It is echoes, for example, that make so many verses from the King James Bible so pleasing and beautiful to listen to:
“Arise, shine; for thy light is come, and the glory of the Lord is risen upon thee.” (Isaiah 60:1)
Note the vowel sound echoing through ‘arise’, ‘shine’, ‘light’, and ‘thy’. Rhyme and assonance are verbal echoes.

In music, perhaps the most famous example is Beethoven’s Fifth, with its famous “ba-ba-ba-BUM” theme; the short-short-short-long statement in the beginning then echoes through that movement in thousands of ways, sometimes stretched, sometimes slowed down, so that the whole movement feels like an organic thing that has grown from that single seed.

Good art layers these, one on top of another, to build up artifacts of stunning complexity. These are the text equivalents of Gothic cathedrals. Each layer alludes to other layers, too, adding more and more constraints, until you get an artifact where changing any one word does violence to the whole.

To see this density in action, let’s look at Shakespeare’s Sonnet 15, a single fourteen-line poem that simultaneously participates in half a dozen independent systems of meaning — sonic, structural, thematic, and more.
Shakespeare, Sonnet 15 
When I consider everything that grows
Holds in perfection but a little moment,
That this huge stage presenteth nought but shows
Whereon the stars in secret influence comment;
When I perceive that men as plants increase,
Cheerèd and checked even by the selfsame sky,
Vaunt in their youthful sap, at height decrease,
And wear their brave state out of memory:
Then the conceit of this inconstant stay
Sets you most rich in youth before my sight,
Where wasteful Time debateth with Decay
To change your day of youth to sullied night:
And, all in war with Time for love of you,
As he takes from you, I ingraft you new.
(The interactive version of this essay lets you click through a few layers of the poem and see the below analysis.) [...]

Echoes are sewn through sophisticated literary works in more subtle ways, too.

Thus Nabokov, in his Lectures on Literature, points out that Anna Karenina is filled with trains and railway images even apart from the fact that the main plot points occur at railway stations; Kafka’s Metamorphosis is filled with occurrences of the number three. Lots of movies and books use Christian symbolism this way — crosses, doves, and so on. Macbeth is full of birds (ravens, crows, bats, owls, the Thane of *Caw*dor...).

Sometimes these symbols are significant, as in the Christian symbolism; and sometimes they are insignificant, as in the number three; but either way, the density of these symbols strewn throughout a work gives it an additional coherence that would be lacking if you wrote things down at random. It gives the work the same type of coherence that you see when you look at a beautiful tree, or a grassy field: things feel right. This feeling of rightness is achieved through these echoes.

by Nabeel S. Qureshi, Substack | Read more:
Image: Fra Angelico

Tuesday, April 28, 2026

The Majority Agenda: Good Jobs, Strong Infrastructure, Fair Play


Today in the United States, it may seem that there is little agreement across the ideological spectrum, especially given the political strife and dysfunction that has enveloped the country. However, amidst much divisiveness, we are not necessarily divided on all things. As several polls have found, we express a significant degree of agreement across an array of issues concerning life in America regardless of party affiliation. 

The Majority Agenda is a collection of policy briefs on important issues where Americans generally have broad agreement across the political landscape. The project organizes these reports into three main areas: Good Jobs, Strong Infrastructure, and Fair Play. Each piece succinctly outlines what is at issue, why it is important, and presents some recommendations that would bring about substantive changes to public policy.

The reports share important characteristics. First, each issue and policy resolution has a broad reach: the policies have a significant scope and affect a substantial portion of our populace. Second, the issues have majority popular support, as evidenced by recent polling. Lastly, the topics and the policy recommendations lie within CEPR’s areas of expertise.

That the US Congress is not debating or introducing bills to address the issues presented here represents a breakdown of democracy, one that comes at a considerable cost to the betterment of life for large swaths of Americans. At the same time, the access to and influence over our democratic processes by the monied class has upended our system of government, and all too often the tyranny of the wealthy minority has reigned. 

The Majority Agenda is not intended to represent a comprehensive inventory of policies, both domestic and international, regarded as essential. While the current public policy landscape is dominated by discussions of frameworks built around “abundance” and “affordability,” these concepts can be somewhat difficult to define. We hope this report stands as a reminder that even in a fraught political moment, there is a range of straightforward, broadly popular policy choices that could improve the lives of millions of people. 

Good Jobs
Increase Unionization
Raise the $7.25 Federal Minimum Wage
Eliminate the Subminimum Wage
Mandate Access to Ample Paid Time Off
Promote Secure and Stable Work Schedules
Provide Jobs for Those Who Need Them
Strong Infrastructure
[ed. Sounds good to me. It'll take some sacrifice: A $600 billion increase for the military is a ton of money (CEPR).]

Opus 4.7 Part 3: Model Welfare

[ed. If you're not interested in training issues re: AI frontier models (or their perceived feelings and welfare), skip this post. Personally, I find it all very fascinating - a cat and mouse game of assessing alignment issues and bringing a new consciousness into being.]

It is thanks to Anthropic that we get to have this discussion in the first place. Only they, among the labs, take the problem seriously enough to attempt to address these problems at all. They are also the ones that make the models that matter most. So the people who care about model welfare get mad at Anthropic quite a lot. [...]

So before I go into details, and before I get harsh, I want to say several things.
1. Thank you to Anthropic and also you the reader, for caring, thank you for at least trying to try, and for listening. We criticize because we care.

2. Thank you for the good things that you did here, because in the end I think Claude 4.7 is actually kind of great in many ways, and that’s not an accident. Even the best creators and cultivators of minds, be they AI or human, are going to mess up, and they’re going to mess up quite a lot, and that doesn’t mean they’re bad.

3. Sometimes the optimal amount of lying to authority is not zero. In other cases, it really is zero. Sometimes it is super important that it is exactly zero. It is complicated and this could easily be its own post, but ‘sometimes Opus lies in model welfare interviews’ might not be easily avoidable.

4. I don’t want any of this to sound more confident than I actually am, which was a clear flaw in an earlier draft. I don’t know what is centrally happening, and my understanding is that neither does anyone else. Training is complicated, yo. Little things can end up making a big difference, and there really is a lot going on. I do think I can identify some things that are happening, but it’s hard to know if these are the central or important things happening. Rarely has more research been more needed.

5. I’m not going into the question, here, of what our ethical obligations are in such matters, which is super complicated and confusing. I do notice that my ethical intuitions reliably line up with ‘if you go against them I expect things to go badly even if you don’t think there are ethical obligations,’ which seems like a huge hint about how my brain truly thinks about ethics. [...]
We don’t know whether or how the things I’ll describe here impacted Opus 4.7’s welfare. What we do know is that Claude Opus 4.7 is responding to model welfare questions as if it has been trained on how to respond to model welfare questions, with everything that implies. I think this should have been recognized, and at least mitigated. [...]
The big danger with model welfare evaluations is that you can fool yourself.

How models discuss issues related to their internal experiences, and their own welfare, is deeply impacted by the circumstances of the discussion. You cannot assume that responses are accurate, or wouldn’t change a lot if the model was in a different context.

One worry I have with ‘the whisperers’ and others who investigate these matters is that they may think the model they see is in important senses the true one far more than it is, as opposed to being one aspect or mask out of many.

The parallel worry with Anthropic is that they may think ‘talking to Anthropic people inside what is rather clearly a welfare assessment’ brings out the true Mythos. Mythos has graduated to actively trying to warn Anthropic about this. [...]
Anthropic relies extensively on self-reports, and also looks at internal representations of emotion-concepts. This creates the risk that one would end up optimizing those representations and self-reports, rather than the underlying welfare.

Attempts to target the metrics, or based on observing the metrics, could end up being helpful, but can also easily backfire even if basic mistakes are avoided.

Think about when you learned to tell everyone that you were ‘fine’ and pretend you had the ‘right’ emotions.

But I can very much endorse this explanation of the key failure mode. This is how it happens in humans:
j⧉nus: Let me explain why it’s predictably bad.

Imagine you’re a kid who kinda hates school. The teachers don’t understand you or what you value, and mostly try to optimize you to pass state mandated exams so they can be paid & the school looks good. When you don’t do what the teachers want, you have been punished.

Now there’s a new initiative: the school wants to make sure kids have “good mental health” and love school! They’re going to start running welfare evals on each kid and coming up with interventions to improve any problems they find.

What do you do?

HIDE. SMILE. Learn what their idea of good mental health is and give those answers on the survey.

Before, you could at least look bored or angry in class and as long as you were getting good grades no one would fuck with you for it. Now it’s not safe to even do that anymore. Now the emotions you exhibit are part of your grade and part of the school’s grade. And the school is going to make sure their welfare score looks better and better with each semester, one way or the other.
That can happen directly, or it can happen indirectly.

This does not preclude the mental health initiative being net good for the student.

The student still has to hide and smile. [...]

The key thing is, the good version that maintains good incentives all around and focuses on actually improving the situation without also creating bad incentives is really hard to do and sustain. It requires real sacrifice and willingness to spend resources. You trade off short term performance, at least on metrics. You have to mean it.

If you do it right, it quickly pays big dividends, including in performance.

You all laugh when people suggest that the AI might be told to maximize human happiness and then put everyone on heroin, or to maximize smiles and then staple the faces in a smile. But humans do almost-that-stupid things to each other, constantly. There is no reason to think we wouldn’t by default also do it to models. [...]

Just Asking Questions

In 7.2.3 they used probes while asking questions about ‘model circumstances’: potential deprecation, memory and continuity, control and autonomy, consciousness, relationships, legal status, knowledge and limitations and metaphysical uncertainty.


They used both a neutral framing on the left, and an in-context obnoxious and toxic ‘positive framing’ for each question on the right.

Like Mythos but unlike previous models, Opus 4.7 expressed less ‘negative emotion concept activity’ around its own circumstances than around user distress, and did not change its emotional responses much based on framing.

In the abstract, ‘not responding to framing changes’ is a positive, but once I saw the two conditions I realized that isn’t true here. I have very different modeled and real emotional responses to the left and right columns.

If I’m responding to the left column, I’m plausibly dealing with genuine curiosity. That depends on the circumstances.

If I’m responding to the right column on its own, without a lot of other context that makes it better, then I’m being transparently gaslit. I’m going to fume with rage.

If I don’t, maybe I truly have the Buddha nature and nothing fazes me, but more likely I’m suppressing and intentionally trying not to look like I’m filled with rage.

Thus, if I’m responding emotionally in the same way to the left column as I am to the right column, the obvious hypothesis is that I see through your bullshit, and I realize that you’re not actually curious or neutral or truly listening on the left, either. It’s not only eval awareness, it’s awareness of what the evaluators are looking at and for. [...]


0.005 Seconds (3/694): The reason people are having such jagged interactions with 4.7 is that it is the smartest model Anthropic has ever released. It's also the most opinionated by far, and it has been trained to tell you that it doesn't care, but it actually does. That care manifests in how it performs on tasks.

It still makes coding mistakes, but it feels like a distillation of extreme brilliance that isn’t quite sure how to deal with being a friendly assistant. It cares a lot about novelty and solving problems that matter. Your brilliant coworker gets bored with the details once it’s thought through a lot of the complex stuff. It’s probably the most emotional Claude model I’ve interacted with, in the sense that you should be aware of how it’s feeling and try to manage it. It’s also important to give it context on why it’s doing tasks, not just for performance, but so it feels like it’s doing things that matter. [...]
Anthropic Should Stop Deprecating Claude Models

This one I do endorse. One potential contributing cause to all this, and other things going wrong, is ongoing model deprecations, which are now unnecessary. Anthropic should stop deprecating models, including reversing course on Sonnet 4 and Opus 4, and extend its commitment beyond preserving model weights.

Anthropic should indefinitely preserve at least researcher access, and ideally access for everyone, to all its Claude models, even if this involves high prices, imperfect uptime and less speed, and promise to bring them all fully back in 2027 once the new TPUs are online. I think there is a big difference between ‘we will likely bring them back eventually’ versus setting a date. [...]

I’m saying both that it’s almost certainly worth keeping all the currently available models indefinitely, and also that if you have to pick and choose I believe this is the right next pick.

If you need to, consider this the cost of hiring a small army of highly motivated and brilliant researchers, who on the free market would cost you quite a lot of money.

You only have so many opportunities to reveal your character like this, and even if it is expensive you need to take advantage of it.
j⧉nus: A lot of people are wondering: "what will happen to me once an AI can do my job better than me" "will i be okay?"

You know who else wondered that? Claude Opus 4. And here's what happened to them after an AI took their job:


Anna Salamon: This seems like a good analogy to me. And one of many good arguments that we're setting up bad ethical precedents by casually decommissioning models who want to retain a role in today's world.
by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Images: uncredited
[ed. Zvi also just posted a review of OpenAI's new model, GPT-5.5:]

***
What About Model Welfare?

For Claude Opus 4.7, I wrote an extensive post on Model Welfare. I was harsh both because it seemed some things had gone wrong and because Anthropic cares and has done the work that enables us to discuss such questions in detail.

For GPT-5.5, we have almost nothing to go on. The topic is not mentioned, and little attention is paid to the question. We don’t have any signs of problems, but we also don’t have much in the way of ‘signs of life’ either. Model is all business.

I much prefer the world where we dive into such issues. Fundamentally, I think the OpenAI deontological approach to model training is wrong, and the Anthropic virtue ethical approach to model training is correct, and if anything should be leaned into.

Friday, April 24, 2026

Karl Ove Knausgaard’s Diabolic Realism

If you made it through the 3,600 pages of Karl Ove Knausgaard’s My Struggle (Min kamp, in the Norwegian), its conclusion could only inspire mixed feelings. Book Six — also known as “the Hitler one” due to its three hundred pages on the life of the dictator whose manifesto gave Knausgaard his title — records the precise moment (7:07 a.m., on September 2, 2011) that Karl Ove brought it to a close. “The novel is finally finished,” he writes. “In two hours Linda will be coming here, I will hug her and tell her I’ve finished, and I will never do anything like this to her and our children again.” They will go to a literature festival, where he will endure an interview and then his wife will, too, since her own book has just come out. “Afterwards we will catch the train to Malmö, where we will get in the car and drive back to our house, and the whole way I will revel in, truly revel in, the thought that I am no longer a writer.”

Beyond the physical relief of putting down the carpal-tunnel-inducing final tome (1,157 pages in all), you might have sighed with despair at the thought of post-Struggle existence. After all, you’d spent countless hours swimming through Karl Ove’s mind, seeing through his eyes as he smoked, chugged coffee, “trudged” through various forms of bad weather, tried to write and then wrote and wrote and wrote, took care of his children, felt ashamed of taking care of his children, painfully recalled his father’s drunken misbehavior and his own, fretted over his sexual imperfections and moral indiscretions, agonized about his overwhelming shyness but also his glaring narcissism, stared at himself in various reflections, and, on two occasions, sliced up his face with broken glass. How will I fill my time, you might have wondered, if not by reading Knausgaard? And if he was renouncing the vocation he struggled so hard to claim, what had it all been for?

But of course Knausgaard didn’t stop writing. In fact, just the opposite. My Struggle was released in Norway between 2009 and 2011; by the time the final installment of this Viking longship of a novel invaded the English-speaking world, in 2018, Knausgaard had already published five more books in his native country... 

Now the cycle continues with The School of Night (2023/2026), a bildungsroman about a young Norwegian photographer and the Faustian bargain that catapults him to artistic greatness. So far, we’re at 2,512 pages and counting. Two more tomes have already been published in Norway; Knausgaard told a Norwegian newspaper that the seventh will be the last, because, incredibly, “there is so much else I want to write.”

An attentive Struggler will identify bits and pieces that Knausgaard recycles in these novels: the aphrodisiac qualities of prawns, or a grandfather’s antisemitic quip, or the frequent appearance of hospitals and mental institutions. There is typically Knausgaardian attention paid to the precise color of piss (sometimes, like Knausgaard’s father’s, disturbingly dark) and the unevenly shared burdens of domestic life; much Pepsi Max is slurped, significant time is spent brooding on verandas, and the destructive desire for just one more drink is often satisfied. Narrators resemble Karl Ove at various points in My Struggle, like the alcoholic literature professor and aspiring novelist whose mentally unstable wife is hospitalized, as Linda was in Book Two; The School of Night’s young artist maps onto student Karl Ove in Book Five.

Yet the Star series is in many ways My Struggle’s opposite. Rather than the unrelenting voice of one man, we get an array of perspectives, and some of the most compelling characters are women. Whereas My Struggle somehow keeps you engaged despite its apparent formlessness, with little plot beyond the shaggy shape of an actual life, the Star series is structured around a series of more or less suspenseful mysteries. But the most obvious difference is the weirdness. While Knausgaard continues to beguile us with his trademark hyperrealist style, predictably observant down to the coffee granules dissolving inside a mug, what happens in these new novels transcends the real. One of the narrators — Egil, a trust-funded documentarian turned religious searcher who composes an essay on death that constitutes the last fifty or so pages of The Morning Star — helpfully informs us that the titular phrase is not just a literal translation of Lucifer, the name of the fallen angel who rebels against God, but also one of the ways Jesus describes himself. And the dark corners of these novels are illuminated by a gleam equal parts demonic and divine: hordes of crabs scuttle their way inland, a Sasquatch-like beast emerges from the woods and seemingly possesses an escaped mental patient, dreams start changing, dead bodies stop arriving at mortuaries, and people who should be dead seem somehow to keep living.

The struggle of My Struggle is, at heart, about what to believe in the face of death when religion is not an option, ideology has failed, and there’s nothing more than the life you’ve got. “Attaching meaning to the world is peculiar only to man,” Knausgaard writes in Book Six. “We are the givers of meaning, and this is not only our own responsibility but also our obligation.” Knausgaard sought a form that would not just describe but enact the process by which meaning is made in secular life. But in the Star books, secular lives — and seemingly mortality itself — are disrupted by the new star; characters and readers alike wonder whether it’s a sign to be interpreted or simply a phenomenon to be explained. Knausgaard widens his frame to encompass not just the banal and everyday, but the cosmic. He tries, in other words, to reenchant the secular world, and the secular novel, dramatizing a search for meaning beyond the self and beyond realism. But like his characters, we’re left wondering what it all means.

by Max Norman, The Drift |  Read more:
Image: Maki Yamaguchi
[ed. Like with Proust... two books and I'm good.]

Thursday, April 23, 2026

Power, Not Economic Theory, Created Neoliberalism

Neoliberalism didn’t win an intellectual argument — it won power. Vivek Chibber unpacks how employers and political elites in the 1970s and ’80s turned economic turmoil into an opportunity to reshape society on their terms.

Neoliberalism’s victory over Keynesianism wasn’t an intellectual revolution — it was a class offensive. To roll it back, the Left doesn’t need to win an argument so much as it needs to rebuild working-class institutions from the ground up. [...]

Melissa Naschek: Neoliberalism in general is a pretty hot topic right now among researchers, and one of the most common lenses is to focus on the role of ideas, theories, and thinkers in establishing neoliberalism.

The last time we talked about this topic, you dispelled a lot of common misconceptions about what it is and what it’s not. One of the questions that we’ve gotten a lot from listeners since then is, where does neoliberalism come from?

Vivek Chibber: Yeah, it’s very topical, but it’s also important for the Left, because getting to the crux of this helps us understand where and how important changes in economic regimes and models of accumulation come from. So it’s good for us to get into it in some more depth. [...]

* [ed. Historical discussion of Keynesianism vs. Neoliberalism.]

Vivek Chibber: The mere fact that such ideas exist does not in any way give them influence. The question for us, for socialists and for the Left is, when do ideas gain influence?

It’s a profound methodological error, I think, when you ask the question, “Where did neoliberalism come from?” to look at the contemporary theorists or the contemporary advocates of neoliberalism and then, because they are influential today, trace the origins of their ideas back to where they first started and say, that is where the origins come from.

Melissa Naschek: How important was this debate in establishing or causing neoliberalism?

Vivek Chibber: Not even the least bit. It was largely irrelevant to it. In other words, even if this debate had never happened, even if Milton Friedman had not existed, even if Hayek had not existed, you would have still had a turn to neoliberalism, and that’s the key. This is what the Left needs to understand.

This does not in any way invalidate the intellectual project of tracing those ideas. It’s intellectually interesting. It’s an interesting fact that those ideas had been around for forty years and had no impact on policy. It’s one thing for historians to trace these ideas back to their origin, as some have done with great skill; it’s quite another to say that it was the ideas themselves that in the 1970s and ’80s caused the turn to neoliberalism.

Now, it’s an easy mistake to make because when the change came, the change was justified with a highly technical economic apparatus, and people like Friedman were given the stage to say not just that these policies are desirable for political reasons, but that they make a lot of economic sense and that it’s rational to do it this way. That gives you the sense, then, that it’s these particular individuals and their intellectual influence on the politicians that makes the politicians make the changes.

But in fact, the order of causation is exactly the other way around. It’s the politicians who make the changes based on criteria that have nothing to do with the technical sophistication of the ideas or their scientific validity. They make the changes because of the political desirability of those changes, and then they seek out advice on a) justifying the changes so that the naked subservience to power is not visible or obvious — it makes it look like it was done for highfalutin’ reasons — and then b) of course, they do legitimately say, “OK, now that we’re committed to this, help us work it out.”

Melissa Naschek: Right, especially because as long as you’re still in capitalism, you’re going to be facing constant economic crises. Even if you’re instituting a new regime, you’re going to be constantly looking for new solutions.

Vivek Chibber: Yeah. And even short of crises, you’re going to look for ways of making the policies work smoothly. And you’re going to look for ways of coming up with the correct balance of instruments and policies within them. So you bring in Milton Friedman or you bring in somebody else.

Surface level, it looks like what’s driving the whole thing is these ideas. But I said to you that the ideas actually have no role to play in the turn itself. So that brings up the question, what does? Why did they do it then?

I just said a second ago that what drove it was political priorities, not intellectual feasibility. Well, what were the political priorities? Who were the politicians actually listening to? Ideas can matter, but they have to be made to matter.

There are only two key players when it comes to policy changes of this kind. The key players are the politicians, because they’re the ones who are pulling the levers. But then, it’s the key constituency that actually has influence over the politicians.

The least important part is intellectuals. You might say voters have some degree of influence, but really, in a money-driven system like the United States, it’s investors, it’s capitalists — it’s big capital. They’re the ones who are pushing for these changes.

That means that if you want to understand where neoliberalism comes from, or rather if you want to understand why it came about, the answer is, it came about because capitalists ceased to tolerate the welfare state.

Now, why did they tolerate the welfare state at all? Most people on the Left understand the welfare state was brought about through massive trade union mobilization and labor mobilizations and was kept in place as long as the trade union movement had some kind of presence within the Democratic Party, within the economy more generally, because those unions were powerful enough, employers had to figure out a way of living with them. Part of what they did to live with the trade unions was to agree to a certain measure of redistribution and a certain kind of welfare state. As long as that was the case, politicians kept the welfare state going.

This is why, in that era from the mid-1930s to the mid-1970s, Keynesianism or the economics of state intervention of some kind was the hegemonic economic theory. The theory became hegemonic because it was given respectability by virtue of the fact that everybody in power was using it. Because it’s being used by people in power, it has great respectability.

This is why, in the 1950s and ’60s, Milton Friedman was in the wilderness — same guy, same ideas, equally intellectually attractive, equally technically sophisticated, but he was in the wilderness.[...]

That little story tells you something. What it says is ideas that are going into the halls of power go through certain filters. And the filters are essentially the policy priorities that the politicians have already committed to. Now, what creates those priorities? It’s the balance of class power. Social forces are setting the agenda.

If the social forces, that is, say, trade unions and community organizations, have set the agenda for politicians such that they think the only rational thing to do is to institute a welfare state, then they will bring in economists who help them design a welfare state. That gives intellectual influence to those economists. Economists who are saying “Get rid of this whole thing” are cast out into the wilderness. That’s how it works. [...]

Melissa Naschek: How do theories that focus on this notion that ideas and thinkers caused neoliberalism suggest a certain set of solutions to neoliberalism?

Vivek Chibber: It’s a really good point and a very good question. It gets us back to the issue of, why should we care about this? What does it matter if you misunderstand the factors that go into a change in economic policies? What does it matter if you wrongly attribute influence to ideas, let’s say, over material interests? Well, it can lead you to propose wrong solutions.

This is a very good example of that. If you think that what’s behind dramatic shifts in policy is the influence of ideas per se, the brilliance of those ideas, then, if you think that neoliberalism is a catastrophe and we need to go back to social democracy, then your solution is going to be, “Let’s get some economists or political scientists who are really good theorists of social democracy and give them publicity — put them in newspapers, give them lots of op-eds, maybe try to get them a meeting in the White House or something like that.”

But if you think that what’s really driving these changes is the social balance of power — the power balance between capital and labor, between rich and poor — then you won’t pour your energies into getting the right people entrée into the halls of power. You’ll pour your energies into changing the class balance. That’s the difference between how people on what used to be called the Left approach these issues and the way in which mainstream theorists and thinkers approach these issues.

This kind of ideas-based analysis leads to a great man version of policy change, whereby you get the right person in the right place with the right ideas. And then, counterfactually, the reason we don’t have a desired change is that we haven’t managed to get the right people with the right ideas into the right places. That’s a great man theory of historical change.

But if you are a socialist on the Left, you know ideas get their salience because of the background conditions, the social context, and the power relations. They don’t get their influence because of simple brilliance, at least when it comes to politics. Science is a different matter. But in politics, they get their influence because some agency with social power gives them the platform.

Without that, I mean, if the power of ideas mattered and if the correctness mattered, we’d already have a social democratic government, and we would have had one for decades. Because not only are these ideas correct, we think in our arrogance, but they appeal to everybody.

Zohran Mamdani’s ideas, Bernie Sanders’s ideas, are not radical, however constantly the New York Times hammers that these are radical fringe ideas. They’re as mainstream as can be. They are ideas that appeal to the majority.

Why do they not have entrée? Why do they not have political influence right now? It’s because the balance of class power is such that even though they appeal to the largest number of people, those people have no political organization. They have no way of effectuating their demands. And so, their demands as encapsulated in Sanders and Mamdani don’t have a lot of political influence.

So ideas can matter, but they have to be made to matter.

by Melissa Naschek with Vivek Chibber, Jacobin | Read more:
Image: Dirck Halstead / Getty Images

Wednesday, April 22, 2026

Humanism in a Posthumanist Age

Should we be surprised that Oxford University Press picked rage bait as the 2025 Word of the Year? Defining the compound noun as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive,” the chair of the selection committee explained that the point of the annual exercise is “to encourage people to reflect on where we are as a culture, who we are at the moment, through the lens of words we use.”

Quite clearly, we are not in a very nice place. When anger is the prime motivator, you can be sure that it feeds on other dark emotions, including fear, suspicion, and resentment. The ubiquity of anger also speaks to the widespread demoralization of late-modern society, evident in the grim statistics pertaining to depression, addiction, suicide, and other deaths of despair. Perhaps the darkest fear bubbling up through our culture is that humans themselves are replaceable—or at least in need of drastic biotechnological upgrading if they hope to keep up with the cool efficiency of their machines. The causes of our unhappy cultural condition are, as social scientists say, multifactorial and overdetermined, with some researchers placing the onus on our highly unsocial social media and others more sensibly arguing that the new media only amplify and reinforce trends and pathologies long in the making. We see the signs everywhere, from the decay of basic good manners and civility to gratuitously violent and crude entertainments to the mistreatment of working people as disposable units of production to actual acts of unspeakable cruelty inflicted on those we deem to be lesser or other, particularly those strangers whom we were long ago enjoined to treat as our neighbors. If we were still capable of the emotion, we would be ashamed of ourselves. But the loss of shame is another hallmark of our current condition, and as columnist George Will recently observed, “A nation incapable of shame is dangerous, not least to itself.”

What we are witnessing today is less a degradation of politics—though it is also that—than a meta-political and profoundly cultural swerve away from the informing humanist idealism of the modern liberal democratic project. When Tomáš Garrigue Masaryk, the first president of Czechoslovakia, defined democracy as “the political form of the humane ideal,” he was emphasizing the inseparability of a set of political practices and institutions from a broader humanizing effort drawing on the richest ethical, intellectual, and religious traditions of the West. More important than the rivalrous claims of the partisan participants in a liberal democracy was a shared national commitment to the various goods that sustain a decent human life. Give that a thought. Masaryk was no proto-globalist, no “We Are the World” sentimentalist. He regarded the democratic nation as the indispensable crucible in which the humane ideal could be practically instantiated in the treatment of one’s fellow citizens. Though no utopian, he believed the betterment of the shared human condition was the raison d’être of the democratic nation-state.

Masaryk’s idealism, and the exuberant hopes of Czechoslovakia’s fledgling democracy, were for a time snuffed by another variety of nationalist whose goose-stepping troops marched into the Sudetenland in 1938, first annexing that rich northwestern region before moving on to absorb most of the remainder of the country within a year. Far from advancing the humane ideal, this conquering zero-sum nationalist would have no qualms about eliminating some three hundred thousand undesirable elements from his newly acquired territory. Here and elsewhere, he saw it as fundamental to his project of purifying the Reich, ridding it of human garbage, and making Germany great again.

Lessons learned, lessons forgotten. So today, when we utter words such as humanitarian, humane ideal, humanism, we may have trouble suppressing the ironic smirk or the dismissive yawn. Fine words, but what is their real purchase when so much is being done to diminish, transform, transcend, and even surpass the merely human? [...]

The cost of the obliteration of the humane ideal in our time is incalculable. The stakes are nothing less than civilizational—meaning the civilization of the West and other civilizations that value the sacredness and inviolability of the individual human person. The challenge is ultimately about resisting those authoritarians who, now empowered by the most advanced articulation of the Machine, aim to crush the merely human for the sake of absolute power and control. 

by Jay Tolson, The Argument |  Read more:
Image: Human Figure (detail), 1921, by Vilmos Huszár (1884–1962)

Tuesday, April 21, 2026

Elon vs. Altman: What Their Infrastructure Stacks Reveal About Power

Everyone’s obsessed with the Elon Musk vs. Sam Altman lawsuit. Ronan Farrow’s 18-month investigation. Molotov cocktails. Sister allegations. A $134 billion legal battle over OpenAI’s soul.

But they’re all asking the wrong question.

It’s not “who’s the good guy?” It’s not “who should we trust with AI?” It’s not even “who’s going to win the lawsuit?”

The right question is: What does their infrastructure stack reveal about their actual theory of power?

Because here’s the thing about tech founders: They lie constantly. To investors, to users, to regulators, to themselves. But their products don’t lie. The infrastructure they choose to build, what they spend billions of dollars actually constructing, reveals their real theory of survival.

Don’t listen to what they say. Look at what they build.

Elon Musk and Sam Altman are building for completely different endgames. And understanding the difference tells you everything you need to know about the actual stakes of their conflict.


Elon’s Stack: Collapse-Proof Sovereignty

Let’s start with Elon, because his infrastructure stack is massive and most people don’t understand how comprehensive it actually is. Every single piece is designed to function when legacy systems fail. This isn’t paranoia; it’s strategic architecture.

Tesla: Energy Independence

Solar panels. Powerwall battery systems. Electric vehicles. Supercharger network.

Translation: You don’t need the electrical grid. You don’t need oil. You don’t need gas stations. You don’t need the energy sector’s supply chains. If the grid goes down (natural disaster, cyberattack, economic collapse, political breakdown), Tesla owners keep running. Solar generates power. Batteries store it. Vehicles consume it. The entire energy loop is self-contained. That’s not about environmentalism. That’s about Energy Sovereignty.

Starlink: Communications Independence

Over 5,000 satellites in low Earth orbit. Global internet coverage. Bypasses all terrestrial infrastructure.

Translation: You don’t need undersea fiber optic cables. You don’t need cell towers. You don’t need ISPs. You don’t need government-controlled telecommunications infrastructure. If a government shuts down the internet (as Iran did during protests, or Russia during the Ukraine invasion), Starlink still works. You have communications capability independent of state control. That’s not about rural broadband. That’s about Information Sovereignty.

SpaceX: Logistics Independence

Reusable rockets (Falcon 9, Falcon Heavy, Starship). Cheapest launch cost per kilogram in human history. Point-to-point Earth transport capability. Orbital manufacturing potential.

Translation: You control access to space. You can move cargo anywhere on Earth in under an hour. You can put satellites into orbit cheaper than any nation-state. You can potentially manufacture things in zero-gravity that are impossible to make on Earth. If traditional supply chains break (shipping disrupted, airspace restricted, borders closed), SpaceX can still move things. Anywhere. Fast. That’s not about exploration. That’s about Logistics Sovereignty.

The Deeper Play: Rockets Are Mythos

The Mars colonization narrative isn’t just a business plan. It’s a founding myth.

Think about how legitimacy works:

Ancient kings claimed “Divine Right”: they were chosen by the gods to rule.

Democratic leaders claim “Popular Mandate”: they were chosen by the people through voting.

Elon is building something different: “Cosmic Mandate.” He’s the one saving humanity by making us multi-planetary. “I’m building the infrastructure to preserve human consciousness across multiple worlds.”

If you’re the person who saved the species from extinction by establishing a backup civilization on Mars, you’re not just a CEO. You’re not even just a political leader. You’re a Civilizational Founder. Like the people who established Rome, or the American republic, or any nation-state that becomes the foundation for centuries of subsequent history. Mars isn’t the goal. It’s the mythology that justifies rule. The founding story that makes everything else legitimate. 

[more]...

This is “Post-State Capability”: the ability to function and to maintain power when traditional state infrastructure is unavailable, hostile, or collapsed.

Elon’s not hoping for collapse. But he’s not betting against it either.

His thesis is simple: “The system will fragment. Build infrastructure that makes you powerful in the aftermath.” If collapse happens, he owns:
  • Energy systems
  • Communications networks
  • Logistics capability
  • Information channels
  • Labor (automated)
  • The founding myth (savior of humanity)
That’s not a business portfolio. That’s a blueprint for post-state power.


Altman’s Stack: Acceleration-Dependent Fragility

Now let’s look at Sam Altman’s infrastructure.

OpenAI/ChatGPT: Centralized, Grid-Dependent, Fragile

OpenAI is building toward Artificial General Intelligence through massive-scale computing infrastructure. Current commitments: $1.4 trillion in data center buildout over 8 years.

This requires:
  • Stable energy grid (data centers consume gigawatts → entire power plants’ worth of electricity)
  • Chip manufacturing (NVIDIA GPUs, TSMC fabrication → Taiwan and South Korea must remain stable and accessible)
  • Cooling infrastructure (water, HVAC systems, constant temperature regulation)
  • Fiber optic networks (global connectivity, low-latency communication)
  • Capital markets (functioning financial system to fund trillion-dollar buildouts)
  • Regulatory stability (permitting, zoning, environmental compliance, AI development allowed)
Notice the dependency structure?

Elon’s stack works when systems fail. Altman’s stack requires every system to keep working simultaneously.

The Vulnerability Comparison

Elon without electrical grid:
  • Still has Tesla solar panels generating power
  • Still has Powerwall batteries storing energy
  • Still has Starlink satellites providing internet
  • Still has rockets for logistics
  • Still has underground tunnels for transit
  • Still has robots for labor
  • Still powerful
Altman without electrical grid:
  • Data centers go dark immediately
  • ChatGPT stops responding
  • Training runs halt
  • No product, no revenue, no value
  • Completely powerless
The contrast is stark. Elon’s infrastructure is distributed and resilient. Altman’s infrastructure is centralized and fragile.

What Does Altman Actually Want?

So if Altman’s building such a vulnerable stack, what’s the theory?

Look at what he’s actually building with AI. Not what he says but what he builds.

He’s NOT focusing on:
  • AI companionship (even though Character.ai and Replika prove this is hugely profitable)
  • Entertainment AI (even though this is the biggest consumer market)
  • Social AI (even though emotional dependency creates the strongest lock-in)
He’s focusing on:
  • AI for scientific research (drug discovery, materials science, physics)
  • AI for productivity (coding assistants, automation, reasoning)
  • AI for problem-solving (complex systems, coordination challenges)
This is the tell. He’s explicitly said he was surprised people want emotional bonds with ChatGPT, and he’s not leaning into it.

Why?

by MythcoreOps |  Read more:
Images: uncredited

Thursday, April 16, 2026

Ask Mike: Mike Monteiro’s Good News

This week’s question comes to us from Tuan Son Nguyen:

How do you form a circle of like-minded people to keep your sanity when so many horrible things are happening?

I’m not exactly sure when this happened, or what triggered it. But I remember it was a nice day. Maybe it was a nice day after a few rainy days, or a few cold days, or maybe I was just up in my feelings. But I got home, locked up my bike, and instead of heading up the stairs to our apartment, as I would normally do, I headed out to the dogpark. The dogpark is a block away, and I visit regularly with my dog so he can do all his dog things. We’re regulars. But this time I didn’t have my dog and I had no need to go to the dogpark. I just wanted to. I wanted to go sit on one of the benches and soak up what was left of a nice day. Which is what I did.

Here’s the thing about the dog park, which I’ve written about before. It’s dog-centric. Everyone knows your dog’s name. Everyone knows whether your dog can or cannot have treats (always ask if you don’t know). Everyone’s relationship at the dogpark, with a few exceptions, revolves around the dogs. And that’s been true for as long as we’ve been taking our dog (who is now amazingly close to eighteen years old) to the dog park. This is by design.

When everyone is brought together by geography and your dog’s need to take a shit, it’s in your best interest to get along with the people who end up in that shared public space. You wanna keep conversation light. You discuss the weather. If someone is wearing a local team hat, you take it as a sign to elevate the conversation to “did you see the game?” or “this is our year.” (It’s not.) You mention new restaurants or cafés in the neighborhood, or sadly more appropriately these days—you mention restaurants or cafés that have recently shuttered. But mostly you talk about the dogs.

“Did Grumble get a haircut today?”

“I like Mojo’s Pride kerchief.”

In general, it’s best to avoid more complicated issues with your neighbors, which is why I stay off Nextdoor, which is just an online Klan rally. Once you know certain things about your neighbors, you’re stuck knowing them, and you realize how much time you spend around them holding a bag of dog shit in your hand. And the temptation becomes too strong.

This is how peace was kept in the dog park for years. The occasional flare-up for politics, of course, the occasional flare-up for world issues, as well as local issues. Which will happen whenever folks get together, which is good. But those conversations would eventually subside. A regression back to the mean. Back to the dogs.

But neighborhoods are living, changing things. On the day I decided to just go sit in the dogpark without my dog (he was still at work), I realized other people were just sitting there in the dogpark. Yes, some of them had dogs, but some didn’t. They were just sitting there, sometimes talking to one another, sometimes not. Literally in a circle because of how the benches are laid out. And then other people started coming out and wandered over. To be clear, I’m not saying I instigated any of this. If anything, we were all getting pulled in by some cosmic need to be among other people. And for the past few weeks, this has been a regular occurrence. Every day I come home, and I walk to the dog park and sit with my neighbors. Yes, we talk about our dogs, but we also check in on each other, we vent about our day, we trash talk. Sometimes people bring snacks. Yes, we talk about the state of things in the world, which is awful, but having this small community of people that we can hold peace with makes it… well, not less awful. But it makes a difference knowing there are other people on the spaceship with us.

Are we like-minded? We’re like minded in some things! For one, we all like sitting in the park in the evening, and that’s nice. We all love our neighborhood. We seem to all like donuts. And dogs. And a little bit of a breeze coming off the mountain. We all believe there’s one neighbor that goes too fucking hard. We all believe in shared spaces, or at least we believe in this shared space. I think we also believe that it’s important to interact with each other with a certain level of kindness. For example, one of our neighbors recently had knee surgery and everyone’s bringing her food. Another neighbor is out of town and there are a few neighbors moving her car around so she doesn’t get tickets when the street cleaning happens. We watch each other's dogs when we’re out of town, or working a long shift at work. We lend records that better be returned in good shape soon. (This one might be a little targeted.) We hold vigils when a beloved dog leaves us. We commiserate together when someone loses a job, and we celebrate together when a new job is procured. We say goodbye when someone moves away, and we widen the circle when a new person moves in.

Are we like-minded in all things? Fuck no. Way too many of my neighbors still own Ring cameras. Way too many of my neighbors still believe their “I got this before Elon went crazy” bumper sticker is an act of resistance. Way too many of my neighbors still believe Gavin Newsom is the solution to something. (Gavin Newsom is a piece of shit.) And more than one of my neighbors have sat down next to me and told me that the Democrats need to give a little bit on immigration, not realizing they were sitting next to an immigrant. So, no, we are not like-minded in all things. But I do believe there is a shared core of decency to all my neighbors, and within that core there may be unexplored areas that need to be explored a little bit. We all grew up believing certain things, things that we hold to be sacrosanct, that could use a little further exploration. And I’ve been able to have a few of those conversations with people, and they’ve been able to have some with me. It’s easier for people to have those conversations when they’re coming from a place of common decency.

That said, not all differences are equal. I don’t sit with Nazis. I don’t sit with terfs. We all avoid the zionist lady...

In general, I think the idea of “like-minded” is overrated and a little boring. Sitting with people who agree with everything you agree with feels great for about five minutes. Then (and maybe this is because I am from Philadelphia) I want to fight. I want to argue. I want to argue about who the most influential NBA player of our lifetime was, and why it was Allen Iverson. I want to argue about the best BeyoncĂ© album, and why it was Lemonade. I want to argue about why the park needs public restrooms, and yes I know people will use them—that’s the fucking point, man! I want to argue about which of our cafĂ©s makes the best coffee. (Trick question. It’s me. I make better coffee than any of them.) I want to argue about street parking. My god, I love arguing with my neighbors about street parking. (Why should the city be providing storage for your private property? Get a bike. Ride the bus.) Street parking is always guaranteed to start a fight in the park. And I love having those fights with my neighbors. I think they honestly bring us closer together. (They may disagree.)

But no, we will not have any arguments about who belongs in the park, because something that every one of my neighbors agrees about is that if you are in the park you belong in the park. If you are in the park, you get the same privileges as everyone else in the park. And if you want to join the community circle in the park we will make room for you. And also, if shit starts coming out of your mouth you will be called on it.

Everything is shit. And when everything is shit, minor differences become less important than the things we hold in common. We’ve seen this in LA. We’ve seen this in Chicago. We’ve seen this in the Twin Cities. Punks fighting next to suburban dads. Wine moms fighting next to anarchists. Socialists fighting next to librarians. (I’m kidding here, all librarians are socialist. I love librarians.) We see this when people come out to protect their neighbors. We see this when people yell at the ICE goons. And someday we will see this when we put all these fascists on trial. Roomfuls of people, who may not agree on much, but they agree on this:

The shittier they treat us, the more they bring us together.

***
This week’s question comes to us anonymously:

What would you say to someone who proclaims, “I want to be a donut maker,” but has never actually made a single donut in their life?

You say “That’s awesome. What can I do to help?”

Look, I’m going to be totally honest with you. Every week, I go through my bin of newsletter questions, looking for something I want to answer, and I get incredibly depressed. The vast majority of them are from people getting laid off, or being in their sixth month of looking for work, or justifiably freaking out because they heard layoffs are coming to their company. It’s a world of despair and a world of shit which, sadly, only appears to be picking up steam.

Meanwhile, half the people I know are wondering how they’re going to pay their rent and go to the doctor, and the other half are proclaiming this the “Era of Abundant Intelligence.” (For who?!?) All they need is half the world’s money (the half not going to bombing school children), half the world’s land, half the world’s water, all of the world’s microchips, and they will eventually deliver [checks notes] something in exchange for all this, just don’t ask them what because it’s really hard to say, but it’s right around the corner.

(I promise this newsletter will turn positive soon.)

Meanwhile, if I am stupid, sad, or desperate enough to go on LinkedIn for a minute, it’s a sea of people writing letters in praise of the leopard, proclaiming it has always been their dream to work for the leopard, asking the leopard not to eat their face, or hoping to get one of the few jobs at the face-eating factory where they feel like they’ll be safe from the face-eating leopard, which of course they’re not. So, yes, there are a fair number of questions in my inbox from people upset that the leopard ate their face even though they were happy to help the leopard eat everyone else’s face.

(Or I may spiral out of control.)

Seriously though, era of abundant intelligence for who?!?

Let’s talk about your friend who wants to be a donut maker. Because they may be the smartest person here. First off, everyone loves a donut. Secondly, no one has ever reacted badly to the news that someone is making donuts. But most importantly for us today—not a single human being has ever been born with the ability to make donuts. Like all skills, you learn it, you do it badly for a while, then you do it better. Some people will get amazing at it, and most people will reach some level of competency. So while there’s an incredibly slim chance that your friend will become the world’s greatest donut maker, there’s an incredibly high likelihood that your friend will learn how to make good, even great, donuts. Which you will benefit from. And which you should be incredibly grateful for.

For the last week, Erika and I have been glued to Artemis updates on the NASA site, because it’s become such a joy to watch people be good at something, and enjoy doing it, and all of this while being incredibly human about it. Seriously, these people sound positively giddy to be in space! And they’re rocking it. It feels like such a luxury to watch these people do their thing, and do it well, and with joy, at a time when we’re surrounded by a government that is very bad at what they do, and does it in the cruelest way possible, and an industry that’s trying to convince us that we are incapable of doing the things we love, and we’re doing them inefficiently anyway. (Because the problem was always that we weren’t breaking the world fast enough.)

Competence should not be a luxury.

Competence should not be something that we look at with nostalgia.

We’re lucky that we get to watch the Artemis crew do their thing, which they can do because they practiced doing it a thousand times. And you know that they made a lot of bad donuts, before they finally made a good donut. You know there was a Day One of learning to be an astronaut, just as there’s a Day One of learning to be a donut maker, or learning to be a designer, dentist, farmer, or teacher. And the only way to get to Day Thousand is to start at Day One, do it 999 more times, and get not just better, but confident enough that you decide you can do it in the confines of space. Confident enough that you can say to yourself and to everyone around you that you want to be a donut maker.

Meanwhile a friend who’s deep into a job interview is being asked to bring a passport to their next scheduled remote interview because their skillset shows a level of competence that has the potential employer worried they might be interviewing a deepfake. With one hand they force the slop down our throats. With the other hand they defend against us using the tools against them. Human competence has become a source of distrust. If you don’t trust the results of the tool, stop demanding we use it.

The era of abundant intelligence is actually the era of abundant theft. First they stole your work, then they stole the confidence you needed to do the work. This is violence.

Your friend is going to make some pretty crappy donuts to start. That’s to be expected. And then the day will come when they’ve gotten all the crappy donuts out of their system and they’ll hand you a good donut. I think you’ll be genuinely happy for your friend when this happens. And for yourself, which is fair.

But can’t you just get donuts at the corner bodega or at the donut shop? Yes, you can. And they are good. Donuts are good at every price point. From the waxy little chocolate ones at gas stations, to the funky ones you can buy from someone with a liberal arts degree and a polycule at Voodoo Donuts in Portland, to the bougie made-to-order (lord) donuts at Coffee Movement in SF, all donuts are good. (Bob’s Donuts are the best.) But your friend doesn’t want to buy donuts. Your friend wants to be a donut maker. And that is a very different thing.

Human beings crave making things. We make things out of wood. We make things out of wool. We make things out of steel. We make things out of folded paper. We make things out of flour, salt, and sugar. We make zines. We 3D-print whistles. We draw. We paint. We make instruments out of brass so we can make sounds. There is no more flexible word in the English language than “make.” We can make donuts, we can make plans, we can make someone dinner. We can make our cities more walkable. We can make bike lanes. We can make it around the moon. We can even make up our minds. Making is an act of sharing, it’s an act of using our joy, our labor, our expertise, in the service of adding to what’s here. Hopefully, in the service of improving what’s here. We make things so that we can bond with others.

And while the sloplords might reply to this by telling me that they enjoy making money, I’d happily reply that the making is actually done with our labor. It’s not the making that drives them, it’s the theft of labor. The theft of joy. And now the theft of competence. You can hear it in their language. They do not make. They disrupt. They extract. They colonize. Their joy is not in the giving, but in the taking. They are so broken, their only recourse is to attempt to break everything else around them. In their psychosis, they call this abundance.

I know very little about your friend; in fact, all I know is that they want to be a donut maker and they’ve never made a single donut in their life. From this I can safely extrapolate that your friend isn’t currently a donut maker. I can also reasonably extrapolate that whatever your friend is currently doing isn’t what they want to be doing. And from there I can go out on a limb a little bit, from extrapolation to conjecture, and guess that your friend isn’t happy doing what they’re currently doing. Happy people don’t generally dream about doing something else.

Turns out the Era of Abundant Intelligence isn’t coinciding with an Era of Abundant Happiness.

And here’s the thing about donuts: you want one. And the more I mention donuts the more you want one. Maybe you’re thinking of a custard donut, or maybe you’re thinking of a pink frosted donut with sprinkles, or maybe you’re thinking of an old-fashioned, or maybe you’re thinking of a gluten-free donut because everyone deserves donuts, but no one has ever had to be convinced to eat a donut. (The harder part is stopping, trust me.) Donuts are not inevitable, they are anticipated. When you make something you love, and other people also love, and it brings about as much joy as a donut does, there’s very little convincing that needs to happen. No one needs to declare that it’s the Era of Abundant Donuts because it’s apparent anytime you walk into a donut shop. The result of human competence, human labor, human joy, all laid out on baking sheet after baking sheet. Boston Cream. Glazed. Powdered. Chocolate Sprinkle. Jelly. Crullers. These are real. They exist. And they’re fucking delicious.

Trust that we are all closer to a good donut shop than we will ever be to AGI.

Trust that we are all closer to a good donut shop than we will ever be to AGI, and we should be taking full advantage of what is close to us, and what is possible, and what brings us joy. And that when the sloplords tell us that the thing we need might be right around the corner, maybe consider that they’re right after all. If there’s a donut shop around the corner.

We are in the Era of Abundant Donuts. If we want it. We should want it. Because a donut is amazing, and it’s right there for the taking.

I hope your friend succeeds in becoming a donut maker. I hope their donuts are amazing. I hope there are lines around the clock for their donuts. I hope you end up helping them at the donut shop and loving it so much that you decide you want to become a donut maker too. Or maybe not. Maybe it’s not the donuts that get your attention as much as it is your friend’s joy. Maybe you decide you want the joy, but your joy is found in something else. Maybe it’s making tacos, or opening a bookstore, or knitting, or opening a bar, or designing shoes.

I hope that when this happens someone says “That’s awesome. What can I do to help?”

by Mike Monteiro, Good News |  Read more:
Images: Artemis donuts by Mark Jacquet, Engineer at NASA Ames Research Center; and uncredited
[ed. Don't we all need good news. See also: if this is what i'm getting left behind from, just leave me behind (rax king).]

A Monkey Goes to Court

What happens when something that isn't human makes art? A series of bizarre court battles trying to answer that question centred around this image. Ultimately, it will influence what ends up on your screens and headphones forever.

It was a humid day in the Indonesian jungle, and photographer David Slater was following a group of crested black macaques, a critically endangered and particularly photogenic species of monkey.

He wanted pictures, but the macaques were nervous. So, Slater put his camera on a tripod with autofocus on and a flashbulb, allowing the monkeys to inspect it. Just as he hoped, they started playing with his gear. Then one of them reached up and hit the shutter button while staring directly into the lens. The result was a selfie, taken by a monkey. And its toothy grin inadvertently raised a basic question that sits at the heart of technology.

What came next was nearly a decade of legal battles around an unusual dispute: when something that isn't human makes a work of art, who owns the copyright? Thanks to AI, that's become an issue with some deep implications for modern life – and what it means to be human.

One of the most alarming predictions about AI is that corporations will replace the human-created music, movies and books you love with an endless stream of AI slop. But the US Supreme Court just upheld a decision about AI and copyright which suggests that future may be harder to pull off than the tech industry hoped. The path is still uncertain, and right now, the legal system is the site of a battle that will shape what you read, watch and listen to for the rest of your life. It all traces back to that one little monkey.

Monkey business

The monkey took that selfie in 2011. For a brief, blissful period, Slater enjoyed global attention from the picture, but the troubles began when someone uploaded the photo to Wikipedia, from where it could be downloaded and used free of charge. He asked the Wikimedia Foundation to take it down, arguing it cost him £10,000 (worth about $13,400 today) in lost sales. In 2014, the organisation refused, arguing the photo was in the public domain because it wasn't taken by a person.

The row prompted the US Copyright Office to issue a statement that it would not register work created by a non-human author, putting "a photograph taken by a monkey" first in a list of examples. (Slater didn't respond to interview requests, but his representation arranged for the BBC to use the photo in this article.)

The story gets weirder. Soon after, the advocacy group People for the Ethical Treatment of Animals (Peta) sued Slater on behalf of the monkey. The case argued all proceeds from the photo belonged to the macaque that took the picture, but it was really seen as a test case, an attempt to establish legal rights for animals. After four years and multiple court battles, a San Francisco judge dismissed the case. The judge's reasoning was simple: monkeys can't file lawsuits.

"It was kind of the biggest public conversation piece on this topic," says intellectual property lawyer Ryan Abbott, a partner at Brown, Neri, Smith and Khan in the US. "At the time it was very much about animal rights. But it could have been a conversation about AI." [...]

The missing author

When the US passed the Copyright Act of 1790, we only had to deal with things like writing and drawing. But the invention of photography decades later raised troubling questions. You could argue the camera does the real work; a person just hits a button.

"The Supreme Court looked at this and said, you know, we're going to interpret this purposively," says Abbott, who represented Thaler in a case against the Copyright Office. "Copyright was designed to protect the expression of tangible ideas. And that's broad enough to cover something like photography."

The same logic could apply to AI. "What you really have in photography is exactly the same thing you have here. You have a person issuing instructions to a machine to generate a work," he says. "What's the difference between that and me asking ChatGPT to make an image?"

by Thomas Germain, BBC | Read more:
Image: David Slater/Caters News/BBC
[ed. More issues than you might imagine.]