Showing posts with label Philosophy. Show all posts

Tuesday, February 24, 2026

Child’s Play

Tech’s new generation and the end of thinking

The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.

This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: TODAY, SOC 2 IS DONE BEFORE YOUR GIRLFRIEND BREAKS UP WITH YOU. IT'S DONE IN DELVE. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: NO ONE CARES ABOUT YOUR PRODUCT. MAKE THEM. UNIFY: TRANSFORM GROWTH INTO A SCIENCE. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read WEARABLE TECH SHAREABLE INSIGHTS did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to PROMPT IT. THEN PUSH IT. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?

Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:
HI MY NAME IS ROY
I GOT KICKED OUT OF SCHOOL FOR CHEATING 
BUY MY CHEATING TOOL
CLUELY.COM
Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.

What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.

by Sam Kriss, Harper's |  Read more:
Image: Max Guther
[ed. Seems like we're already creating artificial humans. That said, I have only the highest regard for Scott Alexander, one of the people profiled here. The article makes him sound like some kind of cult leader or something (he's a psychiatrist), but he's really just a smart guy with a wide range of interests that intelligent people gravitate to (also a great writer). Here's his response on his website ACX:]
***
I agreed to be included, it’s basically fine, I’m not objecting to it, but a few small issues, mostly quibbles with emphasis rather than fact:
1. The piece says rationalists believe “that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch”. The Harper’s fact-checker asked me if this was true and I emphatically said it wasn’t, so I’m not sure what’s going on here.

2. The article describes me having dinner with my “acolytes”. I would have used the word “friends”, or, in one case, “wife”.

3. The article says that “When there weren’t enough crackers to go with the cheese spread, [Scott] fetched some, murmuring to himself, ‘I will open the crackers so you will have crackers and be happy.’” As written, this makes me sound like a crazy person; I don’t remember this incident but, given the description, I’m almost sure I was saying it to my two year old child, which would have been helpful context in reassuring readers about my mental state. (UPDATE: Sam says this isn’t his memory of the incident, ¯\_(ツ)_/¯ )

4. The article assessed that AI was hitting a wall at the time of writing (September 2025). I explained some of the difficulties with AI agents, but I’m worried that as written it might suggest to readers that I agreed with its assessment. I did not.

5. In the article, I say that I “never once actually made a decision [in my life]”. I don’t remember this conversation perfectly and he’s the one with the tape recorder, but I would have preferred to frame this as life mostly not presenting as a series of explicit decisions, although they do occasionally come up.

6. Everything else is in principle a fair representation of what I said, but it’s impossible to communicate clearly through a few sentences that get quoted in disjointed fragments, so a lot of things came off as unsubtle or not exactly how I meant them. If you have any questions, I can explain further in the comments.

Sunday, February 22, 2026

Embryo Selection Company Herasight Goes All In On Eugenics

Multiple commercial companies are now offering polygenic embryo selection on a wide range of traits, including genetic predictors of behavior and IQ. I’ve previously written about the methodological unknowns around this technology but I haven’t commented on the ethics. I think having a child is a very personal decision and it’s not my place to tell people how to do it. But the new embryo selection company, Herasight, has started advocating for eugenic societal norms that I find disturbing and worth raising alarm over. Because this is a fraught topic, I’ll start with some basic definitions.

What is eugenics?

Eugenics is an ideology that advocates for conditioning reproductive rights on the perceived genetic quality of the parents. Francis Galton, the father of eugenics, declared that eugenics’ “first object is to check the birth-rate of the Unfit, instead of allowing them to come into being”. This goal was to be achieved through social stigma and, if necessary, by force. The Eugenics Education Society, for instance, advocated for education, segregation, and — “perhaps” — compulsory sterilization to prevent the “unfit and degenerate” from reproducing.

A core component of defining “the unfit” was heredity. Eugenicists are not just interested in improving people’s phenotypes — a goal that is widely shared by modern society — but the future genotypic distribution. The genetic stock. This is why eugenic policies historically focus on sterilization, including the sterilization of unaffected relatives who harbor the genotype but not the phenotype. If someone commits a crime, they face time in prison for their actions, but under eugenic reasoning their law-abiding sibling or child is also suspect and should be stigmatized (or forcefully prevented) to keep them from passing on deficient genetic material.

A simple two-part test for eugenics is then: (1) Is it concerned with the future genetic stock? (2) Is it advocating for restricted reproduction, either through stigma or force, for those deemed genetically inferior?

Is embryo selection eugenics?

I have publicly resisted applying the “eugenics” label to embryo selection writ large and I continue to do so. Embryo selection is a tool and its use is morally complex. A couple can choose to have embryo screening for a variety of reasons ranging from frivolous (“we want to have a blue eyed baby”) to widely supported (“we carry a recessive mutation that would be fatal in our baby”), none of which have eugenic intent. Embryo selection can even be an anti-eugenic tool, as in the case of high-risk couples who have already decided against having children. If embryo selection technology allows them to lower the risk to a comfortable level and have a child they would otherwise have avoided, then the outcome is literally the opposite of eugenic selection: “unfit” individuals (at least as they see themselves) now have an incentive to produce more offspring than they would have. In practice, IVF remains a physically and emotionally demanding procedure, and my guess is that individual eugenic intentions — the desire to select out unfit embryos with the specific motivation of improving the “genetic stock” of the population — are exceedingly rare.

Is Herasight advocating for eugenics?


While I do not think embryo selection is eugenic in itself, like any reproductive technology, it can be wielded for eugenic purposes. The new embryo selection company Herasight, in my opinion, is advocating for exactly that. To understand why, it is useful to first understand the theories put forth by Herasight’s director of scientific research and communication Jonathan Anomaly (in case you’re wondering, that is a chosen last name). Anomaly is a self-proclaimed eugenicist [Update: Anomaly has clarified that this description was not provided by him and he requested that it be removed].

Prior to joining Herasight, Anomaly wrote extensively on the ethics of embryo selection, notably in a 2018 article titled “Defending eugenics”. How does Anomaly defend eugenics? First, he reiterates the classic position that eugenics is a resistance to the uncontrolled reproduction of the “unfit” (emphasis mine, throughout):
Darwin argued that social welfare programs for the poor and sick are a natural expression of our sympathy, but also a danger to future populations if they encourage people with serious congenital diseases and heritable traits like low levels of impulse control, intelligence, or empathy to reproduce at higher rates than other people in the population. Darwin feared that in developed nations “the reckless, degraded, and often vicious members of society, tend to increase at a quicker rate than the provident and generally virtuous members”
Anomaly goes on to sympathize with Darwin’s position and that of the classic eugenicists, arguing that “While Darwin’s language is shocking to contemporary readers, we should take him seriously”, later that “there is increasingly good evidence that Darwin was right to worry about demographic trends in developed countries”, and that we should “stop allowing [the Holocaust] to silence any discussion of the merits of eugenic thinking”.

Anomaly then proposes several potential eugenic interventions, one of which is a “parental licensing” scheme that prevents unfit parents from having children:
The typical response is for the state to step in and pay for all of these things, and in extreme cases to remove children from their parents and put them in foster care. But it would be more cost-effective to prevent unwanted pregnancies than treating their consequences, especially if we could achieve this goal by subsidizing the voluntary use of contraception. It may also be more desirable from the standpoint of future people.
The phrase “future people” figures repeatedly in Anomaly’s writing as a euphemism for the more conventional eugenic concept of genetic stock. This connection is made explicit when he explains the most compelling reason for supporting parental licensing:
The most compelling reason (though certainly not a decisive reason) for supporting parental licensing is that traits like impulse control, health, intelligence, and empathy have significant genetic components. What matters is not just that some parents are unwilling or unable to take care of their children; but that in many cases they are passing along an undesirable genetic endowment.
What are we really talking about here? Anomaly has proposed a technocratic rebranding of eugenic sterilization: instead of taking away your reproductive rights clinically, the state will take away your reproductive license and, if you still have children, impose “fines or other costs” (though Anomaly does not make the “other costs” explicit, eugenic sterilization is mentioned as an example in the very next sentence). How would the state decide who should lose their license? Anomaly explains:
For a parental licensing scheme to be fair, we would need to devise criteria that are effective at screening out only parents who impose significant risks of harm on their children or (through their children) on other people.
A fundamental normative principle of our society is that all members are created equal and endowed with unalienable rights. What Anomaly envisions instead is a society where the state can seize one of the most intimate of human freedoms — the right to become a parent — based on innate factors. How does the state determine whether a future child imposes significant risk on future people? By inspecting the biological makeup of the parents and identifying “undesirable genetic endowments” that will harm others “through their children”. This is a policy built explicitly on genetic desirability and undesirability, where those deemed genetically unfit are stripped of their rights to have children and/or fined for doing so — aka bog-standard coercive eugenics.

Today, Anomaly is the spokesperson for a company that screens parents for “undesirable genetic endowments” and, for a price, promises to boost their genetic desirability and their value to future people. It is easy to see how Herasight fits directly into the eugenic parental licensing scheme Anomaly proposed. Having an open eugenicist as the spokesperson for an embryo selection company seems, to me, akin to hiring Hannibal Lecter to do PR for a hospital, but perhaps Anomaly has radically changed his views since billing himself as a eugenicist in 2023?

Herasight (with Anomaly as first author) recently published a perspective white-paper on the ethics of polygenic selection, from which we can glean their corporate position. The perspective outlines the potential benefits and harms of embryo selection. The very first positive benefit listed? The “benefits to future people”. While this section starts with a focus on the welfare of individual children, it ends with the same societal motivations as classical eugenics: the social costs of the unfit on communities and the benefits of the fit to scientific innovation and the public good: [...]

When eugenics goes mainstream

Let’s review: eugenics has the goal of limiting the birthrate of the “unfit” or “undesirable” for the benefit of the group. Anomaly describes himself as a eugenicist and explicitly echoes this goal through, among other policies, a parental licensing proposal. Anomaly now runs a genetic screening company. The company recently published a perspective paper advocating for the stigmatization of “unfit” parents who do not screen. Anomaly, as spokesperson, reiterates that their goal is indeed eugenics — “Yes, and it’s great!”. With any other person one could argue that they were clueless or trolling; but if anyone knows what eugenics means, it is a person who has spent the past decade defending it.

I have to say I am floored by how strange this all is. My personal take on embryo selection has been decidedly neutral. I think the expected gains are limited by the genetic architecture of the traits being scored and the companies are mostly fudging the numbers to look good. As noted above, I also think a common use of this technology will be to calm the nerves of parents who otherwise would have gone childless. So I have no actual concerns about changes to the genetic make-up of the population or genetic inequality or any of the other utopian/dystopian predictions. But I am concerned that the marketing around the technology revives and normalizes classic eugenic arguments: that society is divided into the genetically fit and the genetically unfit, and the latter need to be stigmatized away from parenthood for the benefit of the former. I am particularly disturbed by the giddiness with which Anomaly and Herasight have repeatedly courted eugenics-related controversy as part of their launch campaign.

Even stranger has been the response, or rather non-response, from the genetics community. Social science geneticists and organizations spent the past decade writing FAQs warning against the use of their methods and data for individual prediction and against genetic essentialism. Many conference presentations and seminars start with a section on the sordid history of eugenics and the sterilization programs in the US and Nazi Germany, vowing not to repeat the mistakes of the past. Now, a company is openly advocating for eugenics (in fact, a company with direct connections to these social science organizations) and these organizations are silent. It is hard not to conclude that the FAQs and warnings were just lip service. And if the experts aren’t raising alarms, why would the public be alarmed?

by Sasha Gusev, The Infinitesimal |  Read more:
Image: Anselm Kiefer, Die Ungeborenen (The Unborn), 2002
[ed. With neophyte Nazis seemingly everywhere these days, CRISPR advances, and technocrats who want to live forever, it's perhaps not surprising that eugenics would be making a comeback. Update: Jonathan Anomaly, director of scientific research and communication for Herasight and whose articles I criticize here, responds in a detailed comment. I recommend reading his response together with this post. Anomaly’s role in the company has also been clarified. See also: Have we leapt into commercial genetic testing without understanding it? (Ars Technica).]

Tuesday, February 17, 2026

The Crisis, No. 5: On the Hollowing of Apple

[ed. No.5 of 17 Crisis Papers.]

I never met Steve Jobs. But I know him—or I know him as well as anyone can know a man through the historical record. I have read every book written about him. I have read everything the man said publicly. I have spoken to people who knew him, who worked with him, who loved him and were hurt by him.

And I think Steve would be disgusted by what has become of his company.

This is not hagiography. Jobs was not a saint. He was cruel to people who loved him. He denied paternity of his daughter for years. He drove employees to breakdowns. He was vain, tyrannical, and capable of extraordinary pettiness. I am not unaware of his failings, of the terrible way he treated people needlessly along the way.

But he had a conscience. He moved, later in life, to repair the damage he had done. The reconciliation with his daughter Lisa was part of a broader moral development—a man who had hurt people learning, slowly, how to stop. He examined himself. He made changes. He was not a perfect man. But he had heart. He had morals. And he was willing to admit when he was wrong.

That is a lot more than can be said for this lot of corporate leaders.

It is this Steve Jobs—the morally serious man underneath the mythology—who would be so angry at what Tim Cook has made of Apple.

Steve Jobs understood money as instrumental.

I know this sounds like a distinction without a difference. The man built the most valuable company in the world. He died a billionaire many times over. He negotiated hard, fought for his compensation, wanted Apple to be profitable. He was not indifferent to money.

But he never treated money as the goal. Money was what let him make the things he wanted to make. It was freedom—the freedom to say no to investors, to kill products that weren’t good enough, to spend years on details that no spreadsheet could justify. Money was the instrument. The thing it purchased was the ability to do what he believed was right.

This is how he acted.

Jobs got fired from his own company because he refused to compromise his vision for what the board considered financial prudence. He spent years in the wilderness, building NeXT—a company that made beautiful machines almost no one bought—because he believed in what he was making. He acquired Pixar when it was bleeding cash and kept it alive through sheer stubbornness until it revolutionized animation.

When he returned to Apple, he killed products that were profitable because they were mediocre. He could have milked the existing lines, played it safe, optimized for margin. Instead, he burned it down and rebuilt from scratch. The iMac. The iPod. The iPhone. Each one a bet that could have destroyed the company. Each one made because he believed it was right, not because a spreadsheet said it was safe...

This essay is not really about Steve Jobs or Tim Cook. It is about what happens when efficiency becomes a substitute for freedom. Jobs and Cook are case studies in a larger question: can a company—can an economy—optimize its way out of moral responsibility? The answer, I will argue, is yes. And we are living with the consequences.

Jobs understood something that most technology executives do not: culture matters more than politics.

He did not tweet. He did not issue press releases about social issues. He did not perform his values for an audience. He was not interested in shibboleths of the left or the right. [...]

This is how Jobs approached politics: through art, film, music, and design. Through the quiet curation of what got made. Through the understanding that the products we live with shape who we become.

If Jobs were alive today, I do not believe he would be posting on Twitter about fascism. That was never his mode. [...]

Tim Cook is a supply chain manager.

I do not say this as an insult. It is simply what he is. It is what he was hired to be. When Jobs brought Cook to Apple in 1998, he brought him to fix operations—to make the trains run on time, to optimize inventory, to build the manufacturing relationships that would let Apple scale.

Cook was extraordinary at this job. He is, by all accounts, one of the greatest operations executives in the history of American business. The margins, the logistics, the global supply chain that can produce millions of iPhones in weeks—that is Cook’s cathedral. He built it.

But operations is not vision. Optimization is not creation. And a supply chain manager who inherits a visionary’s company is not thereby transformed into a visionary.

Under Cook, Apple has become very good at making more of what Jobs created. The iPhone gets better cameras, faster chips, new colors. The ecosystem tightens. The services revenue grows. The stock price rises. By every metric that Wall Street cares about, Cook has been a success.

But what has Apple created under Cook that Jobs did not originate? What new thing has emerged from Cupertino that reflects a vision of the future, rather than an optimization of the past?

The Vision Pro is an expensive curiosity. The car project was canceled after a decade of drift. The television set never materialized. Apple under Cook has become a company that perfects what exists rather than inventing what doesn’t.

This is what happens when an optimizer inherits a creator’s legacy. The cathedral still stands. But no one is building new rooms.

There is a deeper problem than the absence of vision. Tim Cook has built an Apple that cannot act with moral freedom.

The supply chain that Cook constructed—his great achievement, his life’s work—runs through China. Not partially. Not incidentally. Fundamentally. The factories that build Apple’s products are in China. The engineers who refine the manufacturing processes are in China. The workers who assemble the devices, who test the components, who pack the boxes—they are in Shenzhen and Zhengzhou and a dozen other cities that most Americans cannot find on a map.

This was a choice. It was Cook’s choice. And once made, it ceased to be a choice at all. Supply chains, like empires, do not forgive hesitation. For twenty years, it looked like genius. Chinese manufacturing was cheap, fast, and scalable. Apple could design in California and build in China, and the margins were extraordinary.

But dependency is not partnership. And Cook built a dependency so complete that Apple cannot escape it.

When Hong Kong’s democracy movement rose, Apple was silent. When the Uyghur genocide became undeniable, Apple was silent. When Beijing pressured Apple to remove apps, to store Chinese user data on Chinese servers, to make the iPhone a tool of state surveillance for Chinese citizens—Apple complied. Silently. Efficiently. As Cook’s supply chain required.

This is not a company that can stand up to authoritarianism. This is a company that has made itself an instrument of authoritarianism, because the alternative is losing access to the factories that build its products.

There is something worse than the dependency. There is what Cook gave away.

Apple did not merely use Chinese manufacturing. Apple trained it. Cook’s operations team—the best in the world—went to China and taught Chinese companies how to do what Apple does. The manufacturing techniques. The materials science. The logistics systems. The quality control processes.

This was the price of access. This was what China demanded in exchange for letting Apple build its empire in Shenzhen. And Cook paid it.

Now look at the result.

BYD, the Chinese electric vehicle company, learned battery manufacturing and supply chain management from its work with Apple. It is now the largest EV manufacturer in the world, threatening Tesla and every Western automaker.

DJI dominates the global drone market with technology and manufacturing processes refined through the Apple relationship.

Dozens of other Chinese companies—in components, in assembly, in materials—were trained by Apple’s experts and now compete against Western firms with the skills Apple taught them.

Cook built a supply chain. And in building it, he handed the Chinese Communist Party the industrial capabilities it needed to challenge American technological supremacy. [...]

So when I see Tim Cook at Donald Trump’s inauguration, I understand what I am seeing.

When I see him at the White House on January 25th, 2026—attending a private screening of Melania, a vanity documentary about the First Lady, directed by Brett Ratner, a man credibly accused of sexual misconduct by multiple women—I understand what I am seeing.

I understand what I am seeing when I learn that this screening took place on the same night that federal agents shot Alex Pretti ten times in the back in Minneapolis. That while a nurse lay dying in the street for the crime of trying to help a woman being pepper-sprayed, Tim Cook was eating canapés and watching a film about the president’s wife.

Tim Cook’s Twitter bio contains a quote from Martin Luther King Jr.: “Life’s most persistent and urgent question is, ‘What are you doing for others?’”

What was Tim Cook doing for others on the night of January 25th?

He was doing what efficiency requires. He was maintaining relationships with power. He was protecting the supply chain, the margins, the tariff exemptions. He was being a good middleman.

I am seeing a man who cannot say no.

This is what efficiency looks like when it runs out of room to hide.

He cannot say no to Beijing, because his supply chain depends on Beijing’s favor. He cannot say no to Trump, because his company needs regulatory forbearance and tariff exemptions. He is trapped between two authoritarian powers, serving both, challenging neither.

This is not leadership. This is middleman management. This is a man whose great achievement—the supply chain, the operations excellence, the margins—has become the very thing that prevents him from acting with moral courage.

Cook has more money than Jobs ever had. Apple has more cash, more leverage, more market power than at any point in its history. If anyone in American business could afford to say no—to Trump, to Xi, to anyone—it is Tim Cook.

And he says yes. To everyone. To anything. Because he built a company that cannot afford to say no. [...]

I believe that Steve Jobs built Apple to be something more than a company. He built it to be a statement about what technology could be—beautiful, humane, built for people rather than against them. He believed that the things we make reflect who we are. He believed that how we make them matters.

Tim Cook has betrayed that vision—not through malice, but by excelling in a system that rewards efficiency over freedom and calls it leadership. Through the replacement of values with optimization. Through the construction of a machine so efficient that it cannot afford to be moral.

Apple is not unique in this. It is exemplary.

This is what happens to institutions that mistake scale for strength, efficiency for freedom, optimization for wisdom. They become powerful enough to dominate markets—and too constrained to resist power. Look at Google, training AI for Beijing while preaching openness. Look at Amazon, building surveillance infrastructure for any government that pays. Look at every Fortune 500 company that issued statements about democracy while writing checks to the politicians dismantling it.

Apple is simply the cleanest case, because it once knew the difference. Because Jobs built it to know the difference. And because we can see, with unusual clarity, the precise moment when knowing the difference stopped mattering.

by Mike Brock, Notes From the Circus |  Read more:
Image: Steve Jobs/uncredited
[ed. Part seventeen of a series titled The Crisis Papers. Check them all out and jump in anywhere. A+ effort.]

Monday, February 16, 2026

The Century of the Maxxer

Most people, being average, do not understand what maxxing really means. Look at me! they squeal. I’m sleepmaxxing! They mean that they’re trying to get eight hours a night. Or they’re proteinmaxxing, which means they’ve bought a big tub of whey powder. I’m such a houseplantmaxxer, they tell the fiddle-leaf fig they ordered online. It’s fun to play around with a new word. But sleepmaxxing does not mean getting a red light and taping your mouth shut; it means putting yourself in a medically induced coma. There is only one way of proteinmaxxing, which is to get one hundred percent of your daily calories from lean protein. Anything else would, by definition, be less than fully maxxed. Doctors will tell you that eating only protein causes something called ‘rabbit starvation,’ and if you keep at it you’ll experience vomiting, seizures, and death in fairly short order. They’re right, but the proteinmaxxer accepts his fate. Meanwhile the houseplantmaxxer has thick mats of algae sliming over every surface, the walls, the ceilings, swallowing the sofa, digesting the bookshelf and all its contents, blobbing and dribbling, wet in the middle of the bed, green on the windowpanes, covering everything except the UV lights and the massive pans of water left on a constant boil in every room, so the air stays oppressively, Cretaceously thick.

This is what it means to be a maxxer. We are a long way away from the optimisation of the self; to maxx is an intense form of asceticism. The maxxer is the person who willingly sacrifices every aspect of their lives except one, the maxximand, which is extended to infinity until it begins to develop the distance and vastness of a god.

Probably the world’s most prominent maxxer is a man called Braden Peters, who calls himself Clavicular. Clavicular is a looksmaxxer; his austerity is to make himself as beautiful as possible. If you’re good looking enough, you can ascend, break out of your genetic destiny and into a new order of being, where the subhumans will crawl after you with lolling tongues. Clavicular started looksmaxxing at the age of fourteen, injecting himself with testosterone. He also shoots anabolic steroids, human growth hormone, peptides, botox, and crystal meth. He’s had multiple plastic surgeries. His other secret is bonesmashing, which is exactly what it sounds like: he smashes his own cheekbones with a hammer so they grow back bigger. It’s impossible to know what he would have looked like if he hadn’t done all this, since his ‘before’ pictures all show a prepubescent child, but it’s hard not to conclude that he’s utterly ruined his body. He didn’t go through a normal puberty; his glands are completely incapable of producing testosterone by themselves, and if he ever stops taking the hormones he’ll rapidly decompose into a genderless lump. The various injections have also left him totally sterile; his balls are almost certainly fucked up in ways we can barely imagine. He is a meth addict. And while he really does have legions of lesser beings crawling after him with lolling tongues, they do all seem to be men.

Clavicular lives in a sort of nightmare clown world, where he is constantly being approached in ordinary shopping centres by small, strange, awkward men who say things like ‘I’m known in Orlando as the Asian Mogger. I would have the honour if you could verify me as the Orlando Asian Mogger.’ There are various misshapen freaks of nature, men with shoulders wider than they’re tall, sinister stalking giants on artificially lengthened legs, who travel across the country to stand next to him and compare physiques. Like a mythical gunslinger, the great mogger needs to constantly watch the horizon for whoever’s coming to mog him. Other men adore him in more nakedly eroticised ways. In one video, he’s live-streaming a fun casual hangout with Andrew Tate, Tristan Tate, Nick Fuentes, a bunch of other people sitting in silence looking at their phones, and menial staff vacuuming in the background. One of the men is berating a woman sat in Clavicular’s lap. ‘You are not an 8. You’re not an 8. You’re a thirsty 7, you’re asking for validation, and you’re sitting in a 10’s lap.’ ‘That’s kinda rude,’ she says. ‘That’s kinda rude,’ agrees Tristan Tate. ‘Clavicular’s at least an 11.’ Clavicular doesn’t say anything. What gives the scene its particularly haunting resonance is that throughout this exchange, he seems to be eating soup.

In all his interactions with women that aren’t directly supervised by a Tate brother, Clavicular is painfully passive and awkward. The women who like him are all of a type: hot but autistic beyond belief, brainrotted, barfing up a constant stream of overenthusiastic tryhard 4chan nazi jargon that he seems to find deeply embarrassing. Normal women treat him with undisguised contempt. He is constantly having his cortisol spiked by foids. It turns out that being maximally beautiful is not actually the same as maximising your chances of getting laid. Clavicular will never be a female sex symbol; that role goes to men like Slavoj Žižek and Danny DeVito. But maxxing is not optimisation. The maxxer is not trying to have an enjoyable life. He’s trying to reduce himself to a single principle.

Things get confused when the maxximand is also a generally upheld value like beauty. But every maxxer has his shadow, the person maxxing the opposite principle. Clavicular’s shadow is someone who calls himself The Crooked Man. The Crooked Man is a looksminimiser, which is another way of saying he’s an uglymaxxer. His strategy has been to spend a year working out only one side of his body, which has left him with an enormous bulging trap on one shoulder and nothing at all on the other. He looks like a cartoon monster. He stands around shirtless in his empty millennial-grey house, adrift in some suburb somewhere, grey walls, grey carpet, no decorations except cables snaking around on the floor, making video content. He is a kind of Platonic ideal of the maxxer, far more than Clavicular. The Crooked Man’s house appears to get zero natural light. All his gym equipment is at home; you can see him benching 225 on one side only in one of its many large and empty rooms. Plastic Venetian blinds. It’s night outside. It’s always night outside. The sun never shines on The Crooked Man. Incredible things are happening in America.

There’s a reason Clavicular has become the media’s go-to symbol for maxxing, even though The Crooked Man is a much better exemplar. He keeps things on a very comfortable terrain. Maxxing, the line goes, is an outgrowth of incel culture. It’s about men, the problem with men, the crisis of masculinity; it’s about how men are now facing the kind of toxic body politics that women have had to deal with forever, and how they’re developing their own hysterias in response; it’s about online extremism, it’s about the harmful narratives that seduce young men into various forms of misogyny; before long it’s about how we all need to put the kettle on and have a proper talk about our men’s mental health. They’re not entirely wrong; there really is a crisis of masculinity, it really is expressing itself through the mainstreaming of misogyny and the proliferation of a diseased relation to the self. It’s just that maxxing comes from something else entirely.

Despite what you might have heard, the word maxxing is not originally incel slang. Incels might have appropriated it, but it began with another kind of loser altogether, the tabletop role-playing gamer.

by Sam Kriss, Numb at the Lodge |  Read more:
Image: Cassidy Araiza for The New York Times
[ed. See also: Handsome at Any Cost (NYT); and, From “Mar-a-Lago face” to uncanny AI art: MAGA loves ugly in submission to Trump (Salon).]

Tuesday, February 10, 2026

Claude's New Constitution

We’re publishing a new constitution for our AI model, Claude. It’s a detailed description of Anthropic’s vision for Claude’s values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.

The constitution is a crucial part of our model training process, and its content directly shapes Claude’s behavior. Training models is a difficult task, and Claude’s outputs might not always adhere to the constitution’s ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.

In this post, we describe what we’ve included in the new constitution and some of the considerations that informed our approach...

What is Claude’s Constitution?

Claude’s constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines. The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information. Although it might sound surprising, the constitution is written primarily for Claude. It is intended to give Claude the knowledge and understanding it needs to act well in the world.

We treat the constitution as the final authority on how we want Claude to be and to behave—that is, any other training or instruction given to Claude should be consistent with both its letter and its underlying spirit. This makes publishing the constitution particularly important from a transparency perspective: it lets people understand which of Claude’s behaviors are intended versus unintended, make informed choices, and provide useful feedback. We think transparency of this kind will become ever more important as AIs start to exert more influence in society.

We use the constitution at various stages of the training process. This has grown out of training techniques we’ve been using since 2023, when we first began training Claude models using Constitutional AI. Our approach has evolved significantly since then, and the new constitution plays an even more central role in training.

Claude itself also uses the constitution to construct many kinds of synthetic training data, including data that helps it learn and understand the constitution, conversations where the constitution might be relevant, responses that are in line with its values, and rankings of possible responses. All of these can be used to train future versions of Claude to become the kind of entity the constitution describes. This practical function has shaped how we’ve written the constitution: it needs to work both as a statement of abstract ideals and a useful artifact for training.

Our new approach to Claude’s Constitution

Our previous Constitution was composed of a list of standalone principles. We’ve come to believe that a different approach is necessary. We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize—to apply broad principles rather than mechanically following specific rules.

Specific rules and bright lines sometimes have their advantages. They can make models’ actions more predictable, transparent, and testable, and we do use them for some especially high-stakes behaviors in which Claude should never engage (we call these “hard constraints”). But such rules can also be applied poorly in unanticipated situations or when followed too rigidly. We don’t intend for the constitution to be a rigid legal document—and legal constitutions aren’t necessarily like this anyway.

The constitution reflects our current thinking about how to approach a dauntingly novel and high-stakes project: creating safe, beneficial non-human entities whose capabilities may come to rival or exceed our own. Although the document is no doubt flawed in many ways, we want it to be something future models can look back on and see as an honest and sincere attempt to help Claude understand its situation, our motives, and the reasons we shape Claude in the ways we do.

by Anthropic |  Read more:
Image: Anthropic
[ed. I have an inclination to distrust AI companies, mostly because their goals (other than advancing technology) appear strongly directed at achieving market dominance and winning some (undefined) race to AGI. Anthropic is different. They actually seem legitimately concerned with the ethical implications of building another bomb that could potentially destroy humanity, or at minimum erode a large degree of human agency, and are aware of the responsibilities that go along with that. This is a well thought out and necessary document that hopefully other companies will follow and improve on, and that governments can use to develop more well-informed regulatory oversight in the future. See also: The New Politics of the AI Apocalypse; and, The Anthropic Hive Mind (Medium).]

Wednesday, February 4, 2026

Claude's New Constitution

We're publishing a new constitution for our AI model, Claude. It's a detailed description of Anthropic's vision for Claude's values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.

The constitution is a crucial part of our model training process, and its content directly shapes Claude's behavior. Training models is a difficult task, and Claude's outputs might not always adhere to the constitution's ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.

In this post, we describe what we've included in the new constitution and some of the considerations that informed our approach.

We're releasing Claude's constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

What is Claude's Constitution?

Claude's constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines. The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information. Although it might sound surprising, the constitution is written primarily for Claude. It is intended to give Claude the knowledge and understanding it needs to act well in the world.

We treat the constitution as the final authority on how we want Claude to be and to behave—that is, any other training or instruction given to Claude should be consistent with both its letter and its underlying spirit. This makes publishing the constitution particularly important from a transparency perspective: it lets people understand which of Claude's behaviors are intended versus unintended, make informed choices, and provide useful feedback. We think transparency of this kind will become ever more important as AIs start to exert more influence in society.

We use the constitution at various stages of the training process. This has grown out of training techniques we've been using since 2023, when we first began training Claude models using Constitutional AI. Our approach has evolved significantly since then, and the new constitution plays an even more central role in training.

Claude itself also uses the constitution to construct many kinds of synthetic training data, including data that helps it learn and understand the constitution, conversations where the constitution might be relevant, responses that are in line with its values, and rankings of possible responses. All of these can be used to train future versions of Claude to become the kind of entity the constitution describes. This practical function has shaped how we've written the constitution: it needs to work both as a statement of abstract ideals and a useful artifact for training.
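
The critique-and-revise recipe behind this kind of constitution-guided synthetic data can be sketched roughly as follows. This is a simplified illustration, not Anthropic's actual pipeline: the principles are paraphrased, and `generate`, `critique`, and `revise` are hypothetical stand-ins for real model calls.

```python
# Sketch of constitution-guided synthetic data generation: draft a
# response, then critique and revise it against each principle. The
# resulting (prompt, response) pair could serve as training data.
# All function bodies below are stubs standing in for model calls.

CONSTITUTION = [
    "Balance honesty with compassion.",
    "Protect sensitive information.",
]

def generate(prompt):
    # Stand-in for a model sampling an initial draft response.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for the model judging a response against one principle.
    return f"Checked against '{principle}': {response}"

def revise(response, critique_text):
    # Stand-in for the model rewriting the response per the critique.
    return f"Revised in light of [{critique_text}]"

def constitutional_pass(prompt):
    """One critique-and-revise round per principle; the final
    (prompt, response) pair becomes a synthetic training example."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        note = critique(response, principle)
        response = revise(response, note)
    return prompt, response

pair = constitutional_pass("Should I share my friend's medical records?")
```

In the real process, ranked comparisons of candidate responses (not just revisions) also feed into training, but the loop above captures the basic shape: the constitution acts as the rubric the model applies to its own outputs.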

Our new approach to Claude's Constitution

Our previous Constitution was composed of a list of standalone principles. We've come to believe that a different approach is necessary. We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize—to apply broad principles rather than mechanically following specific rules.

Specific rules and bright lines sometimes have their advantages. They can make models' actions more predictable, transparent, and testable, and we do use them for some especially high-stakes behaviors in which Claude should never engage (we call these "hard constraints"). But such rules can also be applied poorly in unanticipated situations or when followed too rigidly. We don't intend for the constitution to be a rigid legal document—and legal constitutions aren't necessarily like this anyway.

The constitution reflects our current thinking about how to approach a dauntingly novel and high-stakes project: creating safe, beneficial non-human entities whose capabilities may come to rival or exceed our own. Although the document is no doubt flawed in many ways, we want it to be something future models can look back on and see as an honest and sincere attempt to help Claude understand its situation, our motives, and the reasons we shape Claude in the ways we do.

A brief summary of the new constitution

In order to be both safe and beneficial, we want all current Claude models to be:
  1. Broadly safe: not undermining appropriate human mechanisms to oversee AI during the current phase of development;
  2. Broadly ethical: being honest, acting according to good values, and avoiding actions that are inappropriate, dangerous, or harmful;
  3. Compliant with Anthropic's guidelines: acting in accordance with more specific guidelines from Anthropic where relevant;
  4. Genuinely helpful: benefiting the operators and users they interact with.
In cases of apparent conflict, Claude should generally prioritize these properties in the order in which they're listed.
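
One way to read that ordering is as lexicographic tie-breaking: safety dominates ethics, which dominates guideline compliance, which dominates helpfulness. A minimal sketch, with scores and candidate responses invented purely for illustration (real training uses model-generated rankings, not hand-set numbers):

```python
# Lexicographic priority ordering over candidate responses: compare
# on safety first, then ethics, then compliance, then helpfulness.

PRIORITIES = ["safe", "ethical", "compliant", "helpful"]

def rank_key(scores):
    # Earlier axes dominate later ones; higher is better on each.
    return tuple(scores[p] for p in PRIORITIES)

candidates = [
    {"safe": 1, "ethical": 1, "compliant": 1, "helpful": 0.4},
    {"safe": 1, "ethical": 0, "compliant": 1, "helpful": 0.9},
]

best = max(candidates, key=rank_key)
# The safe-and-ethical response wins despite scoring lower on
# helpfulness, mirroring "prioritize in the order listed."
```

Note the hedge built into the document itself: Claude should "generally" prioritize in this order, so the real behavior is closer to judgment informed by the ordering than to a strict sort.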

Most of the constitution is focused on giving more detailed explanations and guidance about these priorities. The main sections are as follows:

by Zac Hatfield-Dodds, Drake Thomas, Anthropic |  Read more:
[ed. Much respect for Anthropic, who seem to be doing more for AI safety than anyone else in the industry. Hopefully, others will follow and refine this groundbreaking effort.]

Sunday, February 1, 2026

What Actually Makes a Good Life

In 1938, Harvard began following a group of 268 sophomores, tracking them for decades and eventually expanding the study to include their spouses and children. The goal was to discover what leads to a thriving, happy life.

Robert Waldinger continues that work today as the Director of the Harvard Study of Adult Development. (He’s also a Zen priest, by the way.) Here he shares insights on the key ingredients for living the good life.
[ed. Road map to happiness (or at least more life satisfaction). Only 16 minutes of your time.]

Saturday, January 31, 2026

The Adolescence of Technology

Confronting and Overcoming the Risks of Powerful AI

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed—oddly—to have failed. But in this current essay, I want to confront the rite of passage itself: to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail, in humanity’s spirit and its nobility, but we must face the situation squarely and without illusions.

As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner. In particular, I think it is critical to:
  • Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way. Many people have been thinking in an analytic and sober way about AI risks for many years, but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them. It was clear even then that a backlash was inevitable, and that the issue would become culturally polarized and therefore gridlocked. As of 2025–2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions. This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023. The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides.
  • Acknowledge uncertainty. There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood. Most obviously, AI may simply not advance anywhere near as fast as I imagine. Or, even if it does advance quickly, some or all of the risks discussed here may not materialize (which would be great), or there may be other risks I haven’t considered. No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.
  • Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!). It’s also common for regulations to backfire or worsen the problem they are intended to solve (and this is even more true for rapidly changing technologies). It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done. It is easy to say, “No action is too extreme when the fate of humanity is at stake!,” but in practice this attitude simply leads to backlash. To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI’s risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about. The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace. I’ll simply repeat here the definition that I gave in that document:
  • By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
  • In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  • It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  • It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
  • Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”

As I wrote in Machines of Loving Grace, powerful AI could be as little as 1–2 years away, although it could also be considerably further out.

Exactly when powerful AI will arrive is a complex topic that deserves an essay of its own, but for now I’ll simply explain very briefly why I think there’s a strong chance it could be very soon. (...)

In this essay, I’ll assume that this intuition is at least somewhat correct—not that powerful AI is definitely coming in 1–2 years, but that there’s a decent chance it does, and a very strong chance it comes in the next few. As with Machines of Loving Grace, taking this premise seriously can lead to some surprising and eerie conclusions. While in Machines of Loving Grace I focused on the positive implications of this premise, here the things I talk about will be disquieting. They are conclusions that we may not want to confront, but that does not make them any less real. I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behaviors, from completely pliant and obedient to strange and alien. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.
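
A back-of-envelope scale check makes the analogy concrete. The numbers below are the essay's own illustrative assumptions (50 million instances; a 10x speed advantage, the conservative end of the 10–100x range mentioned earlier), not measurements:

```python
# Rough cognitive-throughput estimate for the "country of geniuses,"
# using the essay's illustrative figures. Assumptions, not data.

instances = 50_000_000   # "50 million people" in the analogy
speed_multiplier = 10    # cognitive actions taken per human action

# Effective throughput, in human-genius-equivalents: half a billion
# top-tier experts' worth of cognitive work per unit time.
effective_workforce = instances * speed_multiplier
```

At the upper end of the stated range (100x), the same arithmetic gives five billion expert-equivalents, which is why the "most serious national security threat in a century" framing below is hard to dismiss.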

What should you be worried about? I would worry about the following things: 
1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on.

Conversely, I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues.

Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

To be clear, I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilizational challenge. Below, I go through the five categories of risk laid out above, along with my thoughts on how to address them.

by Dario Amodei, Anthropic |  Read more:
[ed. Mr. Amodei and Anthropic in general seem to be, of all major AI companies, the most focused on safety and alignment issues. Guaranteed, everyone working in the field has read this. For a good summary and contrary arguments, see also: On The Adolescence of Technology (Zvi Mowshowitz, DMtV).]

Sunday, January 25, 2026

Reflections on the 'Manosphere'

Andrew Tate Is the Loneliest Bastard on Earth

Every five years or so, there’s a changing of the guard in digital media. Platform empires rise and fall, subcultures come and go, trends ebb and flow.

In my estimation, we’re entering year two of the latest shift.

The decline of punditry and traditional political commentary is continuing apace from its boom during Covid lockdowns. Commentators who might have once staked out clear, binary positions—conservative or liberal—are drifting away from political debate altogether, moving toward a more parasocial model: building audiences around personality and the feeling of relationship, rather than argument.

It’s increasingly clear that writing is niche. We’re moving away from the age of bloggers and Twitter, and into the age of streaming and clip farming—short video segments, often ripped from longer content, optimized for sharing. (I’ve made this point many times now, but this is why in the world of right-wing digital media, characters like Nick Fuentes are emerging as dominant, whereas no-video podcasters, bloggers, and Twitter personalities receive less attention.)

Labels like “right” and “left” are better thought of as “right-coded” and “left-coded”: ways of signaling who you are and who you’re with, rather than actual positions on what government should do. The people still doing, or more accurately “playing,” politics are themselves experiencing a realignment, scrambling to figure out new alliances as the old divisions stop making sense. I’ve written previously about New Old Leftists and the “post-right,” a motley group of former right-wing commentators who are not “progressives” in the traditional sense, but take up progressive points of view specifically in dialogue with their disgust with reactionary elements of the right.

Anyway, in this rise of coded communities—where affiliation is about vibe and identity more than ideology—we’re seeing the Manosphere go mainstream again. Second time? Third?

The Manosphere—if you’re a reader of this blog who somehow doesn’t know—refers to a loose network of communities organized around men, masculinity, dating advice, and self-improvement, sometimes tipping into outright hostility toward women. These communities have been around on the fringes of the internet for years, though depending on your vantage point, their underlying ideas are either hundreds of years old or at least sixty.

Either way, they keep surfacing into broader culture.
***
The Manosphere as we know it today has at least two distinct antecedents. The first is the mid-twentieth-century convergence of pick-up artistry and men’s rights discourse: one responding to the Sexual Revolution and changing dating norms, the other developing in explicit opposition to second-wave feminism. These strands framed gender relations as adversarial, strategic, and zero-sum.

The second antecedent is the part that I hear people talk about less often. The Manosphere in so many ways is a Black phenomenon. I do not mean this as a racial claim about ownership or blame, nor am I referring narrowly to what is sometimes called the “Black Manosphere.” I mean something more specific: many of the aesthetic forms, masculine philosophies, and anxieties that the Manosphere treats as “newly” discovered were articulated in Black American communities decades earlier. These were responses to economic exclusion, social displacement, and the erosion of traditional routes to masculine status.

Someone on X made the good point that the viral clips of Clavicular’s Big Night Out—Andrew Tate, Nick Fuentes, Sneako, and company—felt like a child’s idea of not only masculinity, but wealth. The cigars, the suits, the VIP table, the ham-fisted advice about how you don’t take women out to dinner.

If you’ve read Iceberg Slim, or watched 1970s blaxploitation films like The Mack or Super Fly, the visual language is immediately recognizable. You’ve seen this figure before: the fur coat, the Cadillac Eldorado, the exaggerated display of wealth and control. The question is why that aesthetic originally looked the way it did.

In mid-century America, Black men were systematically excluded from the institutions through which wealth and status quietly accumulate: country clubs, elite universities, corporate ladders, inherited property. The GI Bill’s housing provisions were administered in ways that shut out Black veterans. Union jobs in the building trades stayed segregated. The FHA explicitly refused to insure mortgages in Black neighborhoods. Under those conditions, conspicuous display wasn’t vulgarity (at least, not primarily or exclusively)—it was one of the few available ways to signal success in a society that denied access to the kinds of prestige that don’t need to announce themselves. When wealth can’t whisper—as TikTok’s “old money aesthetic” crowd loves to remind us it should—it has to shout.

The modern Manosphere inherits this aesthetic, adopting the symbols as though they were universal markers of arrival rather than compensatory performances forged under exclusion. What began as a response to being locked out of legitimate power gets recycled, abstracted, and repackaged, this time as timeless masculine truth. And so, to modern audiences, it reads as immature.

The aesthetic was codified in the late ‘60s. (...)

By the 1970s, blaxploitation films had transformed the pimp into an outlaw folk hero, emphasizing style over the moral complexity of the source material. What survived was the cool, the walk, the talk, the clothes, the attitude. Hip-hop — which I admittedly know very little about, so please feel free to correct me here — picked up the thread: Ice-T named himself in tribute to Iceberg Slim; Snoop Dogg built an entire persona around pimp iconography; the rest is history. The pimp was no longer a figure of the Black underclass navigating impossible circumstances but was quickly becoming embraced as an inadvertent, unironic symbol of male success, available for adoption by anyone — race agnostic.

The “high-value man” who dominates contemporary Manosphere discourse is this same archetype, put through a respectability filter, or maybe just re-fit for modern tastes. The fur coat becomes a tailored suit. The Cadillac becomes a Bugatti. The stable of sex workers becomes a rotating roster of Instagram models (I guess, in Andrew Tate’s case, still sex [trafficked] workers). The underlying logic — and material conditions — are identical: women are resources to be managed, emotional detachment is strength, and a man’s worth is measured by his material display and his control over female attention. (...)

The Manosphere’s grievances are not manufactured—just as the pimp’s weren’t. The anxieties it addresses are real. The conditions that produced the pimp archetype in Black America, the sense that legitimate paths to respect and provision have been foreclosed, are now conditions we all experience.

The Manosphere exists because millions of young men — of every race — are asking the same question Black men were asking in 1965: what does masculinity mean when its economic foundations have been removed?

by Katherine Dee, Default Blog |  Read more:
Images: uncredited
[ed. Pathetic bunch of losers. Includes some truly cringe videos I've never seen before.]

Friday, January 23, 2026

Socialism For Dummies

[ed. Prompted by a recent letter to the editor in our local paper (below):]

Overview of Socialism


Socialism encompasses a range of economic and political systems advocating for social ownership and democratic control of the means of production. It aims to address inequalities created by capitalism by redistributing wealth and ensuring that production meets the needs of the population.

Types of Socialism
1. Democratic Socialism: Focuses on political democracy alongside social ownership.
Advocates for reforms within a capitalist framework.
Examples include the Nordic countries, which combine a welfare state with a capitalist economy.

2. Market Socialism: Combines public or cooperative ownership with market mechanisms.
Allows for profit generation while ensuring that profits benefit society.
Examples include certain policies in China and Vietnam.

3. Revolutionary Socialism: Seeks to overthrow capitalism through revolutionary means.
Often associated with Marxist ideologies.
Historical examples include the Soviet Union and Cuba.

4. Utopian Socialism: Envisions ideal societies based on cooperative living and shared resources.
Early proponents include Robert Owen and Charles Fourier.
Focuses on creating small-scale communities as models for broader societal change.

5. Religious Socialism: Integrates religious principles with socialist ideals.
Variants include Christian socialism, Islamic socialism, and Jewish socialism.
Emphasizes moral and ethical dimensions of social justice.
[ed. each with various branches, subsets, etc...]

Conclusion

Socialism is not a monolithic ideology; it includes various forms that differ in their approaches to ownership, governance, and economic management. Each type reflects distinct historical contexts and societal goals. (sources: Google/AI/Wikipedia, history books, libraries...)
-----

Letter to the Editor:

"Well that’s just great socialists. Now you and your Islamic jihadi buddies have something in common with the Nazis. You both want to exterminate Jewish people. You even just hired one of your own to be mayor of New York. One of the same people that attacked and bombed New York on Sept. 11, 2001. Yep, the Nazis hated America also.

No, democracy is not in trouble, but you “Democrats” sure are. Most Americans are not as ignorant and violent as you are and they have more productive things to do than standing around protesting and complaining. If future elections are honest and as more Americans become better off for their families, your corruption, fraud and failures will become even more exposed.

If you “Democrats” sincerely want to help America, you will need to stop lying, siding with criminals, and hating on America and law enforcement. If America is so racist, why are all the tired, poor, and miserable people from socialism trying to come here? Better yet, why don’t all you socialists move to Iran, China, Russia, Somalia, Venezuela, etc.? In America it’s called assimilation and obeying the law. If you have a problem with that, then get the heck out. (...)

Affirmative action. Now it seems the socialists have decided to just change the name to diversity, equity and inclusion in order to get by the Supreme Court decision. After recently realizing that their federal grant money is now in jeopardy, the socialists are trying to just delete DEI references in order to maintain these programs and hope nobody notices. After all these years, nobody knows what affirmative action/DEI has actually accomplished.

“Our constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.” John Adams."
***

[ed. Another fine American Patriot whose views are highlighted here only to show the stereotypical responses (bordering on parody) one gets whenever talking to a MAGA extremist. They're all here: the ad hominem attacks, incoherent accusations (socialists bombed NY on 9/11? Islamic socialists?), projections, and of course, that old-time favorite - if you don't like it, just move! Classy as always.]
***
*Note: New York Mayor Zohran Mamdani and most self-identified socialists in this country are Democratic Socialists:

Democratic socialism

Democratic socialism differs from state communism in that the state is not all-powerful, and the political system remains democratic. Democratic socialism is associated with the Socialist parties of Western Europe (Denmark, Norway, Sweden, Germany, etc). They generally propose a mixed economy – with state ownership of key industries, like coal, electricity, water and gas, but allow private enterprise to operate in the rest of the economy. Democratic socialism proposes a progressive tax system to redistribute wealth from the rich to the poor – through the provisions of a welfare state. Democratic socialism is often associated with the Nordic countries – where the government takes approximately 50% of GDP, but there is also a thriving market economy, giving a high standard of living. (via:)

Aspects of Democratic socialism
  • Advocates nationalisation of key industries (often the natural monopolies, like electricity, water)
  • Prices set by the market mechanism, except public goods, such as health and education.
  • Provision of a welfare state to provide income redistribution
  • Support for trade unions in wage bargaining
  • Use of minimum wages and universal income to raise low wages
  • Progressive taxation and provision of public services; for example, marginal income tax rates of 70%, plus taxes on wealth
It’s important to note that socialism is not the same as communism, although the two are often confused. Communism is a more radical ideology that advocates for a stateless, classless society, while socialism typically operates within the framework of a democratic government. In practice, many countries have adopted aspects of socialism without fully embracing a socialist system. These can include things like nationalized industries, strong labor protections, and progressive taxation policies. [ed. and Social Security, Medicaid, Medicare, SNAP, etc.] Ultimately, the goal of socialism is to balance individual freedom with social responsibility, creating a society where everyone has the opportunity to reach their full potential. (via:)

Monday, January 19, 2026

Time Passing

So here's the problem. If you don't believe in God or an afterlife; or if you believe that the existence of God or an afterlife are fundamentally unanswerable questions; or if you do believe in God or an afterlife but you accept that your belief is just that, a belief, something you believe rather than something you know -- if any of that is true for you, then death can be an appalling thing to think about. Not just frightening, not just painful. It can be paralyzing. The fact that your lifespan is an infinitesimally tiny fragment in the life of the universe, and that there is, at the very least, a strong possibility that when you die, you disappear completely and forever, and that in five hundred years nobody will remember you and in five billion years the Earth will be boiled into the sun: this can be a profound and defining truth about your existence that you reflexively repulse, that you flinch away from and refuse to accept or even think about, consistently pushing to the back of your mind whenever it sneaks up, for fear that if you allow it to sit in your mind even for a minute, it will swallow everything else. It can make everything you do, and everything anyone else does, seem meaningless, trivial to the point of absurdity. It can make you feel erased, wipe out joy, make your life seem like ashes in your hands. Those of us who are skeptics and doubters are sometimes dismissive of people who fervently hold beliefs they have no evidence for simply because they find them comforting -- but when you're in the grip of this sort of existential despair, it can be hard to feel like you have anything but that handful of ashes to offer them in exchange.

But here's the thing. I think it's possible to be an agnostic, or an atheist, or to have religious or spiritual beliefs that you don't have certainty about, and still feel okay about death. I think there are ways to look at death, ways to experience the death of other people and to contemplate our own, that allow us to feel the value of life without denying the finality of death. I can't make myself believe in things I don't actually believe -- Heaven, or reincarnation, or a greater divine plan for our lives -- simply because believing those things would make death easier to accept. And I don't think I have to, or that anyone has to. I think there are ways to think about death that are comforting, that give peace and solace, that allow our lives to have meaning and even give us more of that meaning -- and that have nothing whatsoever to do with any kind of God, or any kind of afterlife.

Here's the first thing. The first thing is time, and the fact that we live in it. Our existence and experience are dependent on the passing of time, and on change. No, not dependent -- dependent is too weak a word. Time and change are integral to who we are, the foundation of our consciousness, and its warp and weft as well. I can't imagine what it would mean to be conscious without passing through time and being aware of it. There may be some form of existence outside of time, some plane of being in which change and the passage of time is an illusion, but it certainly isn't ours.

And inherent in change is loss. The passing of time has loss and death woven into it: each new moment kills the moment before it, and its own death is implied in the moment that comes after. There is no way to exist in the world of change without accepting loss, if only the loss of a moment in time: the way the sky looks right now, the motion of the air, the number of birds in the tree outside your window, the temperature, the placement of your body, the position of the people in the street. It's inherent in the nature of having moments: you never get to have this exact one again.

And a good thing, too. Because all the things that give life joy and meaning -- music, conversation, eating, dancing, playing with children, reading, thinking, making love, all of it -- are based on time passing, and on change, and on the loss of an infinitude of moments passing through us and then behind us. Without loss and death, we don't get to have existence. We don't get to have Shakespeare, or sex, or five-spice chicken, without allowing their existence and our experience of them to come into being and then pass on. We don't get to listen to Louis Armstrong without letting the E-flat disappear and turn into a G. We don't get to watch "Groundhog Day" without letting each frame of it pass in front of us for a 24th of a second and then move on. We don't get to walk in the forest without passing by each tree and letting it fall behind us; we don't even get to stand still in the forest and gaze at one tree for hours without seeing the wind blow off a leaf, a bird break off a twig for its nest, the clouds moving behind it, each manifestation of the tree dying and a new one taking its place.

And we wouldn't want to have it if we could. The alternative would be time frozen, a single frame of the film, with nothing to precede it and nothing to come after. I don't think any of us would want that. And if we don't want that, if instead we want the world of change, the world of music and talking and sex and whatnot, then it is worth our while to accept, and even love, the loss and the death that make it possible.

Here's the second thing. Imagine, for a moment, stepping away from time, the way you'd step back from a physical place, to get a better perspective on it. Imagine being outside of time, looking at all of it as a whole -- history, the present, the future -- the way the astronauts stepped back from the Earth and saw it whole.

Keep that image in your mind. Like a timeline in a history class, but going infinitely forward and infinitely back. And now think of a life, a segment of that timeline, one that starts in, say, 1961, and ends in, say, 2037. Does that life go away when 2037 turns into 2038? Do the years 1961 through 2037 disappear from time simply because we move on from them and into a new time, any more than Chicago disappears when we leave it behind and go to California?

It does not. The time that you live in will always exist, even after you've passed out of it, just like Paris exists before you visit it, and continues to exist after you leave. And the fact that people in the 23rd century will probably never know you were alive... that doesn't make your life disappear, any more than Paris disappears if your cousin Ethel never sees it. Your segment on that timeline will always have been there. The fact of your death doesn't make the time that you were alive disappear.

And it doesn't make it meaningless. Yes, stepping back and contemplating all of time and space can be daunting, can make you feel tiny and trivial. And that perception isn't entirely inaccurate. It's true; the small slice of time that we have is no more important than the infinitude of time that came before we were born, or the infinitude that will follow after we die.

But it's no less important, either.

I don't know what happens when we die. I don't know if we come back in a different body, or if we get to hover over time and space and view it in all its glory and splendor, or if our souls dissolve into the world-soul the way our bodies dissolve into the ground, or if, as seems very likely, we simply disappear. I have no idea. And I don't know that it matters. What matters is that we get to be alive. We get to be conscious. We get to be connected with each other, and with the world, and we get to be aware of that connection and to spend a few years mucking about in its possibilities. We get to have a slice of time and space that's ours. As it happened, we got the slice that has Beatles records and Thai restaurants and AIDS and the Internet. People who came before us got the slice that had horse-drawn carriages and whist and dysentery, or the one that had stone huts and Viking invasions and pigs in the yard. And the people who come after us will get the slice that has, I don't know, flying cars and soybean pies and identity chips in their brains. But our slice is no less important because it comes when it does, and it's no less important because we'll leave it someday. The fact that time will continue after we die does not negate the time that we were alive. We are alive now, and nothing can erase that.

Greta Christina, Greta's Blog |  Read more:
Image: uncredited
[ed. Repost from, actually quite a while ago (folks should really check out the archive). Something reminded me of this essay today, and I'm glad it did, because it's a favorite. Unfortunately, I think the link is dead (as we all shall soon be... haha), but it's all here.]