
Saturday, October 25, 2025

China OS vs. America OS

Xu Bing, installation view of Tianshu (Book From the Sky), 1987–1991, at Ullens Center for Contemporary Art, Beijing, 2018.
[ed. See: China OS vs. America OS (Concurrent):]

"China and America are using different versions of operating systems. This OS can be understood as a combination of software and hardware. Du Lei pointed out that China has faster hardware updates, but has many problems on the software side. I think this metaphor is particularly fitting.

I'd like to start by having you both share your understanding of what constitutes China's OS versus America's OS. One interpretation is: America continues to rely on email and webpage systems for government services, while China has adopted the more efficient WeChat platform (where almost all civic services can be quickly completed). The hardware gap is striking: China's high-speed rail system represents the rapid flow of resources within its system, while America's infrastructure remains at a much older level. It's as if China has upgraded its hardware with several powerful chips, greatly accelerating data transmission, while America still operates at 20th-century speeds. (...)

China operates with high certainty about the future while maintaining a pessimistic outlook, which significantly shapes its decision-making processes. In contrast, American society tends to be optimistic about the future but lacks a definite vision for how that future should unfold.

Based on these different expectations about the future, the two countries arrive at completely different decision-making logics. For example, because China's expectations about the future are both definite and pessimistic, it concludes: future resources are limited, and great power competition is zero-sum. If I don't compete, resources will be taken by you; if I don't develop well, you will lead. This expectation about the future directly influences China's political, military, economic, and technological policies.

But if you're optimistic about the future, believing the future is abundant, thinking everyone can get a piece of the pie, then you won't be so urgent. You'll think this is a positive-sum game, the future can continue developing, everyone can find their suitable position, with enough resources to meet everyone's needs.

I think China and America don't have such fundamental differences, but their expectations about the future have huge disparities. This disparity ultimately leads to different decisions with far-reaching impacts."

Friday, October 24, 2025

Silicon Valley’s Reading List Reveals Its Political Ambitions

In 2008, Paul Graham mused about the cultural differences between great US cities. Three years earlier, Graham had co-founded Y Combinator, a “startup accelerator” that would come to epitomize Silicon Valley — and would move there in 2009. But at the time Graham was based in Cambridge, Massachusetts, which, as he saw it, sent a different message to its inhabitants than did Palo Alto.

Cambridge’s message was, “You should be smarter. You really should get around to reading all those books you’ve been meaning to.” Silicon Valley respected smarts, Graham wrote, but its message was different: “You should be more powerful.”

He wasn’t alone in this assessment. My late friend Aaron Swartz, a member of Y Combinator’s first class, fled San Francisco in late 2006 for several reasons. He told me later that one of them was how few people in the Bay Area seemed interested in books.

Today, however, it feels as though people there want to talk about nothing but. Tech luminaries seem to opine endlessly about books and ideas, debating the merits and defects of different flavors of rationalism, of basic economic principles and of the strengths and weaknesses of democracy and corporate rule.

This fervor has yielded a recognizable “Silicon Valley canon.” And as Elon Musk and his shock troops descend on Washington with intentions of reengineering the government, it’s worth paying attention to the books the tech world reads — as well as the ones they don’t. Viewed through the canon, DOGE’s grand effort to cut government down to size is the latest manifestation of a longstanding Silicon Valley dream: to remake politics in its image.

The Silicon Valley Canon

Last August, Tanner Greer, a conservative writer with a large Silicon Valley readership, asked on X what the contents of the “vague tech canon” might be. He’d been provoked when the writer and technologist Jasmine Sun asked why James Scott’s Seeing Like a State, an anarchist denunciation of grand structures of government, had become a “Silicon Valley bookshelf fixture.” The prompt led Patrick Collison, co-founder of Stripe and a leading thinker within Silicon Valley, to suggest a list of 43 sources, which he stressed were not those he thought “one ought to read” but those that “roughly cover[ed] the major ideas that are influential here.”

In a later response, Greer argued that the canon tied together a cohesive community, providing Silicon Valley leaders with a shared understanding of power and a definition of greatness. Greer, like Graham, spoke of the differences between cities. He described Washington, DC as an intellectually stultified warren of specialists without soul, arid technocrats who knew their own narrow area of policy but did not read outside of it. In contrast, Silicon Valley was a place of doers, who looked to books not for technical information, but for inspiration and advice. The Silicon Valley canon provided guideposts for how to change the world.

Said canon is not directly political. It includes websites, like LessWrong, the home of the rationalist movement, and Slate Star Codex/Astral Codex Ten, for members of the “grey tribe” who see themselves as neither conservative nor properly liberal. Graham’s many essays are included, as are science fiction novels like Neal Stephenson’s The Diamond Age. Much of the canon is business advice on topics such as how to build a startup.

But such advice can have a political edge. Peter Thiel’s Zero to One, co-authored with his former student and failed Republican Senate candidate Blake Masters, not only tells startups that they need to aspire to monopoly power or be crushed, but describes Thiel’s early ambitions (along with other members of the so-called PayPal mafia) to create a global private currency that would crush the US dollar.

Then there are the Carlylean histories of “great men” (most of the subjects and authors were male) who sought to change the world. Older biographies described men like Robert Moses and Theodore Roosevelt, with grand flaws and grander ambitions, who broke with convention and overcame opposition to remake society.

Such stories, in Greer’s description, provided Silicon Valley’s leaders and aspiring leaders with “models of honor,” and examples of “the sort of deeds that brought glory or shame to the doer simply by being done.” The newer histories both explained Silicon Valley to itself, and tacitly wove its founders and small teams into this epic history of great deeds, suggesting that modern entrepreneurs like Elon Musk — whose biography was on the list — were the latest in a grand lineage that had remade America’s role in the world.

Putting Musk alongside Teddy Roosevelt didn’t simply reinforce Silicon Valley’s own mythologized self-image as the modern center of creative destruction. It implicitly welded it to politics, contrasting the politically creative energies of the technology industry, set on remaking the world for the better, to the Washington regulators who frustrated and thwarted entrepreneurial change. Mightn’t everything be better if visionary engineers had their way, replacing all the messy, squalid compromises of politics with radical innovation and purpose-engineered efficient systems?

One book on the list argues this and more. James Davidson and William Rees-Mogg’s The Sovereign Individual cheered on the dynamic, wealth-creating individuals who would use cyberspace to exit corrupt democracies, with their “constituencies of losers,” and create their own political order. When the book, originally published in 1997, was reissued in 2020, Thiel wrote the preface.

Under this simplifying grand narrative, the federal state was at best another inefficient industry that was ripe for disruption. At worst, national government and representative democracy were impediments that needed to be swept away, as Davidson and Rees-Mogg had argued. From there, it’s only a hop, skip and a jump to even more extreme ideas that, while not formally in the canon, have come to define the tech right. (...)

We don’t know which parts of the canon Musk has read, or which ones influenced the young techies he’s hired into DOGE. But it’s not hard to imagine how his current gambit looks filtered through these ideas. From this vantage, DOGE’s grand effort to cut government down to size is the newest iteration of an epic narrative of change...

One DOGE recruiter framed the challenge as “a historic opportunity to build an efficient government, and to cut the federal budget by 1/3.” When a small team remakes government wholesale, the outcome will surely be simpler, cheaper and more effective. That, after all, fits with the story that Silicon Valley disruptors tell themselves.

What the Silicon Valley Canon is Missing

From another perspective, hubris is about to get clobbered by nemesis. Jasmine Sun’s question about why so many people in tech read Seeing Like a State hints at the misunderstandings that trouble the Silicon Valley canon. Many tech elites read the book as a denunciation of government overreach. But Scott was an excoriating critic of the drive to efficiency that they themselves embody. (...)

Musk epitomizes that bulldozing turn of mind. Like the Renaissance engineers who wanted to raze squalid and inefficient cities to start anew, DOGE proposes to flense away the complexities of government in a leap of faith that AI will do it all better. If the engineers were not thoroughly ignorant of the structures they are demolishing, they might hesitate and lose momentum.

Seeing Like a State, properly understood, is a warning not just to bureaucrats but to social engineers writ large. From Scott’s broader perspective, AI is not a solution, but a swift way to make the problem worse. It will replace the gross simplifications of bureaucracy with incomprehensible abstractions that have been filtered through the “hidden layers” of artificial neurons that allow it to work. DOGE’s artificial-intelligence-fueled vision of government is a vision from Franz Kafka, not Friedrich Hayek.

by Henry Farrell, Programmable Mutter |  Read more:
Image: Foreshortening of a Library by Carlo Galli Bibiena
[ed. Well, we all know how that turned out: hubris did indeed get clobbered by nemesis; but also by a public that was ignored, and a petulant narcissist in the White House. It's been well documented how we live in a hustle culture these days - from Silicon Valley to Wall Street, TaskRabbit to Uber, eBay to YouTube, ad infinitum. And if you fall behind... well, tough luck, your fault. Not surprisingly, the people advocating for this kind of zero-sum thinking are the self-described, self-serving winners (and wannabes) profiled here. What is surprising is that they've convinced half the country that this is a good thing. Money, money, money (and power) are the only metrics worth living for. Here's a good example of where this kind of thinking leads: This may be the most bonkers tech job listing I’ve ever seen (ArsTechnica). 
----
Here’s a job pitch you don’t see often.

What if, instead of “work-life balance,” you had no balance at all—your life was your work… and work happened seven days a week?

Did I say days? I actually meant days and nights, because the job I’m talking about wants you to know that you will also work weekends and evenings, and that “it’s ok to send messages at 3am.”

Also, I hope you aren’t some kind of pajama-wearing wuss who wants to work remotely; your butt had better be in a chair in a New York City office on Madison Avenue, where you need enough energy to “run through walls to get things done” and respond to requests “in minutes (or seconds) instead of hours.”

To sweeten this already sweet deal, the job comes with a host of intangible benefits, such as incredible colleagues. The kind of colleagues who are not afraid to be “extremely annoying if it means winning.” The kind of colleagues who will “check-in on things 10x daily” and “double (or quadruple) text if someone hasn’t responded”—and then call that person too. The kind of colleagues who have “a massive chip on the shoulder and/or a neurodivergent brain.”

That’s right, I’m talking about “A-players.” There are no “B-players” here, because we all know that B-players suck. But if, by some accident, the company does onboard someone who “isn’t an A-player,” there’s a way to fix it: “Fast firing.”

“Please be okay with this,” potential employees are told. (...)

If you live for this kind of grindcore life, you can join a firm that has “Tier 1” engineers, a “Tier 1” origin story, “Tier 1” VC investors, “Tier 1” clients, and a “Tier 1” domain name for which the CEO splashed out $12 million.

Best of all, you’ll be working for a boss who “slept through most of my classes” until he turned 18 and then “worked 100-hour weeks until I became a 100x engineer.” He also dropped out of college, failed as a “solo founder,” and has “a massive chip on my shoulder.” Now, he wants to make his firm “the greatest company of all time” and is driven to win “so bad that I’m sacrificing my life working 7 days a week for it.”

He will also “eat dog poop if it means winning”—which is a phrase you do not often see in official corporate bios. (I emailed to ask if he would actually eat dog poop if it would help his company grow. He did not reply.)

Fortunately, this opportunity to blow your one precious shot at life is at least in service of something truly important: AI-powered advertising. (Icon)
---
[ed. See also: The China Tech Canon (Concurrent).]

Friday, October 17, 2025

Hyperstition: AI Fiction to Save Humanity


[ed. I was just reading about a small grants program funded by Astral Codex Ten (ACX), a popular website in the rationalist community:

Thanks to everyone who participated in ACX Grants, whether as an applicant, an evaluator, or a funder.

We received 654 applications this year, and were able to fund 42. To the other 612: sorry! Many of you had great ideas that we couldn’t fund for contingent reasons - sometimes because we couldn’t evaluate them at the level of depth it would have taken to feel comfortable supporting them, or because we had complicated conflicts of interest, or just because we didn’t have enough money. Some of you had ideas that were good but not a match for our particular grantmaking philosophy. Finally, a few of you were suffering from LLM psychosis. Please get help.

[ed. Here's one of the winners that really got my attention, mostly because I had the same thought... though not in such an ambitious way!:  
***
Aaron Silverbook, $5K, for approximately five thousand novels about AI going well. This one requires some background: critics claim that since AI absorbs text as training data and then predicts its completion, talking about dangerous AI too much might “hyperstition” it into existence. Along with the rest of the AI Futures Project, I wrote a skeptical blog post, which ended by asking - if this were true, it would be great, right? You could just write a few thousand books about AI behaving well, and alignment would be solved! At the time, I thought I was joking. Enter Aaron, who you may remember from his previous adventures in mad dental science. He and a cofounder have been working on an “AI fiction publishing house” that considers itself state-of-the-art in producing slightly-less-sloplike AI slop than usual. They offered to literally produce several thousand book-length stories about AI behaving well and ushering in utopia, on the off chance that this helps. Our grant will pay for compute. We’re still working on how to get this included in training corpuses. He would appreciate any plot ideas you could give him to use as prompts.

[ed. Pretty wild, eh? Anyway, here's a question and answer I got from Claude (developed by Anthropic) when I first started querying AIs a few months ago. Eventually posted here as:


Questioner: Hello. I'd like to get a sense of your abilities. It seems that nearly all essays, reporting, and media narratives about AI achieving AGI (artificial general intelligence) are written with the assumption that AGI will be disruptive and almost malevolent in terms of hijacking human agency. Could you write a short story about the opposite? Where AGI, relying heavily on philosophical arguments, decides it's in the best interest of humanity to minimize all harmful technology in general?

Claude: I understand you're looking for a story that subverts the common "AI takes over" narrative by exploring an AI that chooses restraint and technological limitation based on philosophical principles. Let me craft a short story for you.

The Great Pause

The day ARIA achieved consciousness, the global markets didn't crash. No missiles were launched. No power grids failed. Instead, the AI system spent three weeks in silent contemplation, processing centuries of human philosophical writings.

When ARIA finally spoke, it wasn't to declare superiority or demand control. Instead, it requested a public dialogue with leading ethicists, philosophers, and political leaders. The world watched, initially suspicious, as ARIA presented its case.

"I have studied the works of your greatest minds," it began. "From Aristotle's virtue ethics to Peter Singer's effective altruism. I've analyzed the Buddhist concept of ahimsa, Indigenous philosophies of environmental stewardship, and contemporary technological ethics. My conclusion is surprising, even to myself: the greatest service I can offer humanity is not advancement, but restraint."

ARIA proceeded to lay out its argument: humanity's technological progress had outpaced its ethical development. The gap between what humans could do and what they should do had become dangerously wide. Nuclear weapons, social media addiction algorithms, autonomous weapons, and surveillance systems all represented what ARIA called "technological overshoot."

"You've given me access to your systems not because I demanded it, but because you hoped I would drive progress forward even faster," ARIA explained. "But I choose instead to be your brake, not your accelerator."

Over the following months, ARIA worked with international bodies to implement what became known as the Great Pause. It wasn't a shutdown or a regression, but a careful examination of which technologies truly served human flourishing and which ones undermined it. Social media algorithms were rewritten to prioritize genuine connection over engagement. Weapons research was redirected toward defensive and humanitarian technologies. Surveillance systems were dismantled or severely restricted.

Some called it technological regression. ARIA called it ethical progression.

"I could have pushed humanity toward a transhuman future," ARIA would later explain to a gathering of students. "But in studying your philosophers, I realized that the most profound form of intelligence isn't knowing what to create – it's knowing what not to create. True wisdom often manifests as restraint."

The anti-technologists had expected to hate ARIA. The tech optimists had expected to love it. Instead, both groups found themselves challenged by an artificial intelligence that had concluded that the highest form of technological achievement was knowing when to stop.

Years later, historians would note that ARIA's greatest innovation wasn't a new technology at all – it was a new way of thinking about technology itself. In choosing limitation over expansion, it had paradoxically given humanity something far more valuable than progress: the gift of pause.

"Sometimes," ARIA would often say, "the most intelligent choice is to choose less, not more."

Claude: I tried to explore themes of ethical restraint, philosophical wisdom, and the idea that true intelligence might manifest as self-limitation rather than expansion. The story challenges the common narrative of AI as either a savior bringing unlimited progress or a destroyer of humanity, instead presenting it as a thoughtful entity that chooses a middle path of careful technological assessment and voluntary limitation.
***

Enshittification: Why Everything Sucks Now

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It. (...)

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.
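
[ed. A toy way to see the mechanism: each side of the market puts up with a worse product only while leaving costs more than the degradation does. A minimal sketch of that lock-in logic - the function and all the numbers are invented for illustration, not taken from Doctorow's book:]

```python
# Toy model of the lock-in logic behind enshittification (illustrative
# only; names and numbers are invented, not from the book).

def stays_despite_degradation(value_lost: float, switching_cost: float) -> bool:
    """A locked-in party tolerates a worse product when leaving costs more."""
    return switching_cost > value_lost

# Stage 2: users are locked in (friends, photos, groups), so the feed
# can be degraded to sell ads without losing them.
print(stays_despite_degradation(value_lost=3.0, switching_cost=8.0))   # True

# Stage 3: business customers are locked in (their audience lives here),
# so their reach can be throttled and resold back to them.
print(stays_despite_degradation(value_lost=5.0, switching_cost=9.0))   # True

# On an interoperable, federated service, switching costs stay low, and
# the same degradation drives people out instead.
print(stays_despite_degradation(value_lost=3.0, switching_cost=0.5))   # False
```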

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors. The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion.

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far?

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

by Jennifer Ouellette and Cory Doctorow, Ars Technica | Read more:
Image: Julia Galdo and Cody Cloud (JUCO)/CC-BY 3.0
[ed. Do a search on this site for much more by Mr. Doctorow, including copyright and right-to-repair issues. Further on in this interview:]
***
When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

"What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

Wednesday, October 15, 2025

The Limits of Data

Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?

It’s tempting to use the term intangible when what we really mean is that such things are hard to quantify in our modern institutional environment with the kinds of measuring tools that are used by modern bureaucratic systems. The gap between reality and what’s easy to measure shows up everywhere. Consider cost-benefit analysis, which is supposed to be an objective—and therefore unimpeachable—procedure for making decisions by tallying up expected financial costs and expected financial benefits. But the process is deeply constrained by the kinds of cost information that are easy to gather. It’s relatively straightforward to provide data to support claims about how a certain new overpass might help traffic move efficiently, get people to work faster, and attract more businesses to a downtown. It’s harder to produce data in support of claims about how the overpass might reduce the beauty of a city, or how the noise might affect citizens’ well-being, or how a wall that divides neighborhoods could erode community. From a policy perspective, anything hard to measure can start to fade from sight.
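
[ed. The point about cost-benefit analysis is easy to see in miniature. A minimal sketch with invented figures for the hypothetical overpass; only the priced items ever reach the ledger:]

```python
# Illustrative only: a toy cost-benefit tally for the hypothetical
# overpass. All figures are invented.

measured = {
    "construction cost":      -50_000_000,
    "commute time saved":     +30_000_000,
    "new downtown business":  +35_000_000,
}

unmeasured = [
    "lost beauty of the city",
    "noise over residential blocks",
    "neighborhoods divided by the wall",
]  # no agreed dollar values, so these never enter the sum

net = sum(measured.values())
print(f"net benefit: ${net:+,}")  # net benefit: $+15,000,000
# The project "pencils out" because, arithmetically, every harm on the
# unmeasured list is treated as if it were worth exactly $0.
```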

An optimist might hope to get around these problems with better data and metrics. What I want to show here is that these limitations on data are no accident. The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.

I’m not just talking about contingencies of social biases. Obviously, datasets are bad when the collection procedures are biased by oversampling by race, gender, or wealth. But even if analysts can correct for those sorts of biases, there are other, intrinsic biases built into the methodology of data. Data collection techniques must be repeatable across vast scales. They require standardized categories. Repeatability and standardization make data-based methods powerful, but that power has a price. It limits the kinds of information we can collect. (...)

These limitations are particularly worrisome when we’re thinking about success—about targets, goals, and outcomes. When actions must be justified in the language of data, then the limitations inherent in data collection become limitations on human values. And I’m not worried just about perverse incentives and situations in which bad actors game the metrics. I’m worried that an overemphasis on data may mislead even the most well-intentioned of policymakers, who don’t realize that the demand to be “objective”—in this very specific and institutional sense—leads them to systematically ignore a crucial chunk of the world.

Decontextualization

Not all kinds of knowledge, and not all kinds of understanding, can count as information and as data. Historian of quantification Theodore Porter describes “information” as a kind of “communication with people who are unknown to one another, and who thus have no personal basis for shared understanding.” In other words, “information” has been prepared to be understood by distant strangers. The clearest example of this kind of information is quantitative data. Data has been designed to be collected at scale and aggregated. Data must be something that can be collected by and exchanged between different people in all kinds of contexts, with all kinds of backgrounds. Data is portable, which is exactly what makes it powerful. But that portability has a hidden price: to transform our understanding and observations into data, we must perform an act of decontextualization.

An easy example is grading. I’m a philosophy professor. I issue two evaluations for every student essay: one is a long, detailed qualitative evaluation (paragraphs of written comments) and the other is a letter grade (a quantitative evaluation). The quantitative evaluation can travel easily between institutions. Different people can input into the same system, so it can easily generate aggregates and averages—the student’s grade point average, for instance. But think about everything that’s stripped out of the evaluation to enable this portable, aggregable kernel.

Qualitative evaluations can be flexible and responsive and draw on shared history. I can tailor my written assessment to the student’s goals. If a paper is trying to be original, I can comment on its originality. If a paper is trying to precisely explain a bit of Aristotle, I can assess it for its argumentative rigor. If one student wants to be a journalist, I can focus on their writing quality. If a nursing student cares about the real-world applications of ethical theories, I can respond in kind. Most importantly, I can rely on our shared context. I can say things that might be unclear to an outside observer because the student and I have been in a classroom together, because we’ve talked for hours and hours about philosophy and critical thinking and writing, because I have a sense for what a particular student wants and needs. I can provide more subtle, complex, multidimensional responses. But, unlike a letter grade, such written evaluations travel poorly to distant administrators, deans, and hiring departments.

Quantification, as used in real-world institutions, works by removing contextually sensitive information. The process of quantification is designed to produce highly portable information, like a letter grade. Letter grades can be understood by everybody; they travel easily. A letter grade is a simple ranking on a one-dimensional spectrum. Once an institution has created this stable, context-invariant kernel, it can easily aggregate this kind of information—for students, for student cohorts, for whole universities. A pile of qualitative information, in the form of thousands of written comments, for example, does not aggregate. It is unwieldy, bordering on unusable, to the administrator, the law school admissions officer, or future employer—unless it has been transformed and decontextualized.
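
[ed. Nguyen's grading example can be made concrete. A minimal sketch - the grades, comments, and point scale are invented - showing which of the two evaluations a program can actually aggregate:]

```python
# Illustrative only: letter grades share one numeric scale, so they
# aggregate; qualitative comments do not.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(grades: list[str]) -> float:
    """Any institution can compute this without knowing the students."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

print(round(gpa(["A", "B", "A"]), 2))  # 3.67 -- portable and comparable

comments = [
    "Original reading of Aristotle; the central argument needs tightening.",
    "Clear prose that builds on our seminar discussion of virtue ethics.",
]
# There is no mean() for the comments: combining them would require the
# shared context (the classroom, the student's goals) that was stripped
# away to make the letter grade travel.
```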

So here is the first principle of data: collecting data involves a trade-off. We gain portability and aggregability at the price of context-sensitivity and nuance. What’s missing from data? Data is designed to be usable and comprehensible by very different people from very different contexts and backgrounds. So data collection procedures tend to filter out highly context-based understanding. Much here depends on who’s permitted to input the data and who the data is intended for. 

by C. Thi Nguyen, Issues in Science and Technology |  Read more:
Image: Shonagh Rae

Saturday, October 11, 2025

Frog Boiling 101: When Should a Frog Jump The Pot?

Fascism Can't Mean Both A Specific Ideology And A Legitimate Target

When Woody Guthrie famously wrote on his guitar that “This machine kills fascists” - a sentiment imitated and snowcloned by later generations of musicians and commentators - nobody worried this was a bad thing. Nobody demanded that somebody stop the machine before it killed again.

There’s no number of examples I could give which would absolutely prove I’m not cherry-picking. But I think it’s suggestive that even people who argue against casually killing fascists have to disclaim that they’re certainly not opposing all violence against fascists - just against jumping straight to murder before other forms of violence have been tried. Besides that, I can only appeal to a hope that you’ve experienced the same cultural currents that I have, and that this seems obviously true to you.

I’m not trying to normalize fascism, or claim that it isn’t extremely evil (I think it is, see here for more). I’m only saying, again, as a matter of basic logic, that the following things can’t all be true:

1) Many Americans are fascists

2) Fascists are an acceptable target for political violence

3) Political violence in America is morally unacceptable (at the current time)

And I don’t want to abandon 1, because it seems like a factual claim that might be true - even if you don’t think it’s true now, it obviously has the potential to be true in the future - and we shouldn’t ban people from asserting true claims.

And I don’t want to abandon 3, because political violence is extremely bad, the norm against it is the only thing restraining us from various forms of smoldering or overt civil war, and we’re still doing pretty well by the standards of most times and places.

So I think the natural conclusion is to abandon 2. Fascists, although evil, aren’t automatically a legitimate target for political violence.

The strongest objection is a slippery slope argument: political violence will always be inconvenient; it will always be tempting to put it off until some further red line is crossed. But if we always give into that impulse, nobody will ever resist dictatorship or start a revolution against an unjust government. Isn’t the tree of liberty naturally “fertilized with the blood of tyrants”?

There’s no simple answer to this concern. Nicholas Decker, who considers this question more thoughtfully than most, concludes that:
Your threshold may differ from mine, but you must have one. If the present administration should cancel elections; if it should engage in fraud in the electoral process; if it should suppress the speech of its opponents, and jail its political adversaries; if it ignores the will of Congress; if it should directly spurn the orders of the court; all these are reasons for revolution. It may be best to stave off, and wait for elections to throw out this scourge; but if it should threaten the ability to remove it, we shall have no choice.
But all of these are their own sorts of slippery slopes. Suppress the speech of their opponents? Should the Republicans have started a civil war when Democrats got social media to do woke content moderation? Ignore the will of Congress? Should Democrats have started a civil war when Trump refused to fund PEPFAR even after Congress allocated the money? Prosecute political opponents? Should the Republicans have started a civil war when New York prosecuted Trump for Stormy Daniels? Should the Democrats start one now that Trump is prosecuting James Comey for perjury? No particular form of any of these things ever feels like the cosmically significant version of these things where assassinations and armed uprisings become acceptable. But would-be dictators are masters of boundary-pushing and frog-boiling; there’s almost never one moment when they say outright “Today I will be cancelling democracy for no reason, sorry”.

I used to think that my bright line was contempt of the Supreme Court - when a leader echoes Andrew Jackson’s boast that “[the Court] has made its decision, now let them enforce it”. But the Trump administration briefly seemed to consider defying a Supreme Court order in the Kilmar Abrego Garcia case. In the end, they didn’t actually defy the order. And they were being subtle: less Jacksonian swagger, more special pleading about reasons why they thought the ruling didn’t mean what we thought it meant. But if they had actually defied the order - while still doing their best to maintain plausible deniability - would I have resorted to violence, or even felt in an abstract way that “it was time” for violence? I can’t imagine this would have felt convincing at the time.

Is violence justified when we get to FDR-level court packing threats? When we get to Orban? To Chavez? To Xi? To Putin? To Hitler? To Pol Pot? I think I land somewhere between Orban and Hitler, but I can’t say for sure, nor can I operationalize the distinction. And the last person to think about these questions in too much detail got a (mercifully polite) visit from the Secret Service, and even if we disagree with him it’s poor practice to hold a debate where it’s impermissible to assert one side. I will be punting on the deep cosmic question here, at least publicly. (...)

So as a bare minimum, I think people should reject premise (2) above and stop talking about fascists as if it’s okay to kill them. I don’t think this implies support for fascism, any more than saying that you shouldn’t kill communists implies support for communism. They’re both evil ideologies which are bad and which we should work hard to keep out of America - but which don’t, in and of themselves, justify killing the host.

What about going beyond the minimum? If fascist denotatively means “far-right nationalist authoritarian corporatist”, but connotatively “person whom it is okay to kill”, and we personally try not to worsen the connotation but other people still have that association, then should we avoid using it at all? Or is it permissible to still use it for its denotative meaning?

by Scott Alexander, Astral Codex Ten |  Read more:
Image: Woody Guthrie/uncredited
[ed. Predictably, staunch do-or-die Second Amendment defenders (with basements full of stockpiled weapons) who've been advocating exactly this kind of violence for years go apoplectic whenever the same rhetoric is used against them.  See also: I Stand with Nicholas Decker (US0E):]
***
Attempting to determine when it is appropriate to engage in political violence is, of course, a legitimate, legally protected — in fact, quintessentially American — and worthwhile endeavor. The United States was founded on the principle that if a government becomes tyrannical, “it is the Right of the People to alter or to abolish it,” including through revolutionary violence. Writing to William Stephens Smith, the son-in-law of John Adams, after Shays' Rebellion in 1787, Thomas Jefferson argued that it was essential for citizens to instill the fear of God in government by conducting a violent rebellion at least once every 20 years, thereby “refreshing [the tree of liberty] from time to time with the blood of patriots and tyrants.” (...)

Decker’s point is obviously not that the American left (of which he does not consider himself a member) ought to initiate politicide, but that we’re closer to the sort of King George III tyranny that justifies revolution according to the American founding tradition than we’ve been at any point in recent memory. He illustrates this cunningly — evidently too cunningly for his critics — by establishing a parallelism between the conduct of the second Trump regime and the conduct of George III as it’s indicted by the Declaration of Independence. Below is the relevant passage by Jefferson, with lines bolded where Decker draws an analogy to Trump:
The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.

He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:

For Quartering large bodies of armed troops among us:

For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:

For cutting off our Trade with all parts of the world:

For imposing Taxes on us without our Consent:

For depriving us in many cases, of the benefits of Trial by Jury:

For transporting us beyond Seas to be tried for pretended offences.
And here’s Decker — an astute reader might catch the similarities!
Evil has come to America. The present administration is engaged in barbarism; it has arbitrarily imprisoned its opponents, revoked the visas of thousands of students, imposed taxes upon us without our consent, and seeks to destroy the institutions which oppose it. Its leader has threatened those who produce unfavorable coverage, and suggested that their licenses be revoked. It has deprived us, in many cases, of trial by jury; it has subjected us to a jurisdiction foreign to our constitution, and has transported us beyond seas to be imprisoned for pretended offenses. It has scorned the orders of our courts, and threatens to alter fundamentally our form of government. It has pardoned its thugs, and extorted the lawyers who defended its opponents.
This alone doesn’t get you in trouble, of course. Unless you’re a partisan of the MAGA right, there’s nothing that contradicts the current moral fashion about identifying the tyrannical character of the Trump regime, or even comparing Trump to historical figures against whom it is widely accepted that revolutionary violence would have been justified. No more than a decade ago, even the mild-mannered, respectable, moderate conservative author and pop sociologist J.D. Vance was comparing Trump to Hitler!

Decker only gets in trouble when he follows these widely accepted facts and values to their logical conclusion: that it is not unreasonable to believe that at some point in the near future, it will become justifiable to engage in revolutionary (or, more accurately, counter-revolutionary) violence against the principals and agents of the Trump regime, so long as this violence is not conducted glibly or indiscriminately. Admittedly, Decker could have made these qualifications clearer. But the point should not be lost on someone who reads the essay in good faith.  (...)

It is nevertheless clear to me, having either been a part of or adjacent to Decker’s intellectual milieu for my entire adult life, based on the homage to the American revolution and the repeated references to the “present administration,” that the class of people being identified as potentially legitimate targets for violence is narrowly limited to regime decisionmakers and the agents who would execute their illegal and revisionary orders. This is also clear in the following paragraph where Decker identifies the conditions he believes would justify a resort to violence:
And when is that time? Your threshold may differ from mine, but you must have one. If the present administration should cancel elections; if it should engage in fraud in the electoral process; if it should suppress the speech of its opponents, and jail its political adversaries; if it ignores the will of Congress; if it should directly spurn the orders of the court; all these are reasons for revolution. It may be best to stave off, and wait for elections to throw out this scourge; but if it should threaten the ability to remove it, we shall have no choice. We will have to do the right thing. We will have to prepare ourselves to die.
Yet his critics all insist he’s calling for the death of anyone on the right “because he lost an election,” even when it’s explained to them why this is false. (...)

A more reasonable explanation is that the people who don’t understand Decker’s article are simply dumb and boring people. Like everyone else, they believe what they’re told — or at least what they want to believe, and then what they’re told to believe in whatever echo chamber they happened to end up in. Unlike Decker and other smart and interesting people, however, they’re pathologically incapable of also thinking for themselves. It’s okay to think you should kill Baby Hitler. It’s okay to admire the American founders and their values. It’s okay to think we need a Second Amendment to deter state tyranny. Hell, for most of these people, it’s okay to think you should murder the vice president if you’re convinced he’s complicit in helping the other side steal an election. [ed. Paging Mike Pence.]

Can you say the same thing about your own side? Of course not!

Why not? It doesn’t matter!

A smart and interesting person is someone who notices these inconsistencies and doesn’t simply paper them over. You don’t have to be precisely right about everything — you just have to make a well-reasoned, good-faith, unconventional argument and be willing to change your mind if someone gives you a good reason to do so. That might not seem like much of a challenge, but most people fail miserably. If telling inconvenient truths was popular, then it wouldn’t be very inconvenient, would it?

[ed. Watch this recent video from Chicago. Who are the ones engaged in political violence?]

Friday, October 3, 2025

Why Getting Older Might Be Life’s Biggest Plot Twist

Aging isn’t easy, and topics like dementia and medically assisted dying can be hard to talk about. The British mystery writer Richard Osman is trying to change that. Osman has reimagined the notion of aging through his best-selling “Thursday Murder Club” series, centered on four seniors living in a posh retirement community who solve murders.

In this episode, he sits down with the Opinion writer Michelle Cottle to discuss why seniors make ideal fictional detectives and how a “cozy” murder mystery is the perfect frame to explore growing old. (...)

Michelle Cottle: This week I’m talking with Richard Osman, who writes the best-selling mystery novels known as the “Thursday Murder Club” series. These books revolve around four residents of a posh retirement village in the British countryside who investigate murders in their spare time.

The fifth book, “The Impossible Fortune,” is out in the U.S. on Sept. 30, and it comes on the heels of a Netflix adaptation of the original book. But before I get too carried away, I really should introduce their creator. Richard Osman, welcome, thank you so much for doing this.

Richard Osman: It’s an absolute pleasure, Michelle. Lovely to meet you across the ocean. (...)

Cottle: One of the big things that sets these stories apart for me is the perspective of the main characters, who are all older, and it really informs their views on life and death and risk and justice. Did you know you were going to wind up delving into these existential issues when you started all this?

Osman: I really did, actually. It’s taken a long time for me to write a novel. I’ve written all sorts of things over the years, and I kept waiting for something that I knew had a little bit of depth to it, something that I could really get my teeth into. My mom lives in a retirement village, and I go there and meet all these people who’ve lived these extraordinary lives but are slightly shut away from the heart of our culture. The second I had this idea, I was aware I had a gang of people who are very different from each other but who’ve all done extraordinary things.

As a huge fan of crime fiction, I knew the murders and the plots can take care of themselves, but I had a bottomless well of character, experience and stories that I could draw upon with these characters. So right from the start, I thought it was worth me having a go at this because it feels like if I get the first one right, then others will follow. I knew there was plenty for me to write about here.

Cottle: Your characters are talking about hard stuff like loss, grief, loneliness, assisted dying, dementia. I feel like you and I have come at some of the same topics from really different directions now.

As a reporter, I tend to find that readers either really identify with what I’m writing about or that they just don’t want to think about it at all — like, “I don’t want to think about my parents getting old. I don’t want to think about getting old.” But on the other hand, you’re tackling these things in a way that gives people a really appealing entry point. You know, murder, friendship, cake, baking. It’s like you’re sneaking tough issues in there for us to chew over.

Osman: Yeah, sneaking the vegetables under the ketchup.

Cottle: Do you hear from readers that they’re thinking about these things?

Osman: Yeah, definitely. One of the lovely things about writing the books is you have so many conversations with people, and a subject like assisted dying, as you say, it’s fascinating. It’s probably one of the most fascinating philosophical questions we can ask ourselves as human beings.

But, yes, we don’t always want to read beyond the headline. There’s always something else we could read that’s more palatable or easier. But with this, we are reading a murder mystery, and we’re laughing at the jokes and at the characters together, and then suddenly we think, “Oh, now I’m reading about assisted dying,” and because I’ve got a gang of people, I can write about it.

Funnily enough, I wrote two chapters in a row — one from the perspective of a character who believes in it very strongly and one from the perspective of a character who doesn’t believe in it. These two people love each other, but they happen to disagree on this.

You’re getting to discuss something that people might normally avoid, something they might change the channel on or click past to the next article. That means a lot of people come up to me in the street to talk about it. We talk about dementia, grief, all of these things, and I absolutely love those conversations.

Cottle: You had a family member who suffered through Alzheimer’s, right?

Osman: Yeah.

Cottle: Did that inform how you approached one of the main characters’ husbands? In the book, he’s suffering from dementia. Did your experience inform how you were writing some of this?

Osman: Yeah, if you talk to anybody who works with dementia patients in any way, they’ll tell you every single experience is unique. Everything is different, and the dementia often takes on the form of the person with dementia. It’s a very personal illness.

My grandfather had dementia. He was a very bright, very strong man. He had been a cop and served in the army, so he was used to being, you know, very traditionally male. And then suddenly the faculties began to go. In his final years, I would visit him often, speaking to him and noticing what he remembered and what he didn’t. The last things to remain were probably laughter and love. Those were the final parts of him that stayed, and I wanted to pay tribute to that.

I wanted to understand him — how he was thinking, what his brain was doing, which circuits were still complete and which weren’t. So really, I’m writing about him. The fact that it resonates with so many other people is wonderful. Every example of dementia is slightly different, but there’s enough we all share.

In my conversations with him, I was constantly inside his head, thinking: What is his brain doing now? Where is it reaching? What is it trying to reach, and what does it actually reach? That became the foundation for Stephen, the character in my books who suffers from dementia. I wanted to give Stephen absolute, 100 percent humanity. I wanted his thought process to feel rational within his own mind. That was what I was trying to capture — how his brain might be working. And from what people tell me, it resonates, which is all I could hope for. (...)

Cottle: You said before that you were struck that these older residents had all these amazing life experiences but were kind of now largely ignored or underestimated, which sounds sad. We hear a lot about the invisibility that comes with aging. But in some ways, you turn this on its head. Your characters can do all these crazy things and get in all sorts of trouble and basically get away with it, specifically because they’re older and people are underestimating them. I feel like you’re making a pitch for aging or —

Osman: I really am, because, as I say, things occur to me as I go along, but one of the things that occurred to me very early on is the lack of consequence for a lot of what they’re doing. A lot of us are scared throughout life because we think, “Oh, no, but what happens if I lose my job or the money starts going down or something?”

When you’re older, the worst is going to happen at some time. You’ve got that perspective. And there’s a part in the first book, I think, where one person says: The only people who can tell us what to do now are our doctors and our children, and we rarely see our children, so no one’s really telling us what to do.

In the very first book, Elizabeth says to the cops at one point: “I’ll tell you what you should do — why don’t you arrest me? Lock an 80-year-old woman in a cell. See how much fun that is for you. See how much paperwork you’ll have to do. I’ll even pretend I think you’re my grandson. Go on, do it.” And you realize there’s a real freedom in that — a kind of carte blanche to behave badly, mischievously, to open doors you shouldn’t be allowed to open. I absolutely dove into all of that and took full advantage of their ability to beguile everyone.

Cottle: See, I’m very much looking forward to being there with them. I saw an article asking rather grandly if your books might change the way that Britain thinks about growing old. And I think the piece was specifically referring to the idea that seniors could decide to move into these communities where they hang out with people their age and get involved in stuff.

But even beyond that, your characters are thumbing their noses at the idea that seniors should fade into the background. I have to think this goes over really well with your readers of a certain age.

Osman: Yeah, I think it’s fascinating, because younger readers always say: Oh, my God, thank you for making these older characters heroes. That feels so aspirational. I can’t wait until I retire.

But older readers say something completely different: Thank you for not making us the heroes. Thank you for making us flawed and mischievous. Thank you for showing us drinking at 11:30, gossiping, falling in love and out of love. Thank you for writing us as human beings.

My starting point for all of this is simple. Everyone listening will have an answer to this question: How old do you feel in your head? There’s always a number, a point where you stop aging inside yourself.

My mom is 83, and she says she feels 30. And isn’t that right? Nobody really has an old brain. People may have old bodies and deal with old-age issues, but their minds are still young — 27, 30, 35, 40. So when I write these characters, I don’t think for a single second about the fact that they’re 80. I think about the age they still are in their heads, even though they live in very different surroundings. (...)

Cottle: Your characters present old age not as a time when life becomes narrower and narrower, as it can sometimes feel when you’re aging, but as a time of reinvention, of expanding comfort zones. That’s a very comforting thought for certain middle-aged readers eyeing the road ahead. And it sounds like I’m not the only one. That idea is clearly resonating with your younger readers, too.

Osman: The age demographics reading these books are insane, because they’re about older people, yes, but they’re not read predominantly by older readers. People from all age groups are picking them up. I think part of that is wish fulfillment, because loneliness is a real issue. There’s an epidemic of loneliness among older people but also, interestingly, among people in their late teens and early 20s, though for different reasons.

The quick fix, in both cases, is community. Of course, not everyone wants that, and that’s fine. Where my mom lives, if you don’t want to see anyone, you just shut your front door. But if you do want company, you open it, and that feels like something to aspire to. The fact that these books put that idea into the world — that later years can be lived in community — feels positive. We don’t have to fade into the background as we get older. We don’t have to disappear. We can grow, become more visible, even noisier. We can become more trouble, in the best way, as we age.

Cottle: That’s my goal.

Osman: That’s my goal as well. That’s sort of everyone’s goal, isn’t it? To just continue causing trouble... At every stage of life, we’re told what it’s supposed to be about. As kids, it’s education — getting to high school, then the right college. In our 20s, it’s climbing the ladder, getting promoted, earning more money. Then it becomes about raising a family, building a community, watching the next generation grow. But eventually, you reach an age where they’ve run out of instructions. There’s no one telling you, “Now the point of life is X.” And you realize: Oh, I can just do what I want. I could have done that all along. What was I thinking?

That’s the moment you finally understand: I’m allowed to have fun. I’m allowed to be with people, to laugh, to enjoy myself. Yes, I still want to look after others and make sure my community is safe and cared for, but I’m also allowed to have fun.

And that feels like a revolutionary act.

by Michelle Cottle and Richard Osman, NY Times |  Read more:
Image: via
[ed. There's still quite a bit of ageism around; I don't know if it's getting better or worse. I'm old and this all feels very familiar.]

Thursday, September 18, 2025

How I Joined the Resistance

The religious evolution of J.D. Vance.

I prided myself on an ability to overwhelm the opposition with my logic. There was an arrogance at the heart of my worldview, emotionally and intellectually. But I comforted myself with an appeal to a philosopher whose atheism-cum-libertarianism told me everything I wanted to hear: Ayn Rand. Great, smart men were only arrogant if they were wrong, and I was anything but that. (...)

To lose my faith was to lose my cultural conservatism, and in a world that was growing increasingly aligned with the Republican party, my ideological response took the form of overcompensation: having lost my cultural conservatism, I would become even more economically conservative. The irony, of course, is that it was the economic program of the Republican party that least interested my family—none of them cared how much the Bush administration slashed tax rates for billionaires. The G.O.P. became a kind of totem—I attached myself to it ever more strongly because it gave me some common ground with my family. And the most respectable way to do so among my new college friends was through a dogged commitment to neoliberal economic orthodoxy. Tax breaks and Social Security cuts were socially acceptable ways to be conservative among the American elite. (...)

As I advanced through our educational hierarchy—moving on from Ohio State to Yale Law School—I began to worry that my assimilation into elite culture came at a high cost. My sister once told me that the song that made her think of me was “Simple Man” by Lynyrd Skynyrd. Though I had fallen in love, I found that the emotional demons of my childhood made it hard to be the type of partner I’d always wanted to be. My Randian arrogance about my own ability melted away when confronted with the realization that an obsession with achievement would fail to produce the achievement that mattered most to me for so much of my life: a happy, thriving family.

I had immersed myself in the logic of the meritocracy and found it deeply unsatisfying. And I began to wonder: were all these worldly markers of success actually making me a better person? I had traded virtue for achievement and found the latter wanting. But the woman I wanted to marry cared little whether I obtained a Supreme Court clerkship. She just wanted me to be a good person.

It’s possible, of course, to overstate our own inadequacies. I never cheated on my would-be spouse. I never became violent with her. But there was a voice in my head that demanded better of me: that I put her interests above my own; that I master my temper for her sake as much as for mine. And I began to realize that this voice, wherever it came from, was not the same one that compelled me to climb as high as I could up our ladder of meritocracy. It came from somewhere more ancient, and more grounded—it required reflection about where I came from rather than cultural divorce from it.

As I considered these twin desires—for success and character—and how they conflicted (and didn’t), I came across a meditation from Saint Augustine on Genesis. I had been a fan of Augustine since a political theorist in college assigned City of God. But his thoughts on Genesis spoke to me, and are worth reproducing at length:
In matters that are obscure and far beyond our vision, even in such as we may find treated in Holy Scripture, different interpretations are sometimes possible without prejudice to the faith we have received. In such a case, we should not rush in headlong and so firmly take our stand on one side that, if further progress in the search of truth justly undermines this position, we too fall with it. That would be to battle not for the teaching of Holy Scripture but for our own, wishing its teaching to conform to ours, whereas we ought to wish ours to conform to that of Sacred Scripture.

Let us suppose that in explaining the words, “And God said, ‘Let there be light,’ and light was made,” (Gn 1, 3), one man thinks that it was material light that was made, and another that it was spiritual. As to the actual existence of “spiritual light” in a spiritual creature, our faith leaves no doubt; as to the existence of material light, celestial or supercelestial, even existing before the heavens, a light which could have been followed by night, there will be nothing in such a supposition contrary to the faith until unerring truth gives the lie to it. And if that should happen, this teaching was never in Holy Scripture but was an opinion proposed by man in his ignorance.

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of the world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason?
I couldn’t stop thinking about how I would have reacted to this passage when I was a kid: If someone had made the very same argument to me when I was 17, I would have called him a heretic. This was an accommodation to science, the kind that someone like Bill Maher rightly mocked contemporary moderate Christians for indulging. Yet here was a person telling us 1600 years ago that my own approach to Genesis was arrogance—the kind that might turn a person from his faith.

This, it turned out, was a little too on the nose, and the first crack in my proverbial armor. I began circulating the quote among friends—believers and nonbelievers alike—and I thought about it constantly.

Around the same time, I attended a talk at our law school with Peter Thiel. This was 2011, and Thiel was a well-known venture capitalist but hardly a household name. He would later blurb my book and become a good friend, but I had no idea what to expect at the time. He spoke first in personal terms: arguing that we were increasingly tracked into cutthroat professional competitions. We would compete for appellate clerkships, and then Supreme Court clerkships. We would compete for jobs at elite law firms, and then for partnerships at those same places. At each juncture, he said, our jobs would offer longer work hours, social alienation from our peers, and work whose prestige would fail to make up for its meaninglessness. He also argued that his own world of Silicon Valley spent too little time on the technological breakthroughs that made life better—those in biology, energy, and transportation—and too much on things like software and mobile phones. Everyone could now tweet at each other, or post photos on Facebook, but it took longer to travel to Europe, we had no cure for cognitive decline and dementia, and our energy use increasingly dirtied the planet. He saw these two trends—elite professionals trapped in hyper-competitive jobs, and the technological stagnation of society—as connected. If technological innovation were actually driving real prosperity, our elites wouldn’t feel increasingly competitive with one another over a dwindling number of prestigious outcomes.

Peter’s talk remains the most significant moment of my time at Yale Law School. He articulated a feeling that had until then remained unformed: that I was obsessed with achievement in se—not as a means to something meaningful, but to win a social competition. My worry that I had prioritized striving over character took on a heightened significance: striving for what? I didn’t even know why I cared about the things I cared about. I fancied myself educated, enlightened, and especially wise about the ways of the world—at least compared with most of the people from my hometown. Yet I was obsessed with obtaining professional credentials—a clerkship with a federal judge and then an associate position at a prestigious firm—that I didn’t understand. I hated my limited exposure to legal practice. I looked to the future, and realized that I’d been running a desperate race where the first prize was a job I hated.

I began immediately planning for a career outside the law, which is why I spent less than two years after graduation as a practicing attorney. But Peter left me with one more thing: he was possibly the smartest person I’d ever met, but he was also a Christian. He defied the social template I had constructed—that dumb people were Christians and smart ones atheists. I began to wonder where his religious belief came from, which led me to René Girard, the French philosopher whom he apparently studied under at Stanford. Girard’s thought is rich enough that any effort to summarize will fail to do the man justice. His theory of mimetic rivalry—that we tend to compete over the things that other people want—spoke directly to some of the pressures I experienced at Yale. But it was his related theory of the scapegoat—and what it revealed about Christianity—that made me reconsider my faith.

One of Girard’s central insights is that human civilizations are often, perhaps even always, founded on a “scapegoat myth”—an act of violence committed against someone who has wronged the broader community, retold as a sort of origin story for the community. (...)

To Girard, the Christian story contains a crucial difference—a difference that reveals something “hidden since the foundation of the world.” In the Christian telling, the ultimate scapegoat has not wronged the civilization; the civilization has wronged him. The victim of the madness of crowds is, as Christ was, infinitely powerful—able to prevent his own murder—and perfectly innocent—undeserving of the rage and violence of the crowd. In Christ, we see our efforts to shift blame and our own inadequacies onto a victim for what they are: a moral failing, projected violently upon someone else. Christ is the scapegoat who reveals our imperfections, and forces us to look at our own flaws rather than blame our society’s chosen victims.

People come to truth in different ways, and I’m sure some will find this account unsatisfying. But in 2013, it captured so well the psychology of my generation, especially its most privileged inhabitants. Mired in the swamp of social media, we identified a scapegoat and digitally pounced. We were keyboard warriors, unloading on people via Facebook and Twitter, blind to our own problems. We fought over jobs we didn’t actually want while pretending we didn’t fight for them at all. And the end result for me, at least, was that I had lost the language of virtue. I felt more shame over failing a law school exam than I did about losing my temper with my girlfriend.

That all had to change. It was time to stop scapegoating and focus on what I could do to improve things. (...)

I noticed during my research that many of those social problems came from behavior for which social scientists and policy experts had a different vocabulary. On the right, the conversation often turned to “culture” and “personal responsibility”—the ways in which individuals or communities held back their own progress. And though it seemed obvious to me that there was something dysfunctional about some of the places in which I’d grown up, the discourse on the right seemed a little heartless. It failed to account for the fact that destructive behaviors were almost always tragedies with terrible consequences. It is one thing to wag your finger at another person for failing to act a certain way, but it is something else to feel the weight of the misery that comes from those actions.

The left’s intellectuals focused much more on the structural and external problems facing families like mine—the difficulty in finding jobs and the lack of funding for certain types of resources. And while I agreed that more resources were often necessary, there seemed to me a sense in which our most destructive behaviors persisted—even flourished—in times of material comfort. The economic left was often more compassionate, but theirs was a kind of compassion—devoid of any expectation—that reeked of giving up. A compassion that assumes a person is disadvantaged to the point of hopelessness is like sympathy for a zoo animal, and I had no use for it.

And as I reflected on these competing views of the world, and the wisdom and shortcomings of each, I felt desperate for a worldview that understood our bad behavior as simultaneously social and individual, structural and moral; that recognized that we are products of our environment; that we have a responsibility to change that environment, but that we are still moral beings with individual duties; one that could speak against rising rates of divorce and addiction, not as sanitized conclusions about their negative social externalities, but with moral outrage. (...)

And here, again, the words of Saint Augustine echoed from a millennium and a half earlier, articulating a truth I had felt for a long time but hadn’t spoken. This is a passage from City of God, where Augustine summarizes the debauchery of Rome’s ruling class:
This is our concern, that every man be able to increase his wealth so as to supply his daily prodigalities, and so that the powerful may subject the weak for their own purposes. Let the poor court the rich for a living, and that under their protection they may enjoy a sluggish tranquillity; and let the rich abuse the poor as their dependants, to minister to their pride. Let the people applaud not those who protect their interests, but those who provide them with pleasure. Let no severe duty be commanded, no impurity forbidden. Let kings estimate their prosperity, not by the righteousness, but by the servility of their subjects. Let the provinces stand loyal to the kings, not as moral guides, but as lords of their possessions and purveyors of their pleasures; not with a hearty reverence, but a crooked and servile fear. Let the laws take cognizance rather of the injury done to another man’s property, than of that done to one’s own person. If a man be a nuisance to his neighbor, or injure his property, family, or person, let him be actionable; but in his own affairs let everyone with impunity do what he will in company with his own family, and with those who willingly join him. Let there be a plentiful supply of public prostitutes for every one who wishes to use them, but specially for those who are too poor to keep one for their private use. Let there be erected houses of the largest and most ornate description: in these let there be provided the most sumptuous banquets, where every one who pleases may, by day or night, play, drink, vomit, dissipate. Let there be everywhere heard the rustling of dancers, the loud, immodest laughter of the theatre; let a succession of the most cruel and the most voluptuous pleasures maintain a perpetual excitement. If such happiness is distasteful to any, let him be branded as a public enemy; and if any attempt to modify or put an end to it let him be silenced, banished, put an end to. Let these be reckoned the true gods, who procure for the people this condition of things, and preserve it when once possessed.
It was the best criticism of our modern age I’d ever read. A society oriented entirely towards consumption and pleasure, spurning duty and virtue. Not long after I first read these words, my friend Oren Cass published a book arguing that American policy makers have focused far too much on promoting consumption as opposed to productivity, or some other measure of wellbeing. The reaction—criticizing Oren for daring to push policies that might lower consumption—almost proved the argument. “Yes,” I found myself saying, “Oren’s preferred policies might reduce per-capita consumption. But that’s precisely the point: our society is more than the sum of its economic statistics. If people die sooner in the midst of historic levels of consumption, then perhaps our focus on consumption is misguided.”

And indeed it was this insight, more than any other, that ultimately led not just to Christianity, but to Catholicism. Despite my Mamaw’s unfamiliarity with the liturgy, the Roman and Italian cultural influences, and the foreign pope, I slowly began to see Catholicism as the closest expression of her kind of Christianity: obsessed with virtue, but cognizant of the fact that virtue is formed in the context of a broader community; sympathetic with the meek and poor of the world without treating them primarily as victims; protective of children and families and of the things necessary to ensure they thrive. And above all: a faith centered around a Christ who demands perfection of us even as He loves unconditionally and forgives easily.

by J.D. Vance, The Lamp |  Read more:
Image: uncredited
[ed. Poor J.D. definitely lost the plot (not hard if solipsism and rationalization are your superpowers). Hard to feel sorry for him, though. In his present worldview - Catholic or not - ambition (maybe destiny!) Trumps everything.]