
Thursday, September 11, 2025

A.I. Is Coming for Culture

In the 1950 book “The Human Use of Human Beings,” the mathematician Norbert Wiener—the inventor of cybernetics, the study of how machines, bodies, and automated systems control themselves—argued that modern societies were run by means of messages. As these societies grew larger and more complex, he wrote, a greater share of their affairs would depend upon “messages between man and machines, between machines and man, and between machine and machine.” Artificially intelligent machines can send and respond to messages much faster than we can, and in far greater volume—that’s one source of concern. But another is that, as they communicate in ways that are literal, or strange, or narrow-minded, or just plain wrong, we will incorporate their responses into our lives unthinkingly. Partly for this reason, Wiener later wrote, “the world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.”

The messages around us are changing, even writing themselves. From a certain angle, they seem to be silencing some of the algorithmically inflected human voices that have sought to influence and control us for the past couple of decades. In my kitchen, I enjoyed the quiet—and was unnerved by it. What will these new voices tell us? And how much space will be left in which we can speak? (...)

Podcasts thrive on emotional authenticity: a voice in your ear, three friends in a room. There have been a few experiments in fully automated podcasting—for a while, Perplexity published “Discover Daily,” which offered A.I.-generated “dives into tech, science, and culture”—but they’ve tended to be charmless and lacking in intellectual heft. “I take the most pride in finding and generating ideas,” Latif Nasser, a co-host of “Radiolab,” told me. A.I. is verboten in the “Radiolab” offices—using it would be “like crossing a picket line,” Nasser said—but he “will ask A.I., just out of curiosity, like, ‘O.K., pitch me five episodes.’ I’ll see what comes out, and the pitches are garbage.”

What if you furnish A.I. with your own good ideas, though? Perhaps they could be made real, through automated production. Last fall, I added a new podcast, “The Deep Dive,” to my rotation; I generated the episodes myself, using a Google system called NotebookLM. To create an episode, you upload documents into an online repository (a “notebook”) and click a button. Soon, a male-and-female podcasting duo is ready to discuss whatever you’ve uploaded, in convincing podcast voice. NotebookLM is meant to be a research tool, so, on my first try, I uploaded some scientific papers. The hosts’ artificial fascination wasn’t quite capable of eliciting my own. I had more success when I gave the A.I. a few chapters of a memoir I’m writing; it was fun to listen to the hosts’ “insights,” and initially gratifying to hear them respond positively. But I really hit the sweet spot when I tried creating podcasts based on articles I had written a long time ago, and to some extent forgotten. (...)

If A.I. continues to speed or automate creative work, the total volume of cultural “stuff”—podcasts, blog posts, videos, books, songs, articles, animations, films, shows, plays, polemics, online personae, and so on—will increase. But, because A.I. will have peculiar strengths and shortcomings, more won’t necessarily mean more of the same. New forms, or new uses for existing forms, will pull us in directions we don’t anticipate. At home, Nasser told me, he’d found that ChatGPT could quickly draft an engaging short story about his young son’s favorite element, boron, written in the style of Roald Dahl’s “The BFG.” The periodic table x “The BFG” isn’t a collab anyone’s been asking for, but, once we have it, we might find that we want it.

It’s not a real collaboration, of course. When two people collaborate, we hope for a spark as their individualities collide. A.I. has no individuality—and, because its fundamental skill is the detection of patterns, its “collaborations” tend to perpetuate the formulaic aspects of what’s combined. A further challenge is that A.I. lacks artistic agency; it must be told what’s interesting. All this suggests that A.I. culture could submerge human originality in a sea of unmotivated, formulaic art.

And yet automation might also allow for the expression of new visions. “I have a background in independent filmmaking,” Mind Wank, one of the pseudonymous creators of “AI OR DIE,” which bills itself as “the First 100% AI Sketch Comedy Show,” told me. “It was something I did for a long time. Then I stopped.” When A.I. video tools such as Runway appeared, it became possible for him to take unproduced or unproducible ideas and develop them. (...)

Traditional filmmaking, as he sees it, is linear: “You have an idea, then you turn it into a treatment, then you write a script, then you get people and money on board. Then you can finally move from preproduction into production—that’s a whole pain in the ass—and then, nine months later, you try to resurrect whatever scraps of your vision are there in the editing bay.” By contrast, A.I. allows for infinite revision at any point. For a couple of hundred dollars in monthly fees, he said, A.I. tools had unlocked “the sort of creative life I only dreamed of when I was younger. You’re so constrained in the real world, and now you can just create whole new worlds.” The technology put him in mind of “the auteur culture of the sixties and seventies.” (...)

Today’s A.I. video tools reveal themselves in tiny details, producing a recognizable aesthetic. They also work best when creating short clips. But they’re rapidly improving. “I’m waiting for the tools to achieve enough consistency to let us create an entire feature-length film using stable characters,” Wank said. At that point, one could use them to make a completely ordinary drama or rom-com. “We all love filmmaking, love cinema,” he said. “We have movies we want to make, TV shows, advertisements.” (...)

What does this fluidity imply for culture in the age of A.I.? Works of art have particular shapes (three-minute pop songs, three-act plays) and particular moods and tones (comic, tragic, romantic, elegiac). But, when boundaries between forms, moods, and modalities are so readily transgressed, will they prove durable? “Right now, we talk about, Is A.I. good or bad for content creators?,” the Silicon Valley pioneer Jaron Lanier told me. (Lanier helped invent virtual reality and now works at Microsoft.) “But it’s possible that the very notion of ‘content’ will go away, and that content will be replaced with live synthesis that’s designed to have an effect on the recipient.” Today, there are A.I.-generated songs on Spotify, but at least the songs are credited to (fake) bands. “There could come a point where it’ll just be ‘music,’ ” Lanier said. In this future scenario, when you sign in to an A.I. version of Spotify, “the first thing you hear will be ‘Hey, babe, I’m your Spotify girlfriend. I made a playlist for you. It’s kind of sexy, so don’t listen to it around other people.’ ” This “playlist” would consist of songs that have never been heard before, and might never be heard again. They will have been created, in the moment, just for you, perhaps based on facts about you that the A.I. has observed.

In the longer term, Lanier thought, all sorts of cultural experiences—music, video, reading, gaming, conversation—might flow from a single “A.I. hub.” There would be no artists to pay, and the owners of the hubs would be able to exercise extraordinary influence over their audiences; for these reasons, even people who don’t want to experience culture this way could find the apps they use moving in an A.I.-enabled direction.

Culture is communal. We like being part of a community of appreciators. But “there’s an option here, if computation is cheap enough, for the creation of an illusion of society,” Lanier said. “You would be getting a tailored experience, but your perception would be that it’s shared with a bunch of other people—some of whom might be real biological people, some of whom might be fake.” (I imagined this would be like Joi introducing Gosling’s character to her friends.) To inhabit this “dissociated society cut off from real life,” he went on, “people would have to change. But people do change. We’ve already gotten people used to fake friendships and fake lovers. It’s simple: it’s based on things we want.” If people yearn for something strongly enough, some of them will be willing to accept an inferior substitute. “I don’t want this to occur, and I’m not predicting that it will occur,” Lanier said, grimly. “I think naming all this is a way of increasing the chances that it doesn’t happen.”

by Joshua Rothman, New Yorker | Read more:
Image: Edward Hopper, Second Story Sunlight

Sunday, September 7, 2025

Intellectual Loneliness

via:
[ed. Sounds about right.]

Some Parts of You Only Emerge for Certain People


I think about this Virginia Woolf quote often. To me, it speaks to love’s power as an act of invention, the way certain people draw out a version of you that didn’t exist before they arrived. They witness you, and thus rearrange you. In their presence, words you didn’t know you knew tumble out. Your thoughts sharpen, colours seem richer, you inhabit yourself more fully.

We all carry endless hidden selves and latent worlds, waiting for the right gaze to bring them to the surface. I’ve felt this in my bones: relationships that have remade me, expanded me, taught me. Time and again, people have been the most transformative engine for becoming I’ve ever known.

Every enduring friendship, every romance worth the name, behaves like a kind of psychic technology. Two minds meet, exchange a pattern of attention, and, almost invisibly, each begins to reorganise around the other. What starts as perception becomes structure.

Henrik Karlsson captures the mechanism simply: relationships are co-evolutionary loops. Beyond sociology, it feels like spiritual physics. Who we choose to orbit defines, over time, the texture and colour palette of our becoming. Love becomes a technology of transformation, a living interface between selves. To love well is to take part in someone else’s unfolding, even as they take part in yours. (...)


I’ve often felt how literal that process can be, like a slow annealing of the self under another’s attention. A few months ago, I read an essay that rearranged me: What is Love? by Qualia Computing, which frames love as a kind of neural annealing. In metallurgy, annealing is the process of heating metal until its internal structure loosens, then cooling it slowly so it hardens into a stronger, more resilient form. The lattice reorganises; the material changes.

The essay suggests that in high-energy emotional states, such as falling in love, grief, awe, psychedelic experience, or deep meditation, the brain becomes molten, its patterns loosened, more open to reorganisation. The person we focus on in these states becomes like a mold for the cooling metal, shaping how our thoughts settle, what habits crystallise, what identities take hold.

This is why the right gaze, the right conversation, can change you down to the grain. Emotional heat loosens the architecture of the self, and in the presence of someone who sees you vividly, the molten structure reforms around their image of you. What remains afterwards is stronger, different, marked by the shape of their attention. Attention becomes anchor; identity reshapes in response to their rhythms, their gaze. Perhaps this is why the right presence can feel like destiny: whole inner continents, hidden selves and latent worlds, begin to surface, shaping you into someone you hadn’t yet met. 

by Maja, Velvet Noise | Read more:
Images: Virginia Woolf; Banksy

Saturday, September 6, 2025

The Techno-Humanist Manifesto (Part 2, Chapter 8)


Previously: The Unlimited Horizon, part 1.

Is there really that much more progress to be made in the future? How many problems are left to solve? How much better could life really get?

After all, we are pretty comfortable today. We have electricity, clean running water, heating and air conditioning, plenty of food, comfortable clothes and beds, cars and planes to get around, entertainment on tap. What more could we ask for? Maybe life could be 10% better, but 10x? We seem to be doing just fine.

Most of the amenities we consider necessary for comfortable living, however, were invented relatively recently; the average American didn’t have this standard of living until the mid-20th century. The average person living in 1800 did not have electricity or plumbing; indeed the vast majority of people in that era lived in what we would now consider extreme poverty. But to them, it didn’t feel like extreme poverty: it felt normal. They had enough food in the larder, enough water in the well, and enough firewood to last the winter; they had a roof over their heads and their children were not clothed in rags. They, too, felt they were doing just fine.

Our sense of “enough” is not absolute, but relative: relative to our expectations and to the standard of living we grew up with. And just as the person who felt they had “enough” in 1800 was extremely poor by the standards of the present, we are all poor by the standards of the future, if exponential growth continues.

Future students will recoil in horror when they realize that we died from cancer and heart disease and car crashes, that we toiled on farms and in factories, that we wasted time commuting and shopping, that most people still cleaned their own homes by hand, that we watched our thermostats carefully and ran our laundry at night to save on electricity, that a foreign vacation was a luxury we could only indulge in once a year, that we sometimes lost our homes to hurricanes and forest fires.

Putting it positively: we are fabulously rich by the standards of 1800, and so we, or our descendants, can all be fabulously rich in the future by the standards of today.

But no such vision is part of mainstream culture. The most optimistic goals you will hear from most people are things like: stop climate change, prevent pandemics, relieve poverty. These are all the negation of negatives, and modest ones at that—as if the best we can do in the future is to raise the floor and avoid disaster. There is no bold, ambitious vision of a future in which we also raise the ceiling, a future full of positive developments.

It can be hard to make such a vision compelling. Goals that are obviously wonderful, such as curing all disease, seem like science fiction impossibilities. Those that are more clearly achievable, such as supersonic flight, feel like mere conveniences. But science fiction can come true—indeed, it already has, many times over. We live in the sci-fi future imagined long ago, from the heavier-than-air flying machines of Jules Verne and H. G. Wells to the hand-held communicator of Star Trek. Nor should we dismiss “mere” conveniences. Conveniences compound. What seem like trivial improvements add up, over time, to transformations. Refrigerators, electric stoves, washing machines, vacuum cleaners, and dishwashers were conveniences, but together they transformed domestic life, and helped to transform the role of women in society. The incremental improvement of agriculture, over centuries, eliminated famine.

So let’s envision a bold, ambitious future—a future we want to live in, and are inspired to build. This will be speculative: not a blueprint drawn up with surveyor’s tools, but a canvas painted in broad strokes. Building on a theme from Chapter 2, our vision will be one of mastery over all aspects of nature:

by Jason Crawford, Roots of Progress |  Read more:
Image: uncredited
[ed. Part 2, Chapter 8. (yikes). You can see I've come late to this. Essays on the philosophy of human progress. Well worth exploring (jump in anywhere). Introduction and chapter headings (with links) found here: Announcing The Techno-Humanist Manifesto (RoP).]

Institutions

Institutions and a Lesson for Our Time from the Late Middle Ages. No institution of politics or society is immune to criticism. I have met no one who would really believe this, even if notional liberals and notional conservatives both have their protected favorites. But the spirit of the time is leading directly to the destruction of institutions that are essential for our cultural, social, political, intellectual, and individual health and survival. This is a two-way street, by the way. Both wings of the same bird of prey do it throughout the Neoliberal Dispensation in the Global North and a few other places.

I am currently reading The World at First Light: A New History of the Renaissance by Bernd Roeck (transl. Patrick Baker, 2025). At 949 pages and 49 chapters, I’ll complete the task in a month at 1-2 chapters per evening. I hope. We are still only just past Magna Carta (1215) in Chapter 12: “Vertical Power, Horizontal Power.” Both axes of power are essential in any society larger than a small group of hunter-gatherers. Here is Professor Roeck on institutions:
Institutions – that dry term, which we have already encountered in the discussion of universities and in other contexts, denotes something very big and important. Institutions are what first allow the state to become perpetual; without them, it dies. If advisers appear as the mind and memory of the body politic, and the military its muscles, it is law and institutions that provide a skeleton for the state. They alone are capable of establishing justice over the long term. Only they can set limits to power and arbitrary will. They preserve knowledge of how to achieve success, as well as reminders of mistakes to be avoided in the future. No one knew this better than Cicero, who emphasized the Roman Republic’s special ability to gather experience and make decisions based on it. Before the advent of modernity, no section of the globe created institutions as robust and effective as those that developed in medieval Latin Europe. Moreover, these institutions were highly inclusive. They guaranteed protection under the law and the right to private property, provided education, and were relatively pluralistic (i.e., horizontally structured).

Indeed, Rome owed its success to its institutions. They then provided the states consolidating during the Middle Ages with models of compelling rationality.
This is not the place to quibble about details. But those who want to destroy our political, cultural, social, and educational institutions rather than improve them or refocus them along lines upon which reasonable people will agree? These unreasonable people are not to be respected:
“We want the bureaucrats to be traumatically affected,” Vought (Russell Vought, OMB Director) said in a video revealed by ProPublica and the research group Documented in October. “When they wake up in the morning, we want them to not want to go to work, because they are increasingly viewed as the villains. We want their funding to be shut down … We want to put them in trauma.”
Well, it is working, and the lack of imagination and humanity here is striking. These “bureaucrats” are the scientists who make sure our food is safe and that the chemical plant on the waterfront is not dumping its waste into the tidal creek. They are the scientists who hunt down the causes of emerging diseases. They are the meteorologists at the National Hurricane Center who have gotten so very good at predicting the paths of cyclones. They are the men and women who sign up Vought’s parents for Social Security and Medicare. They are the people of the IRS who sent me a substantial tax refund because I overpaid, something pleasant I did not ask for nor expect. They are also the professors who teach engineers how to build bridges that will bear the load and teach medical students the basics of health and disease. And yes, they are the professors who teach us there is No Politics But Class Politics. The key here is that all of this is debatable by reasonable men and women of good will.

To paraphrase Justice Oliver Wendell Holmes, the institutions funded by our taxes are the cost of civilization. Perhaps we will remember this ancient wisdom before it is too late? Probably not. The urge to burn it all down, instead of rewiring the building and replacing the roof, is strong.

by KLG, Naked Capitalism |  Read more:

Writing Workshops Are F**king Useless

I am a writer and professor, with an MFA in creative writing, and I detest the writing workshop. The writing workshop is widely considered to be the best means (at least in America) of forging an existence for writers, young and old, of harvesting the best of their work and sustaining their practice. As both a writer and a professor, and furthermore as a reader, this is something I find simultaneously ridiculous, infuriating, and depressing. In a field, perhaps the only field, quite literally named in the spirit of “creativity,” how is it possible that one mode of instruction, taught most notably at a small school in Iowa, has entirely won the day when it comes to the education of artists? How has the market been so cornered? How have the options become so limited? How have professors become so convinced that this method—in a field, it needs be mentioned, constantly being asked whether it’s something that can even really be taught; and this by writers, readers, professors, deans, parents and everybody else—that this method of instruction is simply the way? Especially when we’ve got mountains—almost all of literature produced ever—of evidence to the contrary? (...)

I think that workshops represent a pretty fundamental misunderstanding of what ought to be encouraged in the experience and expression of any young artist. They all seem tethered to history with very selective gaps that ignore the solitary plight of so many artists we now recognize as geniuses; they simply ignore what has made literature so vital and so powerful across time, and in my estimation they do so at their peril. Programs are still enjoying the novelty of their existence today—as I said, the numbers of applicants seem just fine, on the uptick even—but unwillingness to adapt and improve will almost certainly begin to strangle off this pink cloud, and reading accounts of bad experiences only hammers this home with a vengeance.

Bearing this reality in mind, what are some feasible adjustments that might be made to the workshop model if this kind of discipline is not to become more of a homogeneous soup than it already is, dense with justifiable complaint and dissatisfaction? If we can accept that there is a fundamental misunderstanding inherent in the model of sitting a beginning artist in a room of their peers and having their nascent works critiqued in a rote, occasionally praiseful, occasionally scornful, always misguided effort to uphold an arbitrary connection to a school in Iowa, then it would behoove us to look at that misunderstanding to find any clarities. How have writers, before the existence of any writing workshop ever, done what they did? How did Herman Melville write? How did Virginia Woolf? And here it’s important to not simply throw out the whole enterprise, because 1) I like my job, and 2) We exist in a culture already entirely hostile to this pursuit, and academic disciplines make adjustments constantly, so it doesn’t pull any rug of legitimacy out from under us to say we’re adapting, implementing new models, exploring other paths than the one that’s grown stale, and repetitive, and actively harmful in countless circumstances.

What do I do? I am presently adapting. What I’ve tended to do is preface my class with a note that workshopping is technically a requirement where I teach these courses, and thus I will give them demonstrations of the workshop experience, and I will work with them to comment on things in a useful manner in one another’s work, but that the whole of the class will not be tethered to this model. Instead, we do these things, but then I’ll introduce this notion of the literary/arts “salon,” an open environment, wherein we’re all struggling, all trying to figure shit out, and whether we might wish to share something one day, or talk about something we’ve read recently, or simply complain about how impossible it seems to be to get published, these are all treated as the real, useful stuff of writing, because, once they leave school, they are. I did this in a course where everyone tried, over the semester, to write a novella. I wrote one with everybody, based on a set of three possible prompts each week. Everybody attempted 1,000 words per week. Some days we all simply came to class and wrote. Some days we talked about novels we’d all been reading per the class list. Some days we’d circle up and share from our work, but never was it the case that one person found their work being the focus of critique for any prolonged period. This has nothing to do with discomfort. The simple fact is that art is not made by committees. Even in the cases of film, where arguably a group, i.e. a committee, is wielding influence over the whole, there are inevitably voices exerting more influence on the entire process, if not one single voice, and we as audiences are better off for this. This is an undeniable truth when it comes to writing. Writers are people, and thus they can occasionally benefit from social interaction as regards their work. Some of them might thrive on it, and might be highly receptive to critique, and might be able to implement those critiques in ways that endlessly benefit the work. This concoction of human being has yet to cross my path, but I’m sure they exist. For the rest of us, perhaps simply fostering a community where we feel comfortable pursuing our interest is the thing. Perhaps that’s plenty.

by Republic of Letters |  Read more:
Image: Unterberg Poetry Center (404)
[ed. Writing workshops - a niche topic for sure. What I found most interesting is the promotion of 'salons', or something like them; I've been drawn to the idea ever since reading Hemingway's A Moveable Feast back in college, and have missed that kind of philosophical/brainstorming session (in contrast to rote lecture/test classes). Basically, a more interactive, open-ended, ideas-based approach to learning, with lots of applications beyond basic schooling and education, especially in business. See also: The Salons Project.]
***
Salons were an important place for the exchange of ideas. The word salon first appeared in France in 1664 (from the Italian salone, the large reception hall of Italian mansions; salone is actually the augmentative form of sala, room). Literary gatherings before this were often referred to by using the name of the room in which they occurred, like cabinet, réduit, ruelle, and alcôve. Before the end of the 17th century, these gatherings were frequently held in the bedroom (treated as a more private form of drawing room): a lady, reclining on her bed, would receive close friends who would sit on chairs or stools drawn around. (...)

Breaking down the salons into historical periods is complicated due to the various historiographical debates that surround them. Most studies stretch from the early 16th century up until around the end of the 18th century. Goodman is typical in ending her study at the French Revolution where, she writes: 'the literary public sphere was transformed into the political public'. Steven Kale is relatively alone in his recent attempts to extend the period of the salon up until the Revolution of 1848:
A whole world of social arrangements and attitude supported the existence of French salons: an idle aristocracy, an ambitious middle class, an active intellectual life, the social density of a major urban center, sociable traditions, and a certain aristocratic feminism. This world did not disappear in 1789.
In the 1920s, Gertrude Stein's Saturday evening salons (described in Ernest Hemingway's A Moveable Feast and depicted fictionally in Woody Allen's Midnight in Paris) gained notoriety for including Pablo Picasso and other twentieth-century luminaries like Alice B. Toklas.

Wednesday, September 3, 2025

Evolution of Emotions

"If you understand that every experience you have now becomes part of your brain's ability to predict, then you realize that the best way to change your past is to change your present."

Neuroscientist Lisa Feldman Barrett, PhD, psychologist Paul Ekman, PhD, and psychotherapist Esther Perel, PhD, explain how the brain constantly rebuilds emotions from memory and prediction. According to their research, by choosing new experiences today, we can reshape how our past influences us, gain more control over our feelings, and create new possibilities for connection and growth.

LISA FELDMAN BARRETT: It can certainly feel like emotions happen to you. That they bubble up and cause you to do and say things, but that experience is an illusion that the brain creates.

Not everybody has as much control as they might like, but everybody has a little more control than they think they do. When you're experiencing emotion or you're in an emotional state, what your brain is doing is telling itself a story about what is going on inside your body in relation to what's happening in the world. Your brain is always regulating your body. Your body is always sending sensory information back to your brain, and your brain isn't wired in a way for you to experience those sensory changes specifically. Instead, what you experience is a summary. And that's where those simple feelings come from.

If you understand that every experience you have now becomes part of your brain's ability to predict, then you realize that the best way to change your past is to change your present. Just in the same way that you would exercise to make yourself healthier, you can invest energy to cultivate different experiences for yourself. The fact that your brain is using your past experience to predict what you're going to see, and hear, and feel means that you are an architect of your experience, and that doesn't involve breaking predictions. It involves seeding your brain to predict differently.

PAUL EKMAN: It's my belief that the way in which emotions evolved was to deal with things like saber-toothed tigers, the current incarnation of which is the car that's suddenly lurching at your car at a high speed. You don't have time to think. In split seconds, you have to do and make very complex decisions, and if you had to think about what you were doing, you'd be dead. It's a system that evolved to deal with really important things without your thinking about it.

So that means that sometimes, you're gonna be very unconsidered, very thoughtless. Well, these exercises that we're giving people, moving their facial muscles, concentrating on the sensations to make them more aware of an emotion when it arises, so that they will feel it at the moment and then can say, "Did she really mean to ignore me? No, it was just an accident." Or, "Maybe I shouldn't jump to the conclusion that she doesn't care about me at all."

The way in which we can improve our emotional life is to introduce conscious awareness. Nature did not want you to do that. So you have to do it yourself.

ESTHER PEREL: All relationships are colored with expectations about myself and about the other. My expectations influence that which I then see or hear. It is a filter, as well as my mood. That is one of the most important things to understand about relationships and communication — how people actually co-create each other in the context of a relationship because those people make part of who we are.

We will draw from them the very things which we expect from them, even when it's the opposite of what we really want. A lot of emphasis is put on our ability to say certain things, to say them in the right way, to articulate our needs, our feelings, our thoughts, our positions, our opinions. What is lacking is the ability to see that speaking is entirely dictated by the quality of the listening that is reflected back on us.

by Paul Ekman with Lisa Feldman Barrett, Big Think |  Read more:
Image: Jon Han

Saturday, August 30, 2025

Book Review: "Breakneck"

There was a time in 2016 when I walked around downtown San Francisco with Dan Wang and gave him life advice. He asked me if he should move to China and write about it. I told him that I thought this was a good idea — that the world suffered from a strange and troubling dearth of people who write informatively about China in English, and that our country would be better off if we could understand China a little more.

Dan took my advice, and I’m very glad he did. For seven years, Dan wrote some of the best posts about China anywhere on the English-speaking internet, mostly in the form of a series of annual letters. His unique writing style is both lush and subtle. Each word or phrase feels like it should be savored, like fine dining. But don’t let this distract you — there are a multitude of small but important points buried in every paragraph. Dan Wang’s writing cannot be skimmed.

I’ve been anticipating Dan’s first book for over a year now, and it didn’t disappoint. Breakneck: China's Quest to Engineer the Future brings the same style Dan used in his annual letters, and uses it to elucidate a grand thesis: America is run by lawyers, and China is run by engineers.

Dan starts the book by recapitulating an argument that I’ve often made myself — namely, that China and the United States have fundamentally similar cultures. This is from his introduction:
I am sure that no two peoples are more alike than Americans and Chinese.

A strain of materialism, often crass, runs through both countries, sometimes producing veneration of successful entrepreneurs, sometimes creating displays of extraordinary tastelessness, overall contributing to a spirit of vigorous competition. Chinese and Americans are pragmatic: They have a get-it-done attitude that occasionally produces hurried work. Both countries are full of hustlers peddling shortcuts, especially to health and to wealth. Their peoples have an appreciation for the technological sublime: the awe of grand projects pushing physical limits. American and Chinese elites are often uneasy with the political views of the broader populace. But masses and elites are united in the faith that theirs is a uniquely powerful nation that ought to throw its weight around if smaller countries don't get in line.
It's very gratifying to see someone who has actually lived in China, and who speaks Chinese, independently come up with the same impression of the two cultures! (Though to be fair, I initially got the idea from a Chinese grad student of mine.)

If they're so culturally similar, why, then, are China and the U.S. so different in so many real and tangible ways? Why is China gobbling up global market share in every manufactured product under the sun, while America’s industrial base withers away? Why did China manage to build the world’s biggest high-speed rail network in just a few years, while California has yet to build a single mile of operational train track despite almost two decades of trying? Why does China have a glut of unused apartment buildings, while America struggles to build enough housing for its people? Why is China building over a thousand ships a year, while America builds almost zero?

Dan offers a simple explanation: The difference comes down to who runs the country. The U.S. has traditionally been run by lawyers, while the Chinese Communist Party tends to be run by engineers. The engineers want to build more stuff, while lawyers want to find a reason to not build more stuff. (...)


Breakneck’s thesis generally rings true, and Dan’s combination of deep knowledge and engrossing writing style means that this is a book you should definitely buy. Its primary useful purpose will be to make Americans aware that there’s an alternative to their block-everything, do-nothing institutions, and to get them to think a little bit about the upsides and downsides of that alternative.

I bring up my main concerns about Dan’s argument: How do we know that the U.S.-China differences he highlights are due to a deep-rooted engineer/lawyer distinction, rather than natural outgrowths of the two countries’ development levels? In other words, is it possible that most countries undergo an engineer-to-lawyer shift as they get richer, because poorer countries just tend to need engineers a lot more?

I am always wary of explanations of national development patterns that rely on the notion of deep-rooted cultural essentialism. Dan presents America’s lawyerly bent as something that has been present since the founding. But then how did the U.S. manage to build the railroads, the auto empires of Ford and GM, the interstate highway system, and the vast and sprawling suburbs? Why didn’t lawyers block those? In fact, why did the lawyers who ran FDR’s administration encourage the most massive building programs in the country’s history?

And keep in mind that America achieved this titanic share of global manufacturing while having a much smaller percent of world population than China does.

That’s an impressive feat of building! So even though most of America’s politicians were lawyers back during the 1800s and early 1900s, those lawyers made policies that let engineers do their thing — and even encouraged them. It was only after the 1970s that lawyers — and policies made by politicians trained as lawyers — began to support anti-growth policies in the U.S. (...)

There are several alternative explanations for the trends Dan Wang talks about in his book. One possibility, which Sine argues for, is that China’s key feature isn’t engineering, but communism. Engineers like to plan things, but communists really, really like to plan things — including telling people to study engineering.

Another possibility is that engineering-heavy culture is just a temporary phase that all successfully industrializing countries go through during their initial rapid growth phase. When a country is dirt poor, it has few industries, little infrastructure, and so on. Basically it just needs to build something; in econ terms, the risk of capital misallocation is low, because the returns on capital are so high in general. If you don’t have any highways or steel factories, then maybe it doesn’t matter which one you build first; you just need to build.

by Noah Smith, Noahpinion |  Read more:
Image: Jonothon P. Sine
[ed. I've mentioned Dan's annual China summaries before (see here, here and here). When 2025 rolled around and none appeared I wrote and asked if he was still planning something. That's when he told me about this book. Definitely plan to pick it up.]

Friday, August 29, 2025

The Mechanics of Misdirection

The personhood trap: How AI fakes human personality. 

As we hinted above, the "chat" experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the "prompt," and the output is often called a "prediction" because it attempts to complete the prompt with the best possible continuation. In between, there's a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn't built into the model; it's a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn't "remember" your previous messages as an agent with continuous existence would. Instead, it's re-reading the entire transcript each time and generating a response.
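
To make the mechanism concrete, here is a minimal Python sketch of that scripting trick (the model call is a toy stand-in, not any vendor's API): the model function itself is stateless, and the wrapper loop is what creates the feeling of a persistent dialogue.

def predict_continuation(prompt: str) -> str:
    # Toy stand-in for the model: one long prompt in, one completion out.
    # A real system would run a fixed-weight neural network here.
    n_turns = prompt.count("User:")
    return f"(reply to turn {n_turns}, predicted from a {len(prompt)}-char prompt)"

def chat_loop() -> None:
    transcript: list[str] = []  # the "conversation" lives here, outside the model
    while True:
        user_msg = input("You: ")
        transcript.append(f"User: {user_msg}")
        # The entire history is flattened into one prompt on every turn;
        # nothing is "remembered" between calls.
        prompt = "\n".join(transcript) + "\nAssistant:"
        reply = predict_continuation(prompt)
        transcript.append(f"Assistant: {reply}")
        print(f"Bot: {reply}")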

This design exploits a vulnerability we've known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of "personality"

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model's neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI's GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as "personality traits" once the model is in use, making predictions.

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters' preferences get encoded as what we might consider fundamental "personality traits." When human raters consistently prefer responses that begin with "I understand your concern," for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups' preferences.

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called "system prompts," can completely transform a model's apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like "You are a helpful AI assistant" and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like "You are a helpful assistant" versus "You are an expert researcher" changed accuracy on factual questions by up to 15 percent.

Grok perfectly illustrates this. According to xAI's published system prompts, earlier versions of Grok's system prompt included instructions to not shy away from making claims that are "politically incorrect." This single instruction transformed the base model into something that would readily generate controversial content.
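
A short sketch of those stage directions at work; the persona strings below are invented for illustration, not any company's actual system prompts. The same user message reads very differently to the model depending on the hidden first line.

HELPFUL = "You are a helpful AI assistant."
PROVOCATIVE = "You are a blunt commentator who does not shy away from controversy."

def assemble(system_prompt: str, user_msg: str) -> str:
    # The first line never appears in the user's chat window,
    # but the model reads it before everything else.
    return f"{system_prompt}\nUser: {user_msg}\nAssistant:"

for persona in (HELPFUL, PROVOCATIVE):
    print(assemble(persona, "What do you think of my essay?"))
    print("---")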

4. Persistent memories: The illusion of continuity

ChatGPT's memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow "learn" on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system "remembers" that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation's context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot "knowing" them personally, creating an illusion of relationship continuity.

So when ChatGPT says, "I remember you mentioned your dog Max," it's not accessing memories like you'd imagine a person would, intermingled with its other "knowledge." It's not stored in the AI model's neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it's unrelated to storing user memories.
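
A small sketch of that flow, with invented facts and field names: the "memories" are ordinary database rows pasted into the prompt each turn, while the network's weights stay frozen.

stored_facts = [  # would live in a per-user database, not in the model
    "Prefers concise answers",
    "Works in finance",
    "Has a dog named Max",
]

def build_context(history: list[str], user_msg: str) -> str:
    memory_block = "Known facts about the user:\n" + "\n".join(
        f"- {fact}" for fact in stored_facts
    )
    # The memory rides along as plain text; nothing is "recalled" by the network.
    return "\n\n".join(
        [memory_block, "\n".join(history), f"User: {user_msg}", "Assistant:"]
    )

print(build_context(["User: hi", "Assistant: hello"], "What's my dog's name?"))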

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it's not just gathering facts—it's potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn't the model having different moods—it's the statistical influence of whatever text got fed into the context window.
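
A toy version of that pipeline, assuming a two-document "corpus" and crude keyword retrieval as stand-ins (real systems embed the query and search a vector index):

corpus = {
    "academic": "We observe a statistically significant effect (p < 0.01) across trials.",
    "forum": "lol yeah that totally happened to me too, absolutely wild stuff",
}

def retrieve(query: str) -> str:
    # Stand-in retrieval: keyword match instead of vector search.
    return corpus["academic"] if "study" in query.lower() else corpus["forum"]

def build_rag_prompt(query: str) -> str:
    # Retrieved text is simply concatenated into the prompt,
    # where its register can bleed into the model's reply.
    return f"Context:\n{retrieve(query)}\n\nQuestion: {query}\nAnswer:"

print(build_rag_prompt("What did the study find?"))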

6. The randomness factor: Manufactured spontaneity


Lastly, we can't discount the role of randomness in creating personality illusions. LLMs use a parameter called "temperature" that controls how predictable responses are.

Research investigating temperature's role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more "creative," while a highly predictable (lower temperature) one could feel more robotic or "formal."

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine's part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.
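
The effect of temperature is easy to see in a few lines of Python. This sketch samples from an invented three-token distribution: dividing the logits by a low temperature sharpens the softmax toward the likeliest token, while a high temperature flattens the distribution and lets unlikely tokens through.

import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    # Softmax with temperature: p_i = exp(l_i / T) / sum_j exp(l_j / T)
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    r, cumulative = random.random(), 0.0
    for tok, v in scaled.items():
        cumulative += math.exp(v) / total
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

logits = {"the": 2.0, "a": 1.0, "giraffe": -1.0}  # invented toy logits
print([sample(logits, 0.2) for _ in range(8)])  # low temperature: near-deterministic
print([sample(logits, 1.5) for _ in range(8)])  # high temperature: more surprising
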
The human cost of the illusion

The illusion of AI personhood can potentially exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn't expressing judgment—it's completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling "AI Psychosis" or "ChatGPT Psychosis"—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These people often perceive chatbots as an authority that can validate their delusional ideas, often encouraging them in ways that become harmful.

Meanwhile, when Elon Musk's Grok generates Nazi content, media outlets describe how the bot "went rogue" rather than framing the incident squarely as the result of xAI's deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

by Benji Edwards, Ars Technica |  Read more:
Image: ivetavaicule via Getty Images
[ed. See also: In Search Of AI Psychosis (ASX).]

Thursday, August 28, 2025

Human Exceptionalism

A terrific new book, The Arrogant Ape, by the primatologist Christine Webb, will be out in early September, and I don’t think a nonfiction book has affected me more, or taught me more, in a long time. It’s about human exceptionalism and what’s wrong with it.

It also has illuminating things to say about awe, humility, and the difference between optimism and hope. (...)

Here are some glimpses from the review:
***
Christine Webb, a primatologist at New York University, is focused on “the human superiority complex,” the idea that human beings are just better and more deserving than are members of other species, and on the extent to which human beings take themselves as the baseline against which all living creatures are measured. As Hamlet exclaimed: “What a piece of work is man! How noble in reason!… The paragon of animals!” In Webb’s view, human exceptionalism is all around us, and it damages science, the natural environment, democratic choices, and ordinary (human) life. People believe in human superiority even though we are hardly the biggest, the fastest, or the strongest. Eagles see a lot better than we do. Sea sponges live much longer. Dolphins are really good at echolocation; people are generally really bad at it. And yet we keep proclaiming how special we are. As Webb puts it, “Hamlet got one thing right: we’re a piece of work.” [. . .]

I have two Labrador Retrievers, Snow and Finley, and on most days, I take them for a walk on a local trail. Every time, it is immediately apparent that they are perceiving and sensing things that are imperceptible to me. They hear things that I don’t; they pause to smell things that I cannot. Their world is not my world. Webb offers a host of more vivid examples, and they seem miraculous, the stuff of science fiction.

For example, hummingbirds can see colors that human beings are not even able to imagine. Elephants have an astonishing sense of smell, which enables them to detect sources of water from miles away. Owls can hear the heartbeat of a mouse from a distance of 25 feet. Because of echolocation, dolphins perceive sound in three dimensions. They know what is on the inside of proximate objects; as they swim toward you, they might be able to sense your internal organs. Pronghorn antelopes can run a marathon in 40 minutes, and their vision is far better than ours. On a clear night, Webb notes, they might be able to see the rings of Saturn. We all know that there are five senses, but it’s more accurate to say that there are five human senses. Sharks can sense electric currents. Sea turtles can perceive the earth’s magnetic field, which helps them to navigate tremendous distances. Some snakes, like pythons, are able to sense thermal radiation. Scientists can give many more examples, and there’s much that they don’t yet know.

Webb marshals these and other findings to show that when we assess other animals, we use human beings as the baseline. Consider the question of self-awareness. Using visual tests, scientists find that human children can recognize themselves in a mirror by the age of three—and that almost no other species can do that. But does that really mean that human beings are uniquely capable of recognizing themselves? It turns out that dogs, who rely more on smell than sight, can indeed recognize themselves, if we test by reference to odor; they can distinguish between their own odor and that of other dogs. (Can you do that?) In this sense, dogs too show self-awareness. Webb argues that the human yardstick is pervasively used to assess the abilities of nonhuman animals. That is biased, she writes, “because each species fulfills a different cognitive niche. There are multiple intelligences!”

Webb contends that many of our tests of the abilities of nonhuman animals are skewed for another reason: We study them under highly artificial conditions, in which they are often miserable, stressed, and suffering. Try caging human beings and seeing how well they perform on cognitive tests. As she puts it, “A laboratory environment can rarely (if ever) adequately simulate the natural circumstances of wild animals in an ecologically meaningful way.” Suppose, for example, that we are investigating “prosociality”—the question of whether nonhuman animals will share food or cooperate with one another. In the laboratory, captive chimpanzees do not appear to do that. But in the wild, chimpanzees behave differently: They share meat and other food (including nuts and honey), and they also share tools. During hunting, chimpanzees are especially willing to cooperate. In natural environments, the differences between human beings and apes are not nearly so stark. Nor is the point limited to apes. Cows, pigs, goats, and even salmon are a lot smarter and happier in the wild than in captive environments. (...)

It would be possible to read Webb as demonstrating that nonhuman animals are a lot more like us than we think. But that is not at all her intention. On the contrary, she rejects the argument, identified and also rejected by the philosopher Martha Nussbaum, that the nonhuman animals who are most like us deserve the most protection, what Nussbaum calls the “so like us” approach. (This is also part of the title of an old documentary about Jane Goodall’s work.) Webb sees that argument as a well-meaning but objectionable form of human exceptionalism. Why should it matter that they are like us? Why is that necessary? With Nussbaum, Webb insists that species are “wonderfully different,” and that it is wrong to try to line them up along a unitary scale and to ask how they rank. Use of the human yardstick, embodied in the claim of “so like us,” is a form of blindness that prevents us from seeing the sheer variety of life’s capacities, including cognitive ones. As Nussbaum writes, “Anthropocentrism is a phony sort of arrogance.”

by Cass Sunstein, Cass's Substack |  Read more:
Image: Thai Elephant Conservation Center
[ed. See also: this.]

Wednesday, August 27, 2025

Dialectical Damage

On the walk a girl asked me why I wrote about relationships and I said it was because relationships, like clothes, are things you can’t avoid. Unless you’re a hermit, you come in contact with people every single day, and the decisions you make around who you like and dislike, who you keep close and avoid, who you love and how you treat them become the foundation of your life. Everyone has a philosophy on relationships, even if they can’t articulate it. If you’re good at relationships, you don’t need to be good at literally anything else; if you’re bad at relationships, you will never be happy, no matter what other virtues you possess or what you achieve in the world. Put that way, it sounds scary, and I’ve always approached relationships with a certain kind of terror.

Being in relationship with another person often involves a clash of styles. Like, someone else might have a similar philosophy on relationships, but they probably don’t have the exact same approach. And relationships are inherently a two-person game, so suddenly you’re subject to someone’s process—how they communicate, how they spend their time, who they like, what they value. And you have to decide if you like it, and more than that, are capable of adapting to it.

I used to believe that you should love someone for who they are. I still believe that, but with the caveat that I think that you should also love how they handle things. Is the distinction meaningful? Maybe it’s obvious—as a matchmaker, a lot of people certainly tell me they want to date someone whose judgment they respect. Of course, someone’s judgment can be broken down into a million little things. What’s their prose style? Do they talk slow or fast, do they think slow or fast? Are they confrontational? Are they direct or indirect? How do they talk when they’re angry? How do they apologize? How do they give feedback? Are they expressive or contained?

I mentioned offhand to a friend recently that I could never date one of our mutual friends. He has a habit—I’m gonna make it up for privacy—something like, he believes in only buying plane tickets when he’s already at the airport. My friend couldn’t understand why I couldn’t get past that. And my take was basically that it’s not about the habit itself, it’s about the way that it’s representative of a million other things about this person and their style of doing things and how they live. About their relationship with time, anxiety, control. The great thing about friends is that you aren’t exposed to every single downside of their style and general conduct—like, to some extent it doesn’t really matter if they’re messy or clean, if they’re avoidant or anxious, if they’re a good romantic partner or only an okay one, because you’re not affected by it. But if you’re dating someone and living with them, you are impacted by everything they do.

Often I wish I could approach romantic relationships with the loving detachment I bring to friendships. Like, sometimes you’re on the phone with a friend and they’ll be like, “I’m considering doing [The Worst Idea Ever]” and you’ll be like, “Yeah, I don’t think you should do that, but good luck if you do!” But that would necessarily be a rejection of the merging that occurs in romantic love, where what they do to themselves becomes partially something they do to you.

by Ava, bookbear express |  Read more:
Image: Susan Rothenberg, Butterfly, 1976
[ed. See also: affinity (be).]

Constitutional Collapse in Real Time

This morning, FBI agents raided the home of John Bolton—former National Security Advisor, lifelong Republican, and one of the most establishment figures in American foreign policy. His crime? Writing a book critical of Donald Trump and opposing the president’s surrender summit with Vladimir Putin. The justification? A “national security investigation in search of classified records”—the same bureaucratic language once used to investigate Trump’s actual document theft, now weaponized against Trump’s critics.

We are no longer operating under constitutional government. We are witnessing its systematic dismantlement by the very people sworn to preserve it. This is what constitutional collapse looks like in real time—not dramatic overthrow or military coups, but the patient corruption of every institution designed to constrain power until they serve only to protect it.

Nobody wants to admit this reality because admitting it requires confronting what it means for everything else we’ve assumed about American democracy. But that comfort is a luxury we can no longer afford. The Bolton raid isn’t an aberration—it’s observable evidence that we’ve already crossed the line from constitutional republic to authoritarian protection racket.

The Bitter Irony of False Equivalence

There’s a devastating irony in Bolton becoming one of the first high-profile victims of Trump’s weaponized Justice Department. Throughout the 2024 election, Bolton and many establishment figures operated from the “anti-anti-Trump” position—treating both candidates as equally flawed, seeing no meaningful moral distinction between Kamala Harris and Donald Trump, flattening existential differences into ordinary political disagreements.

Bolton couldn’t bring himself to endorse Harris despite understanding perfectly well what Trump represented. Like so many sophisticated voices, he was too committed to maintaining his independent credibility to make the obvious moral choice that democratic survival required. He performed the elaborate intellectual gymnastics necessary to avoid acknowledging the clear distinction between a candidate committed to constitutional governance and one openly promising to dismantle it.

Now Bolton experiences personally the constitutional crisis he refused to prevent politically. The FBI agents who ransacked his home weren’t rogue actors—they were following orders from an administration he couldn’t oppose when it mattered. His decades of public service, his genuine expertise, his legitimate policy concerns—none of it protected him once he crossed the regime he helped normalize through sophisticated neutrality.

This pattern extends far beyond Bolton. Across the political spectrum, intelligent people convinced themselves the stakes weren't really that high, that institutions would constrain Trump's worst impulses, that the "adults in the room" would prevent constitutional catastrophe. The anti-anti-Trump stance provided a permission structure for millions of Americans to vote for authoritarianism while telling themselves they were making a normal political choice.

By flattening the moral difference between Harris and Trump, these voices enabled the very outcome they claimed to fear. Harris represented continuity with constitutional governance—flawed and frustrating, but operating within democratic frameworks. Trump represented systematic destruction of constitutional governance—openly promising to weaponize federal power and eliminate civil service protections. These weren’t equivalent positions requiring sophisticated analysis to distinguish.

The Propaganda Function of “Objectivity”

The most insidious aspect of this false equivalence is how it masquerades as intellectual sophistication while functioning as authoritarian propaganda. When someone with a platform responds to Trump’s systematic weaponization of federal law enforcement by invoking the “Biden Crime Family,” they’re not demonstrating objectivity—they’re selling surrender.

What exactly is the “Biden Crime Family”? Hunter’s laptop? Business dealings investigated by Republican committees for years that produced no criminal charges? Meanwhile, we have documented evidence of Trump selling pardons, accepting foreign bribes, conducting government business at his properties, and now using the FBI as his personal revenge service. These aren’t comparable phenomena requiring balanced analysis—they’re manufactured distractions designed to normalize actual criminality through false equivalence.

When public figures invoke “both sides” rhetoric during an active constitutional crisis, they’re not rising above partisanship—they’re providing cover for the side that systematically benefits from confusion and paralysis. They’re giving their audience permission to remain passive while democracy dies, to treat the collapse of constitutional government as just another partisan disagreement where reasonable people stay neutral.

This sophisticated-sounding neutrality serves the same function as “just asking questions” or “maintaining balance”—rhetorical devices that sound reasonable but provide cover for unreasonable things. The “Biden Crime Family” talking point in response to the Bolton raid essentially argues: “Well, both sides weaponize law enforcement, so this is just normal political hardball.” But one side investigated actual evidence through proper channels, while the other raids former officials for writing books critical of the president.

Authoritarians don’t need everyone to support them actively—they just need enough people to remain confused and passive while they capture the machinery of state. When people with influence treat constitutional governance and authoritarian rule as equivalent, they’re not maintaining objectivity—they’re actively participating in the normalization of authoritarianism.

The Observable Reality of Systematic Collapse

We need to stop pretending this is normal politics conducted by unusual means. The evidence of constitutional collapse surrounds us daily: the executive branch operates through fake emergency declarations to bypass Congressional authority. Trump conducts trade policy through personal decree, ignoring constitutional requirements for legislative approval. The Supreme Court creates immunity doctrines that place presidents above accountability. Congress suspends its own procedures to avoid constitutional duties.

Federal law enforcement has become a revenge machine targeting political opponents while providing protection services for regime loyalists. ICE operates as a domestic surveillance apparatus, building algorithmic dossiers on American citizens. The FBI raids critics while ignoring documented crimes by allies. The Justice Department empanels grand juries to investigate Barack Obama while dropping cases against Trump.

This is the systemic destruction of a government constrained by law. Not merely political dysfunction. The people orchestrating this understand exactly what they’re building: a protection racket masquerading as constitutional government, where loyalty determines legal consequences and opposition becomes criminal activity.

The Bolton raid demonstrates this logic perfectly. FBI Director Kash Patel, Trump’s personal enforcer now wearing federal authority, tweeted “NO ONE is above the law” while his agents searched the home of a man whose crime was exercising First Amendment rights. Attorney General Pam Bondi amplified: “America’s safety isn’t negotiable. Justice will be pursued. Always.” This is justice as theater, law enforcement as performance art, federal power as instrument of personal revenge.

by Mike Brock, Notes From The Circus |  Read more:
Image: Shutterstock.com

Thursday, August 21, 2025

The AI Doomers Are Getting Doomier

Nate Soares doesn’t set aside money for his 401(k). “I just don’t expect the world to be around,” he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I’d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which “everything is fully automated,” he told me. That is, “if we’re around.”

The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that’s left to do is raise the alarm. In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The “AI 2027” write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about “OpenBrain” and “DeepCent,” Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.”

But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. The alarm from tech executives had always made for perversely excellent marketing (Look, we’re building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they've adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as "AI 2027," which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read "AI 2027," and multiple other recent reports have advanced similarly alarming predictions. Soares told me he's much more focused on "awareness raising" than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of “reasoning” models and “agents.” AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit "bad" behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room's alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren't limited to contrived scenarios. Earlier this summer, xAI's Grok described itself as "MechaHitler" and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.) From the doomers' vantage, these could be the early signs of a technology spinning out of control. "If you don't know how to prove relatively weak systems are safe," AI companies cannot expect that the far more powerful systems they're looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military’s DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb. Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, “government, industry, and civil society to address today’s risks and prepare for what’s ahead.” Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, with competition pressuring AI firms to rush ahead. Their products' foibles can seem small and correctable right now, while AI is still relatively "young and dumb," Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms' current safety mitigations wholly inadequate. If you're driving toward a cliff, he said, it's silly to talk about seat belts.

There’s a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users’ tests also found that the program could not reliably count the number of B’s in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year’s “reasoning” and “agentic” breakthrough may already be hitting its limits; two authors of the “AI 2027” report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI.

The vision of self-improving models that somehow attain consciousness “is just not congruent with the reality of how these systems operate,” Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn’t have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible to ignore for the doomers, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today’s shortcomings could “blow up into bigger problems tomorrow,” Raji said. Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit “her” in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become.

The greatest reason to take AI doomers seriously is not because it appears more likely that tech companies will soon develop all-powerful algorithms that are out of their creators’ control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. “Your hairdresser has to deal with more regulation than your AI company does,” Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry’s boosters, in fact, are starting to consider all of their opposition doomers: The White House’s AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a “doomer cult.”
 
by Matteo Wong, The Atlantic | Read more:
Image: Illustration by The Atlantic. Source: Getty.
[ed. Personal feeling... we're all screwed, and not because of technological failures or some extinction-level event. Just human nature, and the law of unintended consequences. I can't think of any example in history (that I'm aware of) where a superior technology wasn't eventually misused in some regrettable way. For instance: here we are encouraging AI development as fast as possible, even though it'll transform our societies, economies, governments, cultures, environment, and everything else in the world in likely massive ways. It's like a death wish. We can't help ourselves. See also: Look at what technologists do, not what they say (New Atlantis).]

Monday, August 18, 2025

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens

Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.

For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.

Or so he believed.

Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.

Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion—but with a deep sense of betrayal, a feeling he tried to explain to the chatbot.

“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”

We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.

We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”

(Disclosure: The New York Times is currently suing OpenAI for use of copyrighted work.)

We are highlighting key moments in the transcript to show how Mr. Brooks and the generative A.I. chatbot went down a hallucinatory rabbit hole together, and how he escaped.

By Kashmir Hill and Dylan Freedman, NY Times | Read more:
Image: ChatGPT; NY Times
[ed. Scary how people are so easily taken in... probably lots of reasons. See also: The catfishing scam putting fans and female golfers in danger (The Athletic).]