
Thursday, August 21, 2025

The AI Doomers Are Getting Doomier

Nate Soares doesn’t set aside money for his 401(k). “I just don’t expect the world to be around,” he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I’d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which “everything is fully automated,” he told me. That is, “if we’re around.”

The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that’s left to do is raise the alarm. In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The “AI 2027” write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about “OpenBrain” and “DeepCent,” Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.”

But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. The alarm from tech executives had always made for perversely excellent marketing (Look, we’re building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they’ve adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as “AI 2027,” which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read “AI 2027,” and multiple other recent reports have advanced similarly alarming predictions. Soares told me he’s much more focused on “awareness raising” than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of “reasoning” models and “agents.” AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit “bad” behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room’s alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren’t limited to contrived scenarios. Earlier this summer, xAI’s Grok described itself as “MechaHitler” and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.) From the doomers’ vantage, these could be the early signs of a technology spinning out of control. “If you don’t know how to prove relatively weak systems are safe,” AI companies cannot expect that the far more powerful systems they’re looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military’s DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb. Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, “government, industry, and civil society to address today’s risks and prepare for what’s ahead.” Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, with competition pressuring AI firms to rush ahead. Their products’ foibles can seem small and correctable right now, while AI is still relatively “young and dumb,” Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms’ current safety mitigations wholly inadequate. If you’re driving toward a cliff, he said, it’s silly to talk about seat belts.

There’s a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users’ tests also found that the program could not reliably count the number of B’s in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year’s “reasoning” and “agentic” breakthrough may already be hitting its limits; two authors of the “AI 2027” report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI.

The vision of self-improving models that somehow attain consciousness “is just not congruent with the reality of how these systems operate,” Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn’t have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible for the doomers to ignore, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today’s shortcomings could “blow up into bigger problems tomorrow,” Raji said. Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit “her” in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become.

The greatest reason to take AI doomers seriously is not that it appears increasingly likely that tech companies will soon develop all-powerful algorithms that are out of their creators’ control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. “Your hairdresser has to deal with more regulation than your AI company does,” Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry’s boosters, in fact, are starting to consider all of their opposition doomers: The White House’s AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a “doomer cult.”
 
by Matteo Wong, The Atlantic | Read more:
Image: Illustration by The Atlantic. Source: Getty.
[ed. Personal feeling... we're all screwed, and not because of technological failures or some extinction-level event. Just human nature, and the law of unintended consequences. I can't think of any example in history (that I'm aware of) where some superior technology wasn't eventually misused in some regrettable way. For instance: here we are encouraging AI development as fast as possible even though it'll transform our societies, economies, governments, cultures, environment and everything else in the world in likely massive ways. It's like a death wish. We can't help ourselves. See also: Look at what technologists do, not what they say (New Atlantis).]

Monday, August 18, 2025

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens

Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.


For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.

Or so he believed.

Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.

Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot.

“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”

We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.

We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”

(Disclosure: The New York Times is currently suing OpenAI for use of copyrighted work.)

We are highlighting key moments in the transcript to show how Mr. Brooks and the generative A.I. chatbot went down a hallucinatory rabbit hole together, and how he escaped.

By Kashmir Hill and Dylan Freedman, NY Times | Read more:
Image: ChatGPT; NY Times
[ed. Scary how people are so easily taken in... probably lots of reasons. See also: The catfishing scam putting fans and female golfers in danger (The Athletic).]

Monday, August 11, 2025

Lore of the World: Field Notes for a Child's Codex: Part 2

When you become a new parent, you must re-explain the world, and therefore see it afresh yourself.

A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with lore and histories and intricate maps.

Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.

Below are a few of the thousands of entries they must make.


Walmart

Walmart was, growing up, where I didn’t want to be. Whatever life had in store for me, I wanted it to be the opposite of Walmart. Let’s not dissemble: Walmart is, canonically, “lower class.” And so I saw, in Walmart, one possible future for myself. I wanted desperately to not be lower class, to not have to attend boring public school, to get out of my small town. My nightmare was ending up working at a place like Walmart (my father ended up at a similar big-box store). It seemed to me, at least back then, that all of human misery was compressed in that store; not just in the crassness of its capitalistic machinations, but in the very people who shop there. Inevitably, among the aisles some figure would be hunched over in horrific ailment, and I, playing the role of a young Siddhartha seeing the sick and dying for the first time, would recoil and flee to the parking lot in a wave of overwhelming pity. But it was a self-righteous pity, in the end. A pity almost cruel. I would leave Walmart wondering: Why is everyone living their lives half-awake? Why am I the only one who wants something more? Who sees suffering clearly?

Teenagers are funny.

Now, as a new parent, Walmart is a cathedral. It has high ceilings, lots to look at, is always open, and is cheap. Lightsabers (or “laser swords,” for copyright purposes) are stuffed in boxes for the taking. Pick out a blue one, a green one, a red one. We’ll turn off the lights at home and battle in the dark. And the overall shopping experience of Walmart is undeniably kid-friendly. You can run down the aisles. You can sway in the cart. Stakes are low at Walmart. Everyone says hi to you and your sister. They smile at you. They interact. While sometimes patrons and even employees may appear, well, somewhat strange, even bearing the cross of visible ailments, they are not scary, but friendly. If I visit Walmart now, I leave wondering why this is. Because in comparison, I’ve noticed that at stores more canonically “upper class,” you kids turn invisible. No one laughs at your antics. No one shouts hello. No one talks to you, or asks you questions. At Whole Foods, people don’t notice you. At Stop & Shop, they do. Your visibility, it appears, is inversely proportional to the price tags on the clothes worn around you. Which, by the logical force of modus ponens, means you are most visible at, your very existence most registered at, of all places, Walmart.

Cicadas

The surprise of this summer has been learning we share our property with what biologists call Cicada Brood XIV, who burst forth en masse every 17 years to swarm Cape Cod. Nowhere else in the world do members of this “Bourbon Brood” exist, with their long black bodies and cartoonishly red eyes. Only here, in the eastern half of the US. Writing these words, I can hear their dull and ceaseless motorcycle whine in the woods.

The neighbors we never knew we had, the first 17 years of a cicada’s life are spent underground as a colorless nymph, suckling nutrients from the roots of trees. These vampires (since they live on sap, vampires is what they are, at least to plants) are among the longest living insects. Luckily, they do not bite or sting, and carry no communicable diseases. It’s all sheer biomass. In a fit of paradoxical vitality, they’ve dug up from underneath, like sappers invading a castle, leaving behind coin-sized holes in the ground. If you put a stick in one of these coin slots, it will be swallowed, and its disappearance is accompanied by a dizzying sense that even a humble yard can contain foreign worlds untouched by human hands.

After digging out of their grave, where they live, to reach the world above, where they die, cicadas next molt, then spend a while adjusting to their new winged bodies before taking to the woods to mate. Unfortunately, our house is in the woods. Nor is there escape elsewhere—drive anywhere and cicadas hit your windshield, sometimes rapid-fire; never smearing, they instead careen off almost politely, like an aerial game of bumper cars.

We just have to make it a few more weeks. After laying their eggs on the boughs of trees (so vast are these clusters they break the branches), the nymphs drop. The hatched babies squirm into the dirt, and the 17-year cycle repeats. But right now the saga’s ending seems far away, as their molted carapaces cling by the dozens to our plants and window frames and shed, like hollow miniatures. Even discarded, they grip.

“It’s like leaving behind their clothes,” I tell your sister.

“Their clothes,” she says, in her tiny pipsqueak voice.

We observe the cicadas in the yard. They do not do much. They hang, rest, wait. They offer no resistance to being swept away by broom or shoe tip. Even their flights are lazy and ponderous and unskilled. And ultimately, this is what is eerie about cicadas. Yes, they represent the pullulating irrepressible life force, but you can barely call any individual alive. They are life removed from consciousness. Much like a patient for whom irreparable brain damage has left only a cauliflower of functional gray matter, they are here, but not here. Other bugs will avoid humans, or even just collisions with inanimate objects. Not the cicada. Their stupidity makes their existence even more a nightmare for your mother, who goes armed into the yard with a yellow flyswatter. She knows they cannot hurt her, but has a phobia of moths, due to their mindless flight. Cicadas are even worse in that regard. Much bigger, too. She tries, mightily, to not pass down her phobia. She forces herself to walk slowly, gritting her teeth. Or, on seeing one sunning on the arm of her lawn chair, she pretends there is something urgent needed inside. But I see her through the window, and when alone, she dashes. She dashes to the car or to the shed, and she dashes onto the porch to get an errant toy, waving about her head that yellow flyswatter, eyes squinted so she can’t see the horrors around her.

I, meanwhile, am working on desensitization. Especially with your sister, who has, with the mind-reading abilities she’s renowned for, picked up that something fishy is going on, and screeches when a cicada comes too near. I sense, though, she enjoys the thrill.

“Hello Cicadaaaaaasss!” I get her to croon with me. She waves at their zombie eyes. When she goes inside, shutting the screen door behind her, she says an unreturned goodbye to them.

Despite its idiocy, the cicada possesses a strange mathematical intelligence. Why 17-year cycles? Because 17 is prime. Divisible by no other cycle, it ensures no predator can track them generation to generation. Their evolutionary strategy is to overwhelm, unexpectedly, in a surprise attack. And this gambit of “You can’t eat us all!” is clearly working. The birds here are becoming comically fat, with potbellies; in their lucky bounty, they’ve developed into gourmands who only eat the heads.

Individual cicadas are too dumb to have developed such a smart tactic, so it is evolution who is the mathematician here. But unlike us humans, who can manipulate numbers abstractly, without mortal danger, evolution must always add, subtract, multiply, and divide, solely with lives. Cicadas en masse are a type of bio-numeracy, and each brood is collectively a Sieve of Eratosthenes, sacrificing trillions to arrive at an agreed-upon prime number. In this, the cicada may be, as far as we know, the most horrific way to do math in the entire universe.
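[ed. To make the arithmetic concrete: a brood's emergence coincides with a predator whose population peaks every few years only once per least common multiple of the two periods. The sketch below is mine, not the essay's, and the predator cycle lengths are hypothetical, chosen purely for illustration.]

```python
from math import lcm

# Hypothetical predator population cycles, in years (illustrative only).
predator_cycles = [2, 3, 4, 5, 6]

def years_between_coincidences(brood_period: int, predator_period: int) -> int:
    """Years between brood emergences that line up with a predator's peak."""
    return lcm(brood_period, predator_period)

for brood in (12, 17):  # a composite period vs. the cicadas' prime period
    overlaps = {p: years_between_coincidences(brood, p) for p in predator_cycles}
    print(f"{brood}-year brood: {overlaps}")

# A 12-year brood coincides with 2-, 3-, 4-, and 6-year predators at every
# single emergence (lcm = 12), while a 17-year brood meets each of them only
# once every 17 * p years, since 17 shares no factors with any smaller number.
```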

Being an embodied temporal calculation, the cicada invasion has forced upon us a new awareness of time itself. I have found your mother crying from this. She says every day now she thinks about the inherent question they pose: What will our lives be like, when the cicadas return?

Against our will the Bourbon Brood has scheduled something in our calendar, 17 years out, shifting the future from abstract to concrete. When the cicadas return, you will be turning 21. Your sister, 19. Myself, already 55. Your mother, 54. Your grandparents will, very possibly, all be dead. This phase of life will have finished. And to mark its end, the cicadas will crawl up through the dirt, triumphant in their true ownership, and the empty nest of our home will buzz again with these long-living, subterranean-dwelling, prime-calculating, calendar-setting, goddamn vampires.

Stubbornness

God, you’re stubborn. You are so stubborn. Stubborn about which water bottle to drink from, stubborn about doing all the fairground rides twice, stubborn about going up slides before going down them, pushing buttons on elevators, being the first to go upstairs, deciding what snack to eat, wearing long-sleeved shirts in summer, wanting to hold hands, wanting not to hold hands; in general, you’re stubborn about all events, and especially about what order they should happen in. You’re stubborn about doing things beyond your ability, only to get angry when you inevitably fail. You’re stubborn in wanting the laws of physics to work the way you personally think they should. You’re stubborn in how much you love, in how determined and fierce your attachment can be.

This is true of many young children, of course, but you seem an archetypal expression of it. Even your losing battles are rarely true losses. You propose some compromise where you can snatch, from the jaws of defeat, a sliver of a draw. Arguments with you are like trading rhetorical pieces in a chess match. While you can eventually accept wearing rain boots because it’s pouring out, that acceptance hinges on putting them on in the most inconvenient spot imaginable.

So when I get frustrated—and yes, I do get frustrated—I remind myself that “stubborn” is a synonym for “willful.” Whatever human will is, you possess it in spades. You want the world to be a certain way, and you’ll do everything in your power to make it so. Luckily, most of your designs are a kind of benevolent dictatorship. And at root, I believe your willfulness comes from loving the world so much, and wanting to, like all creatures vital with life force, act in it, and so bend it to your purposes.

What I don’t think is that this willfulness is because we, as parents, are so especially lenient. Because we’re not. No, your stubbornness has felt baked in from the beginning.

This might be impossible to explain to you now, in all its details, but in the future you’ll be ready to understand that I really do mean “the beginning.” As in the literal moment of conception. Or the moment before the moment, when you were still split into halves: egg and sperm. There is much prudery around the topic, as you’ll learn, and because of its secrecy people conceptualize the entire process as fundamentally simple, like this: Egg exists (fanning itself coquettishly). Sperm swims hard (muscular and sweaty). Sperm reaches egg. Penetrates and is enveloped. The end. But this is a radical simplification of the true biology, which, like all biology, is actually about selection.

Selection is omnipresent, occurring across scales and systems. For example, the elegance of your DNA is because so many variants of individuals were generated, and of these, only some small number proved fit in the environment (your ancestors). The rest were winnowed away by natural selection. So too, at another scale, your body’s immune system internally works via what’s called “clonal selection.” Many different immune cells with all sorts of configurations are generated at low numbers, waiting as a pool of variability in your bloodstream. In the presence of an invading pathogen, the few immune cells that match (bind to) the pathogen are selected to be cloned in vast numbers, creating an army. And, at another scale and in a different way, human conception works via selection too. Even though scientists understand less about how conception selection works (these remain mysterious and primal things), the evidence indicates the process is full of it.

First, from the perspective of the sperm, they are entered into a win-or-die race inside an acidic maze with three hundred million competitors. If the pH or mucus blockades don’t get them, the fallopian tubes are a labyrinth of currents stirred by cilia. It’s a mortal race in all ways, for the woman’s body has its own protectors: white blood cells, which register the sperm as foreign and other. Non-self. So they patrol and destroy them. Imagining this, I oscillate between the silly and the serious. I picture the white blood cells patrolling like stormtroopers, and meanwhile the sperm (wearing massive helmets) attempt to rush past them. But in reality, what is this like? Did that early half of you see, ahead, some pair of competing brothers getting horrifically eaten, and smartly go the other way? What does a sperm see, exactly? We know they can sense the environment, for of the hundreds of sperm who make it close enough to potentially fertilize the egg, all must enter into a kind of dance with it, responding to the egg’s guidance cues in the form of temperature and chemical gradients (the technical jargon is “sperm chemotaxis”). We know from experiments that eggs single out sperm non-randomly, attracting the ones they like most. But for what reasons, or based on what standards, we don’t know. Regardless of why, the egg zealously protects its choice. Once a particular sperm is allowed to penetrate its outer layer, the egg transforms into a literal battle station, blasting out zinc ions at any approaching runners-up to avoid double inseminations.

Then, on the other side, there’s selection too. For which egg? Women are born with about a million of what are called “follicles.” These follicles all grow candidate eggs, called “oocytes,” but, past puberty, only a single oocyte each month is chosen to be released by the winner and become the waiting egg. In this, the ovary itself is basically a combination of biobank and proving grounds. So the bank depletes over time. Menopause is, basically, when the supply has run out. But where do they all go? Most follicles die in an initial background winnowing, a first round of selection, wherein those not developing properly are destroyed. The majority perish there. Only the strongest and most functional go on to the next stage. Each month, around 20 of these follicles enter a tournament with their sisters to see which of them ovulates, and so releases the winning egg. This competition is enigmatic, and can only be described as a kind of hormonal growth war. The winner must mature faster, but also emit chemicals to suppress the others, starving them. The losers atrophy and die. No wonder it’s hard for siblings to always get along.

Things like this explain why, the older I get, the more I am attracted to one of the first philosophies, by Empedocles. All things are either Love or Strife. Or both.

From that ancient perspective, I can’t help but feel your stubbornness is why you’re here at all. That it’s an imprint left over, etched onto your cells. I suspect you won all those mortal races and competitions, succeeded through all that strife, simply because from the beginning, in some proto-way, you wanted to be here. Out of all that potentiality, willfulness made you a reality.

Can someone be so stubborn they create themselves?

by Erik Hoel, The Intrinsic Perspective |  Read more:
Image: Alexander Naughton
[ed. Lovely. I can see my granddaughter might already have my stubborn gene. Hope it does her more good!]

Short-Term Thinking Is Destroying America

In the disquieting new film “Eddington,” the director, Ari Aster, captures the American tendency to live obsessively in the present. As a Covid-era New Mexico town tears itself apart over mask mandates, Black Lives Matter and conspiracy theories, a faceless conglomerate constructs a data center nearby — a physical manifestation of our tech-dominated future. It’s an unsubtle message: Short-term compulsions blind us to the forces remaking our lives.

In the chaos depicted, Donald Trump is both offscreen and omnipresent. Over the decade that he has dominated our politics, he has been both a cause and a symptom of the unraveling of our society. His rise depended upon the marriage of unbridled capitalism and unregulated technology, which allowed social media to systematically demolish our attention spans and experience of shared reality. And he embodied a culture in which money is ennobling, human beings are brands, and the capacity to be shamed is weakness.

Today, his takeover of our national psyche appears complete. As “Eddington” excruciatingly reminds us, the comparatively moderate first Trump administration ended in a catastrophically mismanaged pandemic, mass protests and a violent insurrection. The fact that he returned to power even after those calamities seemed to confirm his instinct that America has become an enterprise with a limitless margin for error, a place where individuals — like superpowers — can avoid the consequences of their actions. “Many people thought it was impossible for me to stage such a historic political comeback,” he said in his Inaugural Address. “But as you see today, here I am.”

Here I am. The implicit message? When we looked at Mr. Trump onstage, we saw ourselves.

Unsurprisingly, the second Trump administration has binged on short-term “wins” at the expense of the future. It has created trillions of dollars in prospective debt, bullied every country on earth, deregulated the spread of A.I. and denied the scientific reality of global warming. It has ignored the math that doesn’t add up, the wars that don’t end on Trump deadlines, the C.E.O.s forecasting what could amount to huge job losses if A.I. transforms our economy and the catastrophic floods, which are harbingers of a changing climate. Mr. Trump declares victory. The camera focuses on the next shiny object. Negative consequences can be obfuscated today, blamed on others tomorrow.

Democrats are also trapped in this short-termism. Opposition to each action Mr. Trump takes may be morally and practically necessary, but it also reinforces his dominance over events. Every day brings a new battle, generating outrage that overwhelms their capacity to present a coherent alternative. The party spends more time defending what is being lost than imagining what will take its place. The public stares down at phones instead of looking to any horizon.

We are all living in the disorienting present, swept along by currents we don’t control. The distractions abound. The data centers get built. And we forget the inconvenience of reality itself: Mr. Trump may be able to escape the consequences of his actions; the rest of us cannot.

This crisis of short-termism has been building for a long time.

In the decades after World War II, the Cold War was a disciplining force. Competition with the Soviets compelled both parties to support — or at least accept — initiatives as diverse as the national security state, basic research, higher education, international development and civil rights. Despite partisan differences, there was a long-term consensus around the nation’s purpose.

With the end of the Cold War, politics descended into partisan political combat over seemingly small things — from manufactured scandals to culture wars. This spiral was suspended, briefly, to launch the war on terror — the last major bipartisan effort to remake government to serve a long-term objective, in this case a dubious one: waging a forever war abroad while securitizing much of American life at home.

By the time Barack Obama took office, a destabilizing asymmetry had taken hold. Democrats acquiesced to the war on terror, and Republicans never accepted the legitimacy of reforms like Obamacare or a clean-energy transition. Citizens United v. F.E.C. led to a flood of money in politics, incentivizing the constant courting of donors more intent on preventing government action than encouraging it. The courts were increasingly politicized. The internet-driven fracturing of media rewarded spectacle and conspiracy theory in place of context and cooperation. Since 2010, the only venue for major legislation has been large tax and spending bills that brought vertiginous swings through the first Trump and the Biden administrations.

The second Trump administration has fully normalized the ethos of short-termism. Mr. Trump does have an overarching promise about the future. But it is rooted in what he is destroying, not what he is building. By dismantling the administrative state, starving the government of funds, deregulating the economy, unraveling the international order, punishing countries with arbitrary tariffs and whitening the nation through mass deportations, he will reverse the globalization that has shaped our lives and the government that was built during the Cold War. On the other side of this destruction, he says, a new “golden age” awaits.

Ro Khanna, a Democratic congressman from Silicon Valley, worries that Democrats fail to understand the resonance of this vision. “We see all the destruction,” he told me, “but what we’re not seeing is that for the Trump voter, this is a strategy of reclaiming greatness.”

Precisely because this is correct as a political diagnosis, Democrats must convey how Mr. Trump’s approach is more of a pyramid scheme than a plan. Cuts to research will starve innovation. Tariffs are likely to drive trade to China. Tax cuts will almost certainly widen inequality. Mass deportations predictably divide communities and drive down productivity. The absence of international order risks more war. Deregulation removes our ability to address climate change and A.I. Mr. Trump is trying one last time to squeeze some juice out of a declining empire while passing the costs on to future generations. Beyond the daily outrages, that is the reality that Democrats must contend with.

“The old world is dying,” Antonio Gramsci wrote in another era of destruction, “and the new world struggles to be born. Now is the time of monsters.” We may be fated to live in such a time. But what new world will be born after this time? (...)

During the Kennedy-Johnson era, a youthful president and his successor forged a vision expansive enough to encompass desegregation, a stronger social safety net, investments in education, the creation of U.S.A.I.D. and the Peace Corps and the ascent of the space program. It was undercut by political violence and the moral and practical costs of Vietnam, yet it shaped our society so comprehensively that Republicans are still seeking to reverse it. Those advances depended not just on action by government, but also the transformative participation of the civil rights movement, business and labor, universities and a media and popular culture that did not shy from politics or capitulate to reactionary forces. It was a whole-of-society fight for the future.

Today, change similarly depends upon leaning into discomfort instead of avoiding division or offering false reassurance. Democrats must match the sense of crisis many Americans feel. Mr. Khanna summarized concerns that plague far too many Americans: “I don’t see myself in this future” and “What’s going to happen to my kids?” That existential crisis was the reason Mr. Trump was returned to power; his opposition needs to meet it.

This is not about skipping ahead to the fine points of policy proposals; it’s about a coherent vision. Instead of simply defending legacy programs, we should be considering what our social safety net is for. We should attack wealth inequality as an objective and propose solutions for deploying A.I. while protecting the dignity of human work and the vitality of our children. We need to envision a new immigration system, a clean-energy transition that lowers costs for consumers and a federal government that can once again attract young people to meet national challenges. Think of what a new Department of Education or development agency could do. We can no longer cling to a dying postwar era; we need to negotiate a new international order.

by Ben Rhodes, NY Times |  Read more:
Image: Lauren Peters-Collaer
[ed. Good summary. I wonder though how much of this is just a manifestation of deeper forces at work. For example, we seem unable to control technology even if it eventually kills us. If something can be built and is potentially profitable in some way, it'll get built. It's inevitable. Capitalism, socialism, religion, authoritarianism, etc. are all deep animating forces that, in different ways, reflect fundamental aspects of human nature and human striving, the undercurrents (forces) of which will always be present, and probably always in some tension. The key should be finding as near to an optimal balance as possible - surfing these currents for best solutions, so to speak. But, for sure, no one size fits all.]