Friday, October 17, 2025

via:
[ed. Oh man, I'm probably in the back row... naked.]

Enshittification: Why Everything Sucks Now

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It. (...)

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—e.g., by de-emphasizing news posts and links that lead away from Facebook—to maximize profits to shareholders. Voilà! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors. The second is government regulation, such as antitrust laws that keep corporate consolidation in check, or fines for dishonest practices that make cheating unprofitable.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, serve as constraints on enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion.

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far?

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

by Jennifer Ouellette and Cory Doctorow, Ars Technica | Read more:
Image: Julia Galdo and Cody Cloud (JUCO)/CC-BY 3.0
[ed. Do a search on this site for much more by Mr. Doctorow, including copyright and right-to-repair issues. Further on in this interview:]
***
When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

"What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

via:

Thursday, October 16, 2025

Kerry James Marshall, Untitled (Blanket Couple)

Tawaraya Sōtatsu (act. 1600-1640), Calligrapher Hon'ami Kōetsu (Japanese, 1558 - 1637), Flying Cranes and Poetry.

The Lost Art Of Thinking Historically

On a sun-drenched November day in Dallas, 1963, as President John F. Kennedy’s motorcade rounded the corner onto Elm Street, a single, baffling figure stood out against the cheerful crowd: a man holding a black umbrella aloft against the cloudless sky. Seconds later, shots rang out, and the world changed forever.

In the chaotic aftermath, as a nation grappled with an incomprehensible act of violence, the image of the “Umbrella Man” became a fetish, as novelist John Updike would later write, dangling around history’s neck. The man was an anomaly, a detail that didn’t fit. In a world desperate for causal links, his presence seemed anything but benign. Was the umbrella a secret signaling device? A disguised flechette gun that inflicted the first, mysterious throat wound? For years, investigators and conspiracy theorists alike saw him as a key to a sinister underpinning, a puzzle piece in a grand, nefarious design.

The truth, when it finally emerged, was nearly absurd in its banality. Testifying before a House committee in 1978, a Dallas warehouse worker named Louie Steven Witt admitted he was the man. His motive was not assassination, but heckling. The umbrella was a symbolic protest against the Kennedy family, referencing the Nazi-appeasing policies of former British Prime Minister Neville Chamberlain — whose signature accessory was an umbrella — and his association with JFK’s father, Joseph P. Kennedy, who had been an ambassador to the U.K. It was, as the investigator Josiah Thompson noted, an explanation “just wacky enough to be true.”

The story of the Umbrella Man reveals our deep-seated human desire to make sense of a complex universe through tidy, airtight explanations. We crave certainty, especially in the face of tragedy, and are quick to weave disparate facts into a coherent, and often sinister, narrative. We see a man with an umbrella on a sunny day and assume conspiracy, because the alternative — that the world is a stage for random, idiosyncratic and often meaningless acts — is far more unsettling. (...)

Making consequential choices about an unknowable future is a profoundly challenging task. The world is not a laboratory. It is a vortex of ambiguity, contingency and competing perspectives, where motives are unclear, evidence is contradictory and the significance of events changes with the passage of time. No economic model or regression analysis can fully explain the Umbrella Man, nor can it provide the clarity we need to navigate the intricate challenges of our time.

What we have lost, and what we desperately need to reclaim, is a different mode of cognition, a historical sensibility. This is not about memorizing dates and facts. It is, as the historian Gordon S. Wood describes it, a “different consciousness,” a way of understanding that profoundly influences how we see the world. It is a temperament that is comfortable with uncertainty, sensitive to context and aware of the powerful, often unpredictable rhythms of the past. To cultivate this sensibility is to acquire the intellectual virtues of modesty, curiosity and empathy — an antidote to the hubris of rigid, monocausal thinking.

The Historian’s Audacious Act

The stereotypical image of a historian is a collector of dusty facts, obsessed with the archives, who then weaves them into a story. But this portrait misses the audacious intellectual act at the heart of the discipline. (...)

This is an ambitious, almost brazen attempt to impose a shared order on the infinite, confusing array of facts and causes that mark our existence. It offers an argument about causality and agency — about who and what matters, and how the world works and why. Does change come from great leaders, collective institutions or vast, impersonal structural forces? A historian’s narrative is never just a story; it is a theory of change.

This process is fundamentally different from that of many other disciplines. Where social sciences often seek to create generalizable, predictive and parsimonious theories — the simplest explanation for the largest number of things — history revels in complexity. A historical sensibility is skeptical of master ideas or unitary historical motors. It recognizes that different things happen for different reasons, that direct causal connections can be elusive, and that the world is rife with unintended consequences. It makes no claim to predict the future; rather, it seeks to deepen our understanding of how the past unfolded into our present, reminding us, as British historian Sir Llewellyn Woodward said, that “our ignorance is very deep.”

This sensibility compels us to reconsider concepts we take for granted. We use terms such as “capitalism” and “human rights” as if they are timeless and universal, when in fact they are concepts that emerged and evolved at particular historical moments, often identified and defined by historians. A historical consciousness demands that we seek the origins of things we thought we understood and empathize with the past in its own context. This is to imagine ourselves in the shoes of those who came before, wrestling with their dilemmas in their world. It doesn’t mean suspending moral judgment, but rather being less confident that we — here today — have a monopoly on timeless insight.

Why We Get History Wrong

Thinking historically is valuable but rare. Most of us encounter “history” in three main ways, none of which cultivates this deeper consciousness. First, in school, where it is often presented as a dry chronology of dates and facts to be memorized with little connection to our lives. Second, through public history — museums, memorials, historical sites — which can inspire curiosity, but are themselves historical products, often reflecting the biases and blind spots of the era in which they were created. (A tour of Colonial Williamsburg may reveal more about the Rockefeller-funded restoration ethos of the 1930s than about the 18th-century reality it purports to represent.) Third, through bestselling books and documentaries, which may tell vivid, engaging stories, but can be hagiographic and anecdotal, oriented toward simple lessons and celebrating national myths rather than challenging our assumptions.

None of these is the same as developing a historical sensibility. They are more like comfort food, satisfying a deep urge to connect with the past but providing little real nourishment. At worst, they reinforce the very cognitive habits — the desire for certainty, simple narratives and clear heroes and villains — that a true historical sensibility seeks to question.

The academic discipline of history has, in recent decades, largely failed in its public duty. It has retreated from the consequential subjects of statecraft and strategy, seeing them as unworthy of scholarly pursuit. The rosters of tenured historians at major universities show a steep decline in scholars engaged with questions of war, peace and diplomacy. When they do address such topics, they often do so in a jargon-laden style that is inaccessible and unhelpful to decision-makers or the wider public.

This decline is a tragedy, especially at a time when leaders confronting complex global challenges are desperate for guidance. The field of history has become estranged from the very world of power and decision-making it is uniquely equipped to analyze. Historians and policymakers, who should be natural interlocutors, rarely engage one another. This has left a vacuum that is eagerly filled by other disciplines more confident in their ability to provide actionable advice — which is often dangerously simplistic. (...)

The Practice Of Thinking Historically

If a historical sensibility is the temperament, then thinking historically is the practice. It is the active deployment of that sensibility as a set of tools to assess the world and make more informed choices. It is a distinct epistemology, one that offers a powerful method for evaluating causality and agency, weighing competing narratives and navigating the dilemmas of decision-making without succumbing to what can be called “paralysis by analysis.” It offers not a crystal ball, but a more sophisticated lens — a historian’s microscope — through which to see the present.

Thinking historically begins by questioning vertical and horizontal time. The vertical axis asks: How did we get here? It is the rigorous construction of a chronology, not as a mere list of dates, but as a map of cause and effect. Where this timeline begins — with the Bolshevik Revolution of 1917, the end of World War II in 1945 or the rise of China in 1979 — fundamentally changes the story and its meaning. It reveals our own unspoken assumptions about what truly drives events.

The horizontal axis asks: What else is happening? It recognizes that history is not a single storyline but a thick tapestry of interwoven threads. The decision to escalate the war in Vietnam, for example, cannot be fully understood without examining the parallel, and seemingly contradictory, efforts by the same administration to cooperate with the Soviet Union on nuclear nonproliferation. Thinking historically is the act of integrating these divergent streams.

Crucially, this practice leads us to confront our own biases, particularly outcome bias. Because we know how the story ended — how the Cold War concluded or how the 2008 financial crisis resolved — we are tempted to construct a neat narrative of inevitability. Thinking historically resists this temptation. It demands that we try to see the world as the actors of the past saw it: through a foggy windshield, not a rearview mirror, facing a future of radical uncertainty. It restores a sense of contingency to the past, reminding us that choices mattered and that the world could have turned out differently.

Ultimately, thinking historically is about asking better, more probing questions. It is a disciplined curiosity that fosters an appreciation for the complex interplay of individual agency, structural forces and pure chance. Instead of offering easy answers, it provides the intellectual equipment to engage with hard questions, a skill indispensable for navigating a future that will surely be as unpredictable as the past.

by Francis Gavin, Noema |  Read more:
Image: Mr.Nelson design for Noema Magazine
[ed. Unfortunately, I'm not seeing a Renaissance in critical thinking anytime soon. See also: Believing misinformation is a “win” for some people, even when proven false (Ars Technica - below); and, Rescuing Democracy From The Quiet Rule Of AI (Noema).]

"Why do some people endorse claims that can easily be disproved? It’s one thing to believe false information, but another to actively stick with something that’s obviously wrong.

Our new research, published in the Journal of Social Psychology, suggests that some people consider it a “win” to lean in to known falsehoods. (...)

Rather than consider issues in light of actual facts, we suggest people with this mindset prioritize being independent from outside influence. It means you can justify espousing pretty much anything—the easier a statement is to disprove, the more of a power move it is to say it, as it symbolizes how far you’re willing to go... for some people, literal truth is not the point."

Mission Impossible

After the midair collision in January over the Potomac River between an Army helicopter and a regional jet packed with young figure skaters and their parents flying out of Wichita, Kansas, and considering the ongoing travails of the Boeing Company, which saw at least five of its airplanes crash last year, I was so concerned about the state of U.S. aviation that, when called on by this magazine to attend President Donald Trump’s military parade in Washington, on June 14, 2025, I decided to drive all the way from my home in Austin, Texas, even though it cost me two days behind the wheel and a gas bill as expensive as a plane ticket.

I was no less concerned about the prospect of standing on the National Mall on the day of the parade, a celebration of the two-hundred-fiftieth anniversary of the founding of the U.S. Army, which happened to coincide with Trump’s seventy-ninth birthday. The forecast predicted appropriately foul weather for the occasion, and there would be a number of helicopters, of both modern and Vietnam-era vintage, flying over the parade grounds. The Army’s recent track record didn’t bode well for those positioned under the flight path. In the past two years, there had been at least twenty-four serious accidents involving helicopters and nineteen fatalities, culminating with the collision over the Potomac, the deadliest incident in American commercial aviation since 2001.

A crash was not the only thing that I worried about. Acts of low-level domestic terrorism and random shootings take place routinely in this country, and although security at the parade would be tight, I wondered what the chance was of some sort of attack on the parade-goers, or even another attempt on Trump’s life. The probability seemed low, but considering the number of veterans who would be in attendance, I had occasion to recall a 2023 study that found that military service is the single strongest predictor of whether an American will commit a mass killing. (...)

Then there were the politics of the parade, the first procession of military forces past the White House since the end of the Gulf War. For weeks, opinion columnists and television pundits had been sounding the alarm over the controversial festivities, which they saw as another sign of America’s downward slide into authoritarianism, into fascism. Comparisons abounded to Mussolini’s Italy, Pinochet’s Chile, and Hitler’s Germany. A coalition of opposition groups had organized a day of protests under the slogan “No Kings,” and that morning, in thousands of cities across the United States, millions of demonstrators were assembling, waving signs that said things like stop fascism, resist fascism, and no to trump’s fascist military parade.

I was no more thrilled than they were about the idea of tanks and armored vehicles rolling down Constitution Avenue. Trump’s accelerationist instincts, the zeal of his fan base, and the complicity, cowardice, and inaction of the Democratic Party in the face of the governing Republican trifecta made the possibility of a military dictatorship in the United States seem borderline plausible. But in a reminder that Trump is not wildly popular with the electorate so much as unopposed by any effective political counterweight, groups of foreign tourists predominated among the parade’s early arrivals.

The first people I met in the surprisingly short line to pass through the security checkpoint were an affable pair of fun-loving Europeans. Jelena, a Slovenian, had come in hopes of meeting a husband. “If someone’s going to marry me,” she explained with a laugh, “it will be a Republican man.” Liberals were too elitist for her: “Democrats will ask what school I went to.” Her high-spirited wingman, a Bulgarian named Slavko, was drinking beer out of a plastic cup at eleven o’clock in the morning. He had come “to get fucking drunk and high all day long,” he told me, “and just hang out.”

There were a number of Trump voters in line, but they seemed muted, even reasonable, in their political views, far from the legions of MAGA faithful I had expected to encounter. David and Sandra Clark, a middle-aged couple from Carlisle, Pennsylvania, were divided in their opinions of the president. Sandra was not a fan, she said, and David described himself as a “marginal” Trump supporter. They had come to observe the Army’s semiquincentennial, a “momentous occasion,” he said. The day before, Israel had bombed Iran, opening yet another front in the apartheid state’s war against its Muslim neighbors, and the Clarks were concerned about the situation. “It seems like it could get out of hand,” he said. “I’m here to see the protesters,” Sandra put in. “I may join them.”

A few of the attendees trickling in had on red hats that said trump 2028 or make iran great again, but these slogans somehow lacked their intended provocative effect. I looked out over the Mall, where the second-rate exhibits that the Army had set up made a mockery of the parade’s $30 million price tag. Was this supposed to be a show of American military might? (...)

By midday, the heat was ungodly. Not a drop of the predicted rain fell, and not a breeze blew. Near a much-needed water station was an exhibit of military first-aid kits manned by a delegation from Fort Bragg’s 44th Medical Brigade, which recently saw three of its current or former soldiers convicted of federal drug-trafficking charges related to a racket smuggling ketamine out of Cameroon. After hydrating, I watched the 3rd Infantry Regiment, a ceremonial unit known as the Old Guard, spin and toss their rifles and bayonets to a smattering of languorous applause from a small crowd of South Asian tourists, aging veterans, and subdued MAGA fans.

What kind of fascism was this? Rather than the authoritarian spectacle that liberals had anticipated, the festivities seemed to be more a demonstration of political fatigue and civic apathy. And if Trump intended the parade to be an advertisement of America’s military strength, it would instead prove to be an inadvertent display of the armed forces’ creeping decrepitude, low morale, shrinking size, obsolescence, and dysfunction. (...)

During the speech, Trump touted his proposed trillion-dollar defense budget, taunted the reporters in attendance, warned of hordes of immigrants coming from “the Congo in Africa,” denounced the protesters in Los Angeles as “animals,” ridiculed transgender people, and promised the troops a pay raise, even as he repeatedly strayed from his prepared remarks to praise the good looks of handsome service members who caught his eye. “For two and a half centuries, our soldiers have marched into the raging fires of battle and obliterated America’s enemies,” Trump told the crowd. “Our Army has smashed foreign empires, humbled kings, toppled tyrants, and hunted terrorist savages through the very gates of hell,” he said. “They all fear us. And we have the greatest force anywhere on earth.” (...)

In point of fact, the modern American military is a much weaker and more debilitated force than Trump’s braggadocio, and the Defense Department’s gargantuan spending habits, might suggest. The United States has either failed to achieve its stated aims in, or outright lost, every major war it has waged since 1945—with the arguable exception of the Gulf War—and it only seems to be getting less effective as defense expenditures continue to rise. You don’t need to look back to U.S. defeats in Iraq or Afghanistan, much less Vietnam, to illustrate this point. Just one month before Trump’s parade, in May, our armed forces suffered a humiliating loss against a tiny but fearless adversary in Yemen, one of the poorest countries in the world.

The Houthi rebels, also known as Ansar Allah, have been defying the United States, Saudi Arabia, and Israel ever since they first emerged as a military force in 2004 protesting the U.S. invasion of Iraq, the Israeli occupation of Palestine, and the quisling Yemeni regime’s collaboration with the Bush Administration. After Hamas attacked Israel on October 7, 2023, the Houthis, who had endured nearly a decade of starvation under a U.S.-backed Saudi blockade of their ports, tried to force Israel and its allies to lift the siege of Gaza by using their scrappy speedboat navy and homemade arsenal of cheaply manufactured missiles, drones, and unmanned underwater vehicles to choke off maritime traffic in the Red Sea. In response, the Biden Administration, invoking the threat posed by the Houthis to freedom of navigation, launched a wave of air strikes on Yemen and dispatched a naval fleet to reopen the Bab el-Mandeb Strait. The campaign did not go well. A pair of Navy SEALs drowned while attempting to board a Houthi dhow, and the crew of the USS Gettysburg accidentally shot down an F/A-18F Super Hornet fighter jet after it took off from the USS Harry S. Truman, one of America’s premier aircraft carriers, which a short time later collided with an Egyptian merchant ship.

In January of this year, Trump declared the Houthis a terrorist organization and doubled down on Biden’s war. The administration replaced the commander of the Gettysburg and augmented U.S. assets in the region with another aircraft-carrier strike group, which costs $6.5 million a day to operate; B-2 bombers, which cost $90,000 per flight hour; and antimissile interceptors, which can cost $2.7 million apiece. In the span of a few weeks in March and April, the United States launched hundreds of air strikes on Yemen. The tough, ingenious (and dirt-poor) Houthis, protected by Yemen’s mountainous interior, fought back with the tenacity of drug-resistant microbes. They downed hundreds of millions of dollars’ worth of Reaper drones; nearly managed to shoot several F-16s and an F-35 out of the sky; and evaded air defenses to strike Israel with long-range drones, all the while continuing to harass commercial shipping in the Red Sea, which plummeted by 60 percent.

On April 28, American warplanes struck a migrant detention center in the northern Yemeni city of Sadah, then dropped more bombs on emergency workers who arrived in the aftermath. Sixty-eight people were killed. In retaliation, the Houthis launched a fusillade of ballistic missiles at the Truman, which turned tail and steamed away, causing another Super Hornet to slide off the deck into the ocean.

The loss of a second $67 million fighter jet was evidently a turning point for President Trump. In one month, the United States had used up much of its stockpile of guided missiles and lost a number of aircraft but failed to establish air superiority over a country with a per capita GDP one sixth the size of Haiti’s. To avoid further embarrassment, Trump officials declared Operation Rough Rider a success and ordered U.S. Central Command to “pause” operations, effectively capitulating to the Houthis. “We hit them very hard and they had a great ability to withstand punishment,” Trump conceded. “You could say there was a lot of bravery there.” The very same day, yet another $67 million Super Hornet slipped off the deck of the Truman and sank to the bottom of the sea. (...)

At last it was time for the parade. The thin crowd, which hadn’t thickened much over the course of the day, filtered through a secondary security checkpoint and took up positions along Constitution Avenue, angling for spots in the shade. I saw a woman changing a baby’s diaper at the base of a tree, and a shirtless old man in a cavalry hat standing atop an overflowing garbage can. With the sun still high in the sky at six o’clock, the heat had barely relented. Smoke from a wildfire in New Jersey had turned the overcast sky a dirty brown.

On the north side of the street, in front of the White House, a covered stage had been set up for the reviewing party, protected by bulletproof glass and flanked by tanks below. First to take his seat was the chairman of the Joint Chiefs of Staff, General Dan Caine, a “serial entrepreneur and investor,” according to his Air Force biography. The secretary of defense, former Fox News host Pete Hegseth, came out shortly after, wearing a blue suit and camouflage tie, followed by Vice President J. D. Vance, who garnered scattered claps and whistles from the crowd. More-enthusiastic applause greeted President Trump’s appearance onstage, accompanied by a jarring blast of trumpets, but the cheering was still rather sedate. First Lady Melania Trump stood beside him, looking down at the crowd with cold contempt. The whole perverse regime was onstage, including Kristi Noem and Marco Rubio. Seeing them seated there in such close proximity, I found myself wondering how long-range those Houthi drones really are.

Throughout the day, I had spoken to various Trump voters and tried to sound out their opinions on Trump’s brand of militarism and his foreign policy. Rather than any ethos or ideology that could support the renewal of National Socialism in the United States, I found them to be motivated mostly by tired cultural grudges, xenophobic resentment, social-media memes, and civic illiteracy. Few were enthusiastic about defending Trump’s complete capitulation to Israel and the neocons.

Trump voters know just as well as the rest of us that the terror wars were a mistake. We all know that they were based on lies. We are all well aware that our side lost, and that the defeats were costly, and indeed ruinous. We are going to keep starting new wars anyway, and losing them too. As President Biden said last year of his administration’s air strikes on Yemen: “Are they stopping the Houthis? No. Are they going to continue? Yes.”

This isn’t a sign of ascendant fascism so much as the nadir of late-stage capitalism, which depends on forever wars to juice corporate profits at a time of falling rates of return on investment. In its doddering senescence, the capitalist war machine is no less murderous than fascism was—witness the millions of Muslims killed by the United States and Israel since 2001—but it has considerably lower production values. In this soft dystopia, our military forces will not be destroyed in a cataclysmic confrontation with the armies of Communism, as befell Nazi Germany on the Eastern Front. Instead, the defense oligarchs who own Congress will go on pocketing the money allocated to the military, just as they have been for the past forty years, until nothing is left but a hollow shell, a shrinking and sclerotic military so debilitated by graft, suicides, overdoses, and violent crime that it’s incapable of fulfilling its mission, and suitable only for use in theatrical deployments at home beating up protesters and rounding up migrants and the homeless.

Mustering the last of my morale, I trudged back to Constitution Avenue and took my place among the remaining parade-goers. One of the last formations to march past was an Army weapons-testing platoon accompanied by a number of small quadcopter drones. Quadcopters like these have proved pivotal in Ukraine, but the United States hardly makes any. China can churn out an estimated hundred cheap, disposable drones for every one produced in America. In an effort to close the gap, Pete Hegseth has announced new initiatives to boost domestic manufacturing of the devices, but early results have not been promising. A recent report in the New York Times described an exercise in Alaska in which defense contractors and soldiers tested prototypes of U.S.-built “one-way” kamikaze drones with results so dismal they were almost comical. None of the tests described were successful. The drones failed to launch or missed their targets. One crashed into a mountain.

The quadcopters hovering over the testing platoon at the rear of the parade were the X10D model made by Skydio, the largest U.S. drone manufacturer. Not long ago, Skydio transitioned its business from consumer to military and police drones, targeting markets in Ukraine, Israel, and elsewhere. After Skydio sold drones to Taiwan, Beijing retaliated last year by cutting off the company’s access to Chinese batteries, prompting the company to ration them to only one per drone. I noticed that one of the Skydio quadcopters hovering over the parade had dropped out of view. I couldn’t see where it had gone. Then one of the soldiers in the testing platoon marched past, holding it up over his head, make-believing that it was still aloft.

by Seth Harp, Harper's |  Read more:
Images: uncredited 

Inside the Web Infrastructure Revolt Over Google’s AI Overviews

It could be a consequential act of quiet regulation. Cloudflare, a web infrastructure company, has updated millions of websites' robots.txt files in an effort to force Google to change how it crawls them to fuel its AI products and initiatives.

We spoke with Cloudflare CEO Matthew Prince about what exactly is going on here, why it matters, and what the web might soon look like. But to get into that, we need to cover a little background first.

The new change, which Cloudflare calls its Content Signals Policy, came after publishers and other companies that depend on web traffic cried foul over Google's AI Overviews and similar AI answer engines, saying they are sharply cutting those companies' path to revenue because they don't send traffic back to the source of the information.

There have been lawsuits, efforts to kick-start new marketplaces to ensure compensation, and more—but few companies have the kind of leverage Cloudflare does. Its products and services back something close to 20 percent of the web, and thus a significant slice of the websites that show up on search results pages or that fuel large language models.

"Almost every reasonable AI company that's out there is saying, listen, if it's a fair playing field, then we're happy to pay for content," Prince said. "The problem is that all of them are terrified of Google because if Google gets content for free but they all have to pay for it, they are always going to be at an inherent disadvantage."

This is happening because Google is using its dominant position in search to ensure that web publishers allow their content to be used in ways that they might not otherwise permit.

The changing norms of the web

Since 2023, Google has offered a way for website administrators to opt their content out of use for training Google's large language models, such as Gemini.

However, allowing pages to be indexed by Google's search crawlers and shown in results requires accepting that they'll also be used to generate AI Overviews at the top of results pages through a process called retrieval-augmented generation (RAG).

That's not so for many other crawlers, making Google an outlier among major players.

This is a sore point for a wide range of website administrators, from news websites that publish journalism to investment banks that produce research reports.

A July study from the Pew Research Center analyzed data from 900 adults in the US and found that AI Overviews cut referrals nearly in half. Specifically, users clicked a link on a page with AI Overviews at the top just 8 percent of the time, compared to 15 percent for search engine results pages without those summaries.

And a report in The Wall Street Journal cited a wide range of sources—including internal traffic metrics from numerous major publications like The New York Times and Business Insider—to describe industry-wide plummets in website traffic that those publishers said were tied to AI summaries, leading to layoffs and strategic shifts.

In August, Google's head of search, Liz Reid, disputed the validity and applicability of studies and publisher reports of reduced link clicks in search. "Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year," she wrote, going on to say that reports of big declines were "often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search."

Publishers aren't convinced. Penske Media Corporation, which owns brands like The Hollywood Reporter and Rolling Stone, sued Google over AI Overviews in September. The suit claims that affiliate link revenue has dropped by more than a third in the past year, due in large part to Google's overviews—a threatening shortfall in a business that already has difficult margins.

Penske's suit specifically noted that because Google bundles traditional search engine indexing and RAG use together, the company has no choice but to allow Google to keep summarizing its articles, as cutting off Google search referrals entirely would be financially fatal.

Since the earliest days of digital publishing, referrals have in one way or another acted as the backbone of the web's economy. Content could be made available freely to both human readers and crawlers, and norms were applied across the web to allow information to be tracked back to its source and give that source an opportunity to monetize its content to sustain itself.

Today, there's a panic that the old system isn't working anymore as content summaries via RAG have become more common, and along with other players, Cloudflare is trying to update those norms to reflect the current reality.

A mass-scale update to robots.txt

Announced on September 24, Cloudflare's Content Signals Policy is an effort to use the company's influential market position to change how content is used by web crawlers. It involves updating millions of websites' robots.txt files.

Starting in 1994, websites began placing a file called "robots.txt" at the domain root to indicate to automated web crawlers which parts of the domain should be crawled and indexed and which should be ignored. The standard became near-universal over the years; honoring it has been a key part of how Google's web crawlers operate. (...)
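
For the unfamiliar, a robots.txt file carrying one of the new content-signal entries might look roughly like the sketch below. The signal names (search, ai-input, ai-train) come from Cloudflare's published policy; the comment text and the particular yes/no choices here are illustrative, not any real site's file.

    # Content signals declare how collected content may be used:
    #   search   = build a search index and link back to this site
    #   ai-input = feed content into AI answers (e.g., RAG summaries)
    #   ai-train = train or fine-tune AI models
    Content-Signal: search=yes, ai-input=no, ai-train=no

    User-agent: *
    Disallow: /private/

Because the signals ride along in a file crawlers already parse, the scheme requires no new protocol; whether Google treats the signals as binding is precisely the open question.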

The next web paradigm

It takes a company with Cloudflare's scale to do something like this with any hope that it will have an impact. If just a few websites made this change, Google would have an easier time ignoring it, or worse yet, it could simply stop crawling them to avoid the problem. Since Cloudflare is entangled with millions of websites, Google couldn't do that without materially impacting the quality of the search experience.

Cloudflare has a vested interest in the general health of the web, but there are other strategic considerations at play, too. The company has been working on tools to assist with RAG on customers' websites in partnership with Microsoft-owned Google competitor Bing and has experimented with a marketplace that provides a way for websites to charge crawlers for scraping the sites for AI, though what final form that might take is still unclear.

I asked Prince directly if this comes from a place of conviction. "There are very few times that opportunities come along where you get to help think through what a future better business model of an organization or institution as large as the Internet and as important as the Internet is," he said. "As we do that, I think that we should all be thinking about what have we learned that was good about the Internet in the past and what have we learned that was bad about the Internet in the past."

by Samuel Axon, Ars Technica |  Read more:
Image: Cloudflare CEO Matthew Prince. Noam Galai for TechCrunch (CC BY 2.0)

Wednesday, October 15, 2025

Christian Dior: silk and lace slip dress, S/S 2002, designed by John Galliano
via:

Lego Sub

via:
[ed. My grandson can build me one.]

Robotics Has Catapulted Beijing Into a Dominant Position

Western executives who visit China are coming back terrified.

“It’s the most humbling thing I’ve ever seen,” said Ford’s chief executive about his recent trip to China.

After visiting a string of factories, Jim Farley was left astonished by the technical innovations being packed into Chinese cars – from self-driving software to facial recognition.

“Their cost and the quality of their vehicles is far superior to what I see in the West,” Farley warned in July.

“We are in a global competition with China, and it’s not just EVs. And if we lose this, we do not have a future at Ford.”

The car industry boss is not the only Western executive to have returned shaken following a visit to the Far East.

Andrew Forrest, the Australian billionaire behind mining giant Fortescue – which is investing massively in green energy – says his trips to China convinced him to abandon his company’s attempts to manufacture electric vehicle powertrains in-house.

“I can take you to factories [in China] now, where you’ll basically be alongside a big conveyor and the machines come out of the floor and begin to assemble parts,” he says.

“And you’re walking alongside this conveyor, and after about 800, 900 metres, a truck drives out. There are no people – everything is robotic.”

Other executives describe vast, “dark factories” where robots do so much of the work alone that there is no need to even leave the lights on for humans.

“We visited a dark factory producing some astronomical number of mobile phones,” recalls Greg Jackson, the boss of British energy supplier Octopus.

“The process was so heavily automated that there were no workers on the manufacturing side, just a small number who were there to ensure the plant was working.

“You get this sense of a change, where China’s competitiveness has gone from being about government subsidies and low wages to a tremendous number of highly skilled, educated engineers who are innovating like mad.”

by Matt Oliver, Telegraph |  Read more:
Images: uncredited
[ed. Meanwhile we're busy turning people against each other and trying to bring back low-wage industrial jobs (that'll probably be obsolete in a few years if they aren't already). Guess who's got the momentum and strategic vision.]

Cañones y Mantequilla

"The song is featured in "Tierra y Silencio," a short film by Beatriz Abad. "Tierra y Silencio" tells the ins and outs of the people of a place ruled by a landowner, Krishna, who rebuilt that world to give the people a new opportunity. Now, the world of "Tierra y Silencio" is crumbling, driven by the same negative feelings that drove Krishna to flee the cities long ago. One night, the lives of its protagonists unite in a dark evening of judgment where the earth will protest their evil deeds and have the final say for all of them."

[ed. Still can't tell what's going on.]

Everything Is Television

A spooky convergence is happening in media. Everything that is not already television is turning into television. Three examples:

1. You learn a lot about a company when its back is against the wall. This summer, we learned something important about Meta, the parent company of Facebook and Instagram. In an antitrust case with the Federal Trade Commission, Meta filed a legal brief on August 6, in which it made a startling claim. Meta cannot possibly be a social media monopoly, Meta said, because it is not really a social media company.

Only a small share of time spent on its social-networking platforms is truly “social” networking—that is, time spent checking in with friends and family. More than 80 percent of time spent on Facebook and more than 90 percent of time spent on Instagram is spent watching videos, the company reported. Most of that time is spent watching content from creators whom the user does not know. From the FTC filing:
Today, only a fraction of time spent on Meta’s services—7% on Instagram, 17% on Facebook—involves consuming content from online “friends” (“friend sharing”). A majority of time spent on both apps is watching videos, increasingly short-form videos that are “unconnected”—i.e., not from a friend or followed account—and recommended by AI-powered algorithms Meta developed as a direct competitive response to TikTok’s rise, which stalled Meta’s growth.
Social media has evolved from text to photo to video to streams of text, photo, and video, and finally, it seems to have reached a kind of settled end state, in which TikTok and Meta are trying to become the same thing: a screen showing hours and hours of video made by people we don’t know. Social media has turned into television.

2. When I read the Meta filing, I had been thinking about something very different: the future of my podcast, Plain English.

When podcasts got started, they were radio for the Internet. This really appealed to me when I started my show. I never watch the news on television, and I love listening to podcasts while I make coffee and go on walks, and I’d prefer to make the sort of media that I consume. Plus, as a host, I thought I wanted to have conversations focused on the substance of the words rather than on ancillary concerns about production value and lighting.

But the most successful podcasts these days are all becoming YouTube shows. Industry analysts say consumption of video podcasts is growing twenty times faster than that of audio-only shows, and more than half of the world’s top shows now release video versions. YouTube has quietly become the most popular platform for podcasts, and it’s not even close. On Spotify, the number of video podcasts has nearly tripled since 2023, and video podcasts are significantly outgrowing non-video podcasts. Does it really make sense to insist on an audio-only podcast in 2025? I do not think so. Reality is screaming loudly in my ear, and its message is clear: Podcasts are turning into television.

3. In the last few weeks, Meta introduced a product called Vibes, and OpenAI announced Sora. Both are AI social networks where users can watch endless videos generated by artificial intelligence. (For your amusement, or horror, or whatever, here is: Sam Altman stealing GPUs at Target to make more AI; the O.J. Simpson trial as an amusement park ride; and Stephen Hawking entering a professional wrestling ring.)

Some tech analysts predict that these tools will lead to an efflorescence of creativity. “Sora feels like enabling everyone to be a TikTok creator,” the investor and tech analyst MG Siegler wrote. But the internet’s history suggests that, if these products succeed, they will follow what Ben Thompson calls the 90/9/1 rule: 90 percent of users consume, 9 percent remix and distribute, and just 1 percent actually create. In fact, as Scott Galloway has reported, 94 percent of YouTube views come from 4 percent of videos, and 89 percent of TikTok views come from 5 percent of videos. Even the architects of artificial intelligence, who imagine themselves on the path to creating the last invention, are busy building another infinite sequence of video made by people we don’t know. Even AI wants to be television.

Too Much Flow


Whether the starting point is a student directory (Facebook), radio, or an AI image generator, the end point seems to be the same: a river of short-form video. In mathematics, the word “attractor” describes a state toward which a dynamic system tends to evolve. To take a classic example: Drop a marble into a bowl, and it will trace several loops around the bowl’s curves before settling to rest at the bottom. In the same way, water draining in a sink will ultimately form a spiral pattern around the drain. Complex systems often settle into recurring forms, if you give them enough time. Television seems to be the attractor of all media.

By “television,” I am referring to something bigger than broadcast TV, the cable bundle, or Netflix. In his 1974 book Television: Technology and Cultural Form, Raymond Williams wrote that “in all communications systems before [television], the essential items were discrete.” That is, a book is bound and finite, existing on its own terms. A play is performed in a particular theater at a set hour. Williams argued that television shifted culture from discrete and bounded products to a continuous, streaming sequence of images and sounds, which he called “flow.” When I say “everything is turning into television,” what I mean is that disparate forms of media and entertainment are converging on one thing: the continuous flow of episodic video.

By Williams’s definition, platforms like YouTube and TikTok are an even more perfect expression of television than old-fashioned television itself. On NBC or HBO, one might tune in to watch a show that feels particular and essential. On TikTok, by contrast, nothing is essential: any one piece of content is incidental, even disposable. The platform’s allure is the infinitude promised by its algorithm. It is the flow, not the content, that is primary.

One implication of “everything is becoming television” is that there really is too much television—so much, in fact, that some TV is now made with the assumption that audiences are always already distracted and doing something else. Netflix producers reportedly instruct screenwriters to make plots as obvious as possible, to avoid confusing viewers who are half-watching—or quarter-watching, if that’s a thing now—while they scroll through their phones. (...)

Among Netflix’s 36,000 micro-genres, one is literally called “casual viewing.” The label is reportedly reserved for sitcoms, soap operas, or movies that, as the Hollywood Reporter recently described the 2024 Jennifer Lopez film Atlas, are “made to half-watch while doing laundry.” (...) The whole point is that it’s supposed to just be there, glowing, while you do something else. Perhaps a great deal of television is not meant to absorb our attention at all, but rather to dab away at it, to soak up tiny droplets of our sensory experience while our focus dances across other screens. You might even say that much television is not made to be watched at all. It is made to flow. The play button is the point.

Lonely, Mean, and Dumb

… and why does this matter? Fine question. And, perhaps, this is a good place for a confession. I like television. I follow some spectacular YouTube channels. I am not on Instagram or TikTok, but most of the people I know and love are on one or both. My beef is not with the entire medium of moving images. My concern is what happens when the grammar of television rather suddenly conquers the entire media landscape.

In the last few weeks, I have been writing a lot about two big trends in American life that do not necessarily overlap. My work on the “Antisocial Century” traces the rise of solitude in American life and its effects on economics, politics, and society. My work on “the end of thinking” follows the decline of literacy and numeracy scores in the U.S. and the handoff from a culture of literacy to a culture of orality. Neither of these trends is exclusively caused by the logic of television colonizing all media. But both trends are significantly exacerbated by it. 

Television’s role in the rise of solitude cannot be overlooked. In Bowling Alone, the Harvard scholar Robert Putnam wrote that between 1965 and 1995, the typical adult gained six hours a week in leisure time. As I wrote, they could have used those additional 300 hours a year to learn a new skill, or participate in their community, or have more children. Instead, the typical American funneled almost all of this extra time into watching more TV. Television instantly changed America’s interior decorating, relationships, and communities: (...)

Digital media, empowered by the serum of algorithmic feeds, has become super-television: more images, more videos, more isolation. Home-alone time has surged as our devices have become bottomless feeds of video content. Rather than escaping the solitude crisis that Putnam described in the 1990s, we now seem to be even more on our own. (Not to mention: meaner and stupider, too.)

It would be rash to blame our berserk political moment entirely on short-form video, but it would be careless to forget that some people really did try to warn us that this was coming. In Amusing Ourselves to Death, Neil Postman wrote that “each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility.” Television speaks to us in a particular dialect, Postman argued. When everything turns into television, every form of communication starts to adopt television’s values: immediacy, emotion, spectacle, brevity. In the glow of a local news program, or an outraged news feed, the viewer bathes in a vat of their own cortisol. When everything is urgent, nothing is truly important. Politics becomes theater. Science becomes storytelling. News becomes performance. The result, Postman warned, is a society that forgets how to think in paragraphs, and learns instead to think in scenes. (...)

When literally everything becomes television, what disappears is not something so broad as intelligence (although that seems to be going, too) but something harder to put into words, and even harder to prove the value of. It’s something like inwardness. The capacity for solitude, for sustained attention, for meaning that penetrates inward rather than swipes away at the tip of a finger: These virtues feel out of step with a world where every medium is the same medium and everything in life converges to the value system of the same thing, which is television. 

by Derek Thompson |  Read more:
Image: Ajeet Mestry on Unsplash
[ed. See also: The Last Days Of Social Media (Noema).]

via:

Daniel G. Jay, Heisenberg and Schrödinger’s Cat #3, 2025

The Limits of Data

Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least on the dimensions that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?

It’s tempting to use the term “intangible” when what we really mean is that such things are hard to quantify with the kinds of measuring tools used by modern bureaucratic institutions. The gap between reality and what’s easy to measure shows up everywhere. Consider cost-benefit analysis, which is supposed to be an objective—and therefore unimpeachable—procedure for making decisions by tallying up expected financial costs and expected financial benefits. But the process is deeply constrained by the kinds of cost information that are easy to gather. It’s relatively straightforward to provide data to support claims about how a certain new overpass might help traffic move efficiently, get people to work faster, and attract more businesses to a downtown. It’s harder to produce data in support of claims about how the overpass might reduce the beauty of a city, or how the noise might affect citizens’ well-being, or how a wall that divides neighborhoods could erode community. From a policy perspective, anything hard to measure can start to fade from sight.
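The filtering is mechanical, and a toy tally makes it visible. In the sketch below (my illustration, with invented dollar figures, not Nguyen’s), line items that nobody managed to quantify cannot enter the sum, so the “objective” net benefit is computed as if they were worth exactly zero.

```python
# Toy cost-benefit analysis for a hypothetical overpass.
# All dollar figures are invented for illustration; None marks the
# factors that nobody managed to quantify.
factors = {
    "faster commutes":       12_000_000,
    "new downtown business":  5_000_000,
    "construction cost":     -9_000_000,
    "lost city beauty":      None,
    "noise near homes":      None,
    "neighborhoods divided": None,
}

quantified = {k: v for k, v in factors.items() if v is not None}
net = sum(quantified.values())

print(f"Net benefit: ${net:+,}")  # Net benefit: $+8,000,000
print("Silently excluded:", ", ".join(k for k, v in factors.items() if v is None))
```

Nothing in the arithmetic flags the omission; the total looks exactly as authoritative either way.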

An optimist might hope to get around these problems with better data and metrics. What I want to show here is that these limitations on data are no accident. The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.

I’m not just talking about contingent social biases. Obviously, datasets are bad when the collection procedures oversample or undersample by race, gender, or wealth. But even if analysts can correct for those sorts of biases, there are other, intrinsic biases built into the methodology of data. Data collection techniques must be repeatable across vast scales. They require standardized categories. Repeatability and standardization make data-based methods powerful, but that power has a price. It limits the kinds of information we can collect. (...)

These limitations are particularly worrisome when we’re thinking about success—about targets, goals, and outcomes. When actions must be justified in the language of data, then the limitations inherent in data collection become limitations on human values. And I’m not worried just about perverse incentives and situations in which bad actors game the metrics. I’m worried that an overemphasis on data may mislead even the most well-intentioned of policymakers, who don’t realize that the demand to be “objective”—in this very specific and institutional sense—leads them to systematically ignore a crucial chunk of the world.

Decontextualization

Not all kinds of knowledge, and not all kinds of understanding, can count as information and as data. Historian of quantification Theodore Porter describes “information” as a kind of “communication with people who are unknown to one another, and who thus have no personal basis for shared understanding.” In other words, “information” has been prepared to be understood by distant strangers. The clearest example of this kind of information is quantitative data. Data has been designed to be collected at scale and aggregated. Data must be something that can be collected by and exchanged between different people in all kinds of contexts, with all kinds of backgrounds. Data is portable, which is exactly what makes it powerful. But that portability has a hidden price: to transform our understanding and observations into data, we must perform an act of decontextualization.

An easy example is grading. I’m a philosophy professor. I issue two evaluations for every student essay: one is a long, detailed qualitative evaluation (paragraphs of written comments) and the other is a letter grade (a quantitative evaluation). The quantitative evaluation can travel easily between institutions. Different people can input into the same system, so it can easily generate aggregates and averages—the student’s grade point average, for instance. But think about everything that’s stripped out of the evaluation to enable this portable, aggregable kernel.

Qualitative evaluations can be flexible and responsive and draw on shared history. I can tailor my written assessment to the student’s goals. If a paper is trying to be original, I can comment on its originality. If a paper is trying to precisely explain a bit of Aristotle, I can assess it for its argumentative rigor. If one student wants to be a journalist, I can focus on their writing quality. If a nursing student cares about the real-world applications of ethical theories, I can respond in kind. Most importantly, I can rely on our shared context. I can say things that might be unclear to an outside observer because the student and I have been in a classroom together, because we’ve talked for hours and hours about philosophy and critical thinking and writing, because I have a sense for what a particular student wants and needs. I can provide more subtle, complex, multidimensional responses. But, unlike a letter grade, such written evaluations travel poorly to distant administrators, deans, and hiring departments.

Quantification, as used in real-world institutions, works by removing contextually sensitive information. The process of quantification is designed to produce highly portable information, like a letter grade. Letter grades can be understood by everybody; they travel easily. A letter grade is a simple ranking on a one-dimensional spectrum. Once an institution has created this stable, context-invariant kernel, it can easily aggregate this kind of information—for students, for student cohorts, for whole universities. A pile of qualitative information, in the form of thousands of written comments, for example, does not aggregate. It is unwieldy, bordering on unusable, to the administrator, the law school admissions officer, or future employer—unless it has been transformed and decontextualized.
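The asymmetry can be shown in a few lines. In this sketch (a hypothetical illustration, not an example from the essay), letter grades average into a GPA instantly, while the written comments, which carry all the context, have no meaningful average at all.

```python
# Letter grades are context-free points on a single scale, so they aggregate.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

grades = ["A", "B", "A", "C"]
gpa = sum(GRADE_POINTS[g] for g in grades) / len(grades)
print(f"GPA: {gpa:.2f}")  # GPA: 3.25 -- portable to any registrar anywhere

# Qualitative evaluations carry the context, but there is no mean() for
# paragraphs; they resist aggregation.
comments = [
    "Original reading of Aristotle, though the rigor slips in section 2.",
    "Clear prose; tie the ethics framework back to your nursing cases.",
]
# sum(comments) / len(comments)  -> TypeError: strings don't average
```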

So here is the first principle of data: collecting data involves a trade-off. We gain portability and aggregability at the price of context-sensitivity and nuance. What’s missing from data? Data is designed to be usable and comprehensible by very different people from very different contexts and backgrounds. So data collection procedures tend to filter out highly context-based understanding. Much here depends on who’s permitted to input the data and who the data is intended for. 

by C. Thi Nguyen, Issues in Science and Technology |  Read more:
Image: Shonagh Rae

Tuesday, October 14, 2025

Is it Really Different this Time?

What is amusing is just how much talk there has been about the AI investment bubble, and what it will do or not do to the markets and the economy when it implodes or doesn’t implode: That it’s almost like at the peak of the Dotcom Bubble. That it’s much worse than at the peak of the Dotcom Bubble. That it’s nothing like the Dotcom Bubble because this time it’s different. That even if it’s like the Dotcom Bubble and then turns into the Dotcom Bust, or worse, it’s still worth it because AI will be around and change the world, just like the Internet is still around and changed the world, even if those first investors got wiped out, or whatever.

There are many voices that loudly point this out, and point out just how risky it is to bet on hocus-pocus money, or that explain in detail why this isn’t risky at all, why this is not anything like the Dotcom Bubble, why this time it’s different – the four most dangerous words in investing.

The talk fills the spectrum, and these are people with enough stature to be quoted in the media: Jamie Dimon, Jeff Bezos, the Bank of England, Goldman Sachs analysts, IMF Managing Director Kristalina Georgieva…

The focus is on the big-tech-big-startup circularity of hocus-pocus deals between Nvidia, OpenAI, and AMD, along with Amazon, Microsoft, Alphabet, Meta, Tesla, Oracle, and many others, including SoftBank, of course.

OpenAI now has an official “valuation” — based on its secondary stock offering — of $500 billion, though it’s bleeding increasingly huge amounts of cash. And there are lots of players in between and around them. They all toss around announcements of AI hocus-pocus deals between them.

OpenAI has announced deals totaling $1 trillion with a small number of tech companies, at the top of which are Nvidia ($500 billion), Oracle ($300 billion), and AMD ($270 billion). Each of these announcements causes the stocks of these companies to spike massively – the direct and immediate effects of hocus-pocus money.

OpenAI obviously doesn’t have $1 trillion; it’s burning prodigious amounts of cash. And so it’s trying to rake in investment commitments from the same companies that it would buy equipment from, and engineer creative deals that cause these stock prices to spike, and so the hocus-pocus money announcements keep circulating.

OpenAI’s idea of building data centers with Nvidia GPUs that would require 10 gigawatts (GW) of power is just mind-boggling. The biggest nuclear powerplant in the US, Plant Vogtle in Georgia, with four reactors, including two that came online in 2023 and 2024, has a generating capacity of about 4.5 GW. All nuclear powerplants in the US combined have a generating capacity of 97 GW.
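The scale comparison is simple arithmetic, using only the figures cited in the paragraph above:

```python
# Back-of-envelope scale check, using the figures cited above.
planned_gw = 10.0      # power draw of OpenAI's proposed data centers
vogtle_gw = 4.5        # Plant Vogtle, the biggest US nuclear powerplant
us_nuclear_gw = 97.0   # all US nuclear powerplants combined

print(f"{planned_gw / vogtle_gw:.1f} Vogtles")             # 2.2 Vogtles
print(f"{planned_gw / us_nuclear_gw:.0%} of US nuclear")   # 10% of US nuclear
```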

But it’s real money too. A lot of real money.

Big Tech is letting its huge piles of cash spill out into the economy to build this vast empire of technology that requires data centers that would consume huge amounts of electricity to let AI do its thing.

And these “hyperscalers” are leveraging that money flow with borrowing, by issuing large amounts of bonds.

And private credit has jumped into the mania to provide further leverage, lending large amounts to data-center startup “neocloud” companies that plan to build data centers and rent out the computing power; those loans are backed with collateral, namely the AI GPUs. No one knows what a used GPU, superseded by newer generations, will be worth three years from now, when the lenders might want to collect on a defaulted loan, but that’s the collateral.
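A toy depreciation model shows why that collateral should make lenders nervous. Every number below is an invented assumption (the loan size, the 80% starting loan-to-value, the 40% annual depreciation); the real resale curve for superseded GPUs is precisely what nobody knows.

```python
# Toy model of a GPU-collateralized loan. The depreciation rate and loan
# terms are invented assumptions; the real resale curve is unknown.
loan = 100_000_000          # principal, dollars
collateral = 125_000_000    # GPUs valued at today's prices (80% LTV)
annual_depreciation = 0.40  # guessed decline once newer chips ship

for year in range(1, 4):
    collateral *= 1 - annual_depreciation
    print(f"year {year}: collateral ${collateral:,.0f}, LTV {loan / collateral:.0%}")

# If the guessed curve holds, year-3 collateral is worth about $27M against
# a $100M loan (LTV ~370%): the lender is badly undersecured.
```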

The data centers are getting built. The costs of the equipment in them – revenues for companies that provide this equipment and related services – dwarf the costs of the building. And stocks of companies that supply this equipment and the services have been surging.

The bottleneck is power, and funds are flowing into that, but it takes a long time to build powerplants and transmission infrastructure.

Is it really different this time?

So there is this large-scale industrial aspect of the AI investment bubble. That was also the case in the Dotcom Bubble. The telecom infrastructure needed to be built out at great cost. Fiber optics made the internet what it is today. Those fibers needed to be drawn and turned into cables, and the cables needed to be laid across the world, and the servers, routers, and other equipment needed to be installed, and services were invented and provided, and businesses and households needed to be connected, and it was all real, and it was all very costly, requiring huge investments. But progress was slow and revenues lagged, and then these overhyped stocks just imploded under that weight, along with the stocks of the pioneers of e-commerce, internet advertising, streaming, and whatnot.

The Nasdaq, where much of it was concentrated, plunged by 78% over a period of two-and-a-half years, investors lost huge amounts of money, many got wiped out, thousands of companies and their stocks vanished or were bought for scrap when that investment bubble crashed. And a year into the crash, it triggered a recession in the US – and a mini-depression in Silicon Valley and San Francisco where much of this had played out.

Yet the internet thrived. Amazon barely survived the crash and then flourished in the new environment. But Amazon was one of the exceptions.

In this mania of hype, hocus-pocus deals, and huge amounts of real money fortified by leverage – all of which caused stock prices to explode – markets become edgy. Everyone is talking about it, everyone sees it, they’re all edgy, regardless of their narrative – whether a big selloff is inevitable, with deep consequences for the US economy, or whether this time it’s different and the mania can go on and isn’t even halfway done.

Whatever the narrative, it says risk in all-caps. Anything can prod these stock prices at their precarious levels to suddenly U-turn, and if the selloff goes on long enough, the investment bubble would come to a halt, and the hocus-pocus deals would be just that, and the whole construct would come apart. But AI would still be around doing its thing, just like the Internet.

by Wolf Richter, Wolf Street |  Read more:
Image: Alexas_Fotos on Unsplash