Showing posts with label Technology. Show all posts

Monday, April 6, 2026

China's AI Education Experiment

A deep dive.

Pilot schools in China are already using AI to grade children’s artwork, monitor their facial expressions during lectures, and screen them for psychological problems — and the Ministry of Education (MOE) wants schools across the country to follow suit.

Integrating AI into the education system has rapidly become a top priority of the Chinese central government, which is betting that AI tools can eliminate China’s vast educational inequities and make the next generation of workers more productive. The State Council highlighted education as a key area of focus in the “AI+” plan, it received a shout-out in the 15th Five-Year Plan, and in May 2025, the Ministry of Education (MOE) released a white paper on AI for education. That document proclaims that 2025 marks the dawn of an era (“智慧教育元年,” the “inaugural year of smart education”), the beginning of a system-wide effort to “intelligentize” 智能化 education using AI tools. The MOE’s goal: universalize basic AI access in primary and secondary schools by 2030. Industry received that signal and responded rapidly, with Alibaba Cloud releasing its own AI+education white paper the following month. But the gap between Beijing’s (and Hangzhou’s) techno-optimism and rural China’s reality is enormous.

This report explores why the Party wants to integrate AI into education, what applications the MOE is most optimistic about, and where the barriers to successful rollout lie. We’ll limit our analysis to K-12 education today, but university AI initiatives will be the focus of our next report in this series!

Institutional History

In official discourse, China is said to have entered a “post-equity era” 后均衡时代 since the MOE announced that all counties had met the baseline quality level for compulsory schooling in 2021. Now, the focus is shifting from access to education to improving the quality of that education. The 14th Five-Year Plan (2021-2025) prioritized expanding infrastructure in rural schools through the “county-level high school revitalization initiative” (县中振兴), part of which involved equipping classrooms with ‘smart hardware’ such as digitized blackboards. During this period, the Party spent significant resources to provide nearly every school with an internet connection.

Still, rural education in China faces serious structural challenges. I spoke with Leo He — a research fellow at the Hoover Institution who did NGO work in rural China from 2019 to 2023 — for a firsthand account of the situation. Every locality, he explained, has designated “elite” schools that talented students from surrounding areas compete to transfer into. The result is a system where “educational resources are systematically sucked up to the center from the periphery, leaving rural areas incredibly depleted.” While this arguably gives academically gifted students opportunities to develop their talents, it deprives most students of educational resources.

According to China’s 2020 census, only 30.6% of the population has ever attended high school (including non-academic vocational secondary school), which Stanford professor Scott Rozelle notes “is lower than South Africa, lower than Turkey and lower than Mexico.” In 2022, roughly 40% of China’s middle school graduates didn’t go on to attend high school of any kind, and among the students who do continue their education, national policy stipulates that roughly half (“五五分流,” the “50-50 split”) are funneled into non-academic vocational high schools with no path to enter college.

To understand how AI could fit into this picture, we first need to understand the political and economic factors that incentivize Beijing to care about students in the countryside. It’s not clear that more investment in education will translate to high economic growth at this point in China’s development path — the real youth unemployment rate is probably still around 20%, and there are fewer entry-level positions available just as a record number of new graduates enter the workforce. Rather, this is a priority for the Party because improving the education system is so popular.

When Rozelle’s team surveyed 1,800 rural mothers and asked what they wanted their children to aspire to, over 95% said, “I want my child to go to college.” In China, a degree from an elite college doesn’t just translate to higher earnings — it unlocks better healthcare via the hukou system, cushy “iron rice bowl” 铁饭碗 jobs, and above all, social prestige. In 2023, researchers at Stanford found that Chinese families spent an average of 17.1% of their annual household income on education, which amounts to 7.9% of annual household expenditures. (Households in the US and Japan, by comparison, dedicate just 1-2% of annual expenditures to education.) The poorest quartile of families in China devotes a staggering 56.8% of income to education, and education spending is inelastic — that is, it’s prioritized as a necessary expense — across all income levels.

As Andrew Kipnis, the anthropologist who wrote Governing Educational Desire, explained to ChinaTalk, educational reform is a priority for the party “because it’s a way of keeping people happy. If they think there’s some hope their child will attend university, that gives them some investment in the system.” But not every child can become part of the elite: “People who have gone to university won’t work in factories,” as Kipnis put it. No matter how popular it would be, Beijing is not interested in building a system where a college education is available to anyone who wants one. But within this zero-sum system, where anyone who receives an advantage is inherently disadvantaging someone else, the party still needs to make parents feel like their child is getting ahead. Infrastructure is pretty much the perfect tool for this. It makes schools feel luxurious on the ground without changing the fundamentals that make the system so unfair. Shiny new facilities deliver popularity gains immediately, and if your child doesn’t get into university years later, it’s their own damn fault.

Those incentives are shaping the world’s largest AI education experiment. China is not the only country betting that AI will transform education, but the scale and style of China’s ambitions are unmatched globally. While China started with pilot programs, South Korea’s government led with inflexible national-level implementation, spending US$850 million on an ambitious AI textbook initiative that collapsed after just 4 months. India’s edtech ecosystem is private-sector-led with little top-down guidance or regulation, which resulted in the high-profile implosion of Byju’s and a proliferation of predatory practices targeting low-income families. Japan, unlike China, pledged to make sure every student had a device before implementing AI teaching tools.

Ultimately, China stands out globally for the sheer scale of its AI education ambitions — and the scope of applications its edtech industry is targeting for AI integration.

by Lily Ottinger, ChinaTalk |  Read more:
Image: via
[ed. See also: Massive budget cuts for US science proposed again by Trump administration (Nature). National Science Foundation.]

Dating Apps: Giving Men What They Want But Not What They Need

Dating apps were built on the bones of Grindr. I have been known to joke that everything wrong with dating apps is divine retribution for culturally appropriating them from the gays.

Gay men, specifically, that’s important - the overwhelming majority of people making apps are still men, and most of those are still straight men, and while I don’t exactly have insider knowledge on this, it couldn’t be clearer to me that some open-ish minded straight tech boy heard from one of his gay male friends about being able to summon sex partners to his bed from the immediate vicinity after filtering on a bunch of lewd photos and thought: “There isn’t a straight man alive who wouldn’t consider giving up his left hand to have this experience with women. I could make a billion dollars making straight Grindr.”

And thus Tinder was born. Blah blah blah lust and greed sullying the purity of romantic and sexual love; a direction I could go, but instead we’re going to talk about the ways that playing to male preferences in the short term can easily ruin their entire lives, even when it was men’s idea.

Dating apps aggressively reflect male preferences, sexuality-neutral. They’re long on photos, short on text. They filter primarily on location, which has some usefulness, but is most useful if the question is “who’s geographically close enough to me that walking to my place for sex is a realistic option.”

Men love flipping through photos of people they’re attracted to - that alone drove much of the traffic to Facebook’s precursor, Hot or Not. These apps are built to give men a sexual scrolling experience as soothingly magnetic as any social media site while providing enough mystery to feel less degenerate than porn (the better for large doses and intermittent rewards).

For women, it’s grim. Yes, they get matches much more often than men do (largely because these extremely male-centric UI decisions lure vastly more male users than women; what economist could have predicted this problem with a heterosexual dating app). They don’t enjoy using these apps, not nearly to the degree or as often as men do. For most women, sifting through men feels dehumanizing, and sorting on pictures feels painfully limited (the male equivalent might be having to swipe based on photos of a woman’s favorite outfit, laid out on her bed. Vaguely boring and frustrating to have to make important decisions with so little information about the things you care about).

This isn’t just because of blackpill stuff about how men aren’t hot to women - that topic has been covered to death, yes women find men physically hot but no it doesn’t always work in such a way that static photos capture, so men are impossibly screwed by efforts to appeal to women with photos alone. There’s also the fact that men suck at taking pictures, because the market for photos of people is overwhelmingly men as buyers and women as suppliers, with the demand being for sexually attractive photos of women. Looking at photos of men is like driving a Nissan truck: it couldn’t be clearer that it is not your specialty and significantly worse than other products that your entire factory line was designed for.

You might think that dating apps are bad for men because they lead to men experiencing significant rejection - even the way my post is framed up until this point sort of implies as much. That framework, like much about dating apps, gets the whole picture subtly, insidiously wrong in a way that leaves people who take them at face value much worse off. You know who takes things at face value most often? You’re not going to believe this,

No, the greatest deprivation created by dating apps is specifically denying women and men the opportunity for women to keep men around in a general capacity. (If this idea makes you freak out about the friend zone, I’m almost impressed with you because young people seem to do so little socializing that no one complains about the friend zone anymore. Pat yourself on the back for having friends if you’ve managed to develop a resentment complex around the friend zone).

Most women develop attraction to men via proximity and time. Force a woman to choose if she wants the option to sleep with a man the second she meets him, and she will default to no in almost every single case. To many men, this reads as proof that the only men women authentically want are the ones women are open to sleeping with at first glance. Respectfully, you’re thinking like a guy, and if you believe that men and women are extremely different, I’m going to need you to trust that women develop affection for men differently than men do for women, such that you’ll ruin your life trying to figure out why women don’t desire you in the exact same way that you desire them...

One of the worst things you can do if you date women is to push them into a choice of yes or no as early as possible. You are simply too much of a risk on too many axes to get something other than a no unless you look like Chris Hemsworth, and even that wouldn’t get you yeses from 100% of the women you might ask out (hot men can still be shitty in about a thousand ways, and women often aren’t willing to take risks even for hotness. Again. They are not men). You might think that your goal should be to look like Chris Hemsworth, or alternatively to despair that you don’t look like Chris Hemsworth and go sulkily into that good night, but that’s you thinking like a guy and assuming that how women feel has to match how you feel. Frankly, that’s what got you into this mess: by trusting tech men who told you that you could game heterosexual dating by giving you an interface that pinged all your dopamine sensors while curiously robbing you of a lot of opportunities to find and develop a fulfilling relationship. [...]

The major product provided by a dating app is the illusion of participating in dating at all - some time swiping through faces, and congratulations, you are “dating”, you Tried, you do not need to do anything scarier or riskier or less fun than this.

by Eurydice, Eurydice Lives |  Read more:
Image: uncredited via

Saturday, April 4, 2026

Go Ahead and Use AI. It Will Only Help Me Dominate You.

Recently there has been a lot of commentary of the following type:

BAD WRITER [touchily]: “Actually, I do use AI to help me write.”

Okay. That checks out. Carry on.

Want to use AI as a Valuable Part of Your Writing Process? Want to use it to “generate pushback on my column thesis” and be “more comprehensible” and “craft unique angles” and offer “positive and negative feedback” and “scale the quantity” of your “output?”

Knock yourself out.

You have my blessing.

Hey buddy— go for it!

Some in the “real writer” community find this sort of rampant outsourcing of the writing process to AI to be distressing. Not me. Would I do it myself? No. I have self-respect. But I want to tell you, my friends, that you have my full support for all of it. Want to throw your dashed-off notes into ChatGPT and have it spit a draft back at you and then edit that and call it your own? Want to toss a few hastily written headlines at Claude and have it generate the outline of your piece? Want to dump your entire career archives into a chatbot and then order it to replicate your own voice so you don’t have to?

Do you, a grown man, a successful professional writer who has received a book deal paying you real US currency, want to use AI for the purpose of “making sure the book matches [your own] writing style”[???]? Guess what, brother: I support you. I affirm you. I am right here offering you a classic thumbs-up gesture of affirmation.

“Whoa, a writer who I have never regarded as particularly inventive is using AI? I am surprised and disappointed.” There’s a sentence I would never utter. Instead, I would accept the news of your AI use with total equanimity, nodding almost imperceptibly to indicate that this is not something worth raising my eyebrows over.

No, I will not be joining in the chorus of condemnation. On the contrary. If you are a professional writer, I want you to use AI. Because this industry is competitive. I’ll take any advantage I can get. And if you want to make your writing suck, that’s all the better for me. One less person outshining me.

The tepid, conformist nature of your AI-assisted prose will only make my unexpected bons mots stand out more sharply. While you lean on a technological crutch of grammatical mediocrity to drag your essays over the finish line, I’ll be metaphorically zipping past you on my “magic carpet” of words emerging directly from my own declining and unpredictable brain. Over time, the intellectual box into which AI has seduced your creative process will suffocate you, leaving your bereft readers little choice but to drift into my subscription base.

You’ll be all, “Politics in America is divided—but it doesn’t have to be. Let’s discuss how to bridge the partisan divide.” Your sense of joy at the possibilities of the English language will have been so eroded that you won’t even understand why that sucks shit. Meanwhile I’ll be dropping some wild similes you could never even imagine. “Politics is like a sea slug.” What?? How?? Readers will flock to me to find out. Too bad your AI editor struck that line from your piece as “indecipherable.”

You and your friend “Claude” wouldn’t last two seconds in my cipher.

Maybe you read the studies about how AI use causes “cognitive surrender” that slowly destroys your ability to think critically about the linguistic cud that the machine is serving you. Or about how it causes “cognitive foreclosure” that prevents you from ever developing the skills to critique AI output even if you wanted to. Maybe these studies give you pause, when you think about introducing these inscrutable tools of mental paralysis into your own creative process.

Don’t worry about it!

Life is hard enough already. You’re busy. You have lots of things to do—laundry, making lunch, and more. The last thing you need is a bunch of jealous (Brooklyn hipster) writers lecturing you about how this magical productivity booster is somehow “bad” for you. Those are probably the same haters who told you to stop doing so much crystal meth. Some people can’t stand to see you succeed!

I just checked a calendar—it’s 2026. AI is here to stay and you might as well beat the rush by using it more and more, right? Right. In the name of efficiency, it just makes sense for you to turn over ever greater portions of your thought process to this seductive helper, never stopping to ask yourself what it is costing you. You are a nice person and your job (writing) deserves to be easy. There, there. Allow yourself to sink into the warm opiate of cerebral ease. This is better. Yes. This is much better.

By all means—proceed.

And then, when you have settled into this comfortable pattern, sit back and watch me unsheathe my massive, work-hardened intellect, built to staggering strength through a daily regimen of thinking about stuff. I think you’ll find that your panicked efforts to resist my onslaught will prove unsuccessful, hampered as you are by atrophied muscles of the mind. Ask your AI companion for some final words of comfort. The hour of your doom draws near.

I will crush you with ease.

by Hamilton Nolan, How Things Work |  Read more:
Image: Getty
[ed. Haha...yep. : ) See also: Who Goes AI? (with respect to Dorothy Thompson's 'Who Goes Nazi', gracefully acknowledged by the author).]

The Big T-Shirt Payoff

The College Student—and His Cat Meme—Who Hunted the World’s Biggest Cyberweapon

Sitting in his dorm room at the Rochester Institute of Technology, Benjamin Brundage was closing in on a mystery that had even seasoned internet investigators baffled. A cat meme helped him crack the case.

A growing network of hacked devices was launching the biggest cyberattacks ever seen on the internet. It had become the most powerful cyberweapon ever assembled, large enough to knock a state or even a small country offline. Investigators didn’t know exactly who had built it—or how.
 
Brundage had been following the attacks, too—and, in between classes, was conducting his own investigation. In September, the college senior started messaging online with an anonymous user who seemed to have insider knowledge.

As they chatted on Discord, a platform favored by videogamers, Brundage was eager to get more information, but he didn’t want to come off as too serious and shut down the conversation. So every now and then he’d send a funny GIF to lighten the mood. Brundage was fluent in the memes, jokes and technical jargon popular with young gamers and hackers who are extremely online.

“It was a bit of just asking over and over again and then like being a bit unserious,” said Brundage.

At one point, he asked for some technical details. He followed up with the cat meme: a six-second clip that showed a hand adjusting a necktie on a fluffy gray cat.

Brundage didn’t expect it to work, but he got the information. “It took me by surprise,” he said.

Eventually the leaker hinted there was a new vulnerability on the internet. Brundage, who is 22, would learn it threatened tens of millions of consumers and as much as a quarter of the world’s corporations. As he unraveled the mystery, he impressed veteran researchers with his findings—including federal law enforcement, which took action against the network two weeks ago.

Chad Seaman, a researcher at Akamai, joked at one point that the internet could go down if Brundage spent too much time on his exams.

Early warning

Three times a year, several hundred of the techies who keep North America’s internet running gather to talk shop. Last June they met at a conference in Denver hosted by the North American Network Operators’ Group.

One major topic was a fast-growing and often legally dubious business known as residential proxy networks. Dozens of companies around the world run such networks, which are made up of consumer devices like phones, computers and video players.

These “res proxy” companies rent out access to internet connections on the devices to customers who want to look like they’re surfing the internet from a genuine home address.

That kind of access is useful for people who want privacy or for companies that want to masquerade as regular people to test out internet features for particular regions or scrape the web for data (say, a shopping price-comparison site). AI companies use the networks to get around blocks on automated traffic so they can gather large amounts of data to train their models.

Then there are the customers who want to hide their identity while engaging in ticket scalping, bank fraud, bomb threats, stalking, child exploitation, hacking or espionage.

Some device owners willingly sign up to be on these networks so they can make a few dollars a month, but most have no idea they’re connected to one.

At the Denver conference, Craig Labovitz was alarmed. The Nokia executive had been tracking the data flows of the internet’s infrastructure for years, and he knew the network’s data centers, chokepoints and design better than most.

Starting in January 2025, Nokia’s sensors had picked up a series of increasingly powerful cyberattacks coming from devices that hadn’t previously been considered dangerous. Called distributed denial of service, or DDoS, attacks, these were massive floods of junk internet data designed to knock websites offline by overwhelming the data pipes that connected them. These attacks are sometimes launched by extortionists or even business rivals seeking to sabotage computer networks.

Nokia saw hundreds of thousands of devices joining in these attacks. One unprecedented attack later in the year on internet service provider Cloudflare was “comparable to the combined populations of the UK, Germany, and Spain all simultaneously typing a website address and then hitting ‘enter’ at the same second,” Cloudflare said.

The network, which would become known as Kimwolf, seemed to be using residential proxy connections to launch its attacks, giving it the potential to do massive damage.

“The basic message was, ‘Be afraid,’” Labovitz remembers.

by Robert McMillan, Wall Street Journal |  Read more:
Image: via
[ed. Here's how to protect yourself.]

Thursday, April 2, 2026

Forecasting the Economic Effects of AI

There is widespread disagreement over the impact that AI will—or won’t—have on the U.S. economy: some prominent voices warn of a transformative upheaval and large-scale job losses, while others predict modest boosts to productivity at best. But there has been little work attempting to systematically understand expert views on the economic impacts of AI. What do top economists predict will be the economic consequences of AI—and why do they hold those beliefs?

In a new working paper, researchers from the Forecasting Research Institute and coauthors from the Federal Reserve Bank of Chicago, Yale School of Management, Stanford University, and the University of Pennsylvania present results from a large-scale forecasting exercise tracking the views of 69 leading economists, 52 AI industry and policy experts, 38 highly accurate forecasters, and 401 members of the general public. The survey ran from mid-October 2025 to the end of February 2026.

This post summarizes the key findings. For more details, refer to the full working paper.

by Forecasting Research Institute |  Read more:
Image: FRI

Wednesday, April 1, 2026

'Fragment Creation Event' - Starlink Satellite Breaks Apart

SpaceX’s Starlink division confirmed yesterday that it lost contact with a satellite on Sunday and is trying to locate space debris that might have been produced by… whatever happened there.

Starlink said there appeared to be “no new risk” to other space operations and did not use the word “explosion.” But it seems that something caused a Starlink broadband satellite to break apart into at least tens of pieces. LeoLabs, which operates a radar network that can track objects in low Earth orbit, said in an X post that it “detected a fragment creation event involving SpaceX Starlink 34343,” one of the 10,000 or so Starlink satellites in orbit.

“LeoLabs Global Radar Network immediately detected tens of objects in the vicinity of the satellite after the event, with a first pass over our radar site in the Azores, Portugal,” LeoLabs said. “Additional fragments may have been produced—analysis is ongoing.”

LeoLabs said the breakup was “likely caused by an internal energetic source rather than a collision with space debris or another object.” Because of “the low altitude of the event, fragments from this anomaly will likely de-orbit within a few weeks,” it said. [...]

LeoLabs said yesterday that the new event is similar to one from December 17, 2025, which also produced “tens of objects in the vicinity of the satellite” and appeared to be “caused by an internal energetic source” rather than a crash with another object. LeoLabs said it wants more information on the anomalies.

“These events illustrate the need for rapid characterization of anomalous events to enable clarity of the operating environment,” it said.

Starlink provided a few details shortly after the December 2025 incident, saying on December 18 that an “anomaly led to venting of the propulsion tank, a rapid decay in semi-major axis by about 4 km, and the release of a small number of trackable low relative velocity objects.” Starlink added that the satellite was “largely intact” but “tumbling,” and would reenter the Earth’s atmosphere and “fully demise” within weeks.

In December, Starlink seemed confident that it could prevent future anomalies. “Our engineers are rapidly working to [identify the] root cause and mitigate the source of the anomaly and are already in the process of deploying software to our vehicles that increases protections against this type of event,” Starlink said in the December 18 post.

We asked SpaceX today whether it has determined the cause of the December anomaly or the one on Sunday, and will update this article if we get a response.

by Jon Brodkin, Ars Technica |  Read more:
Image: Aurich Lawson | Getty Images

The AI Doc

 

(This will be a fully spoilerific overview. If you haven’t seen The AI Doc, I recommend seeing it; it is about as good as it could realistically have been, in most ways.)

Like many things, it only works because it is centrally real. The creator of the documentary clearly did get married and have a child, freak out about AI, ask questions of the right people out of worry about his son’s future, freak out even more now with actual existential risk for (simplified versions of) the right reasons, go on a quest to stop freaking out and get optimistic instead, find many of the right people for that and ask good non-technical questions, get somewhat fooled, listen to mundane safety complaints, seek out and get interviews with the top CEOs, try to tell himself he could ignore all of it, then decide not to end on a bunch of hopeful babies and instead have a call for action to help shape the future.

The title is correct. This is about ‘how I became an Apocaloptimist,’ and why he wanted to be that, as opposed to an argument for apocaloptimism being accurate. The larger Straussian message, contra Tyler Cowen, is not ‘the interventions are fake’ but that ‘so many choose to believe false things about AI, in order to feel that things will be okay.’

A lot of the editing choices, and the selections of what to intercut and clip, clearly come from an outsider without technical knowledge, trying to deal with their anxiety. Many of them would not have been my choices, especially the emphasis on weapons and physical destruction, but I think they work exactly because together they make it clear the whole thing is genuine.

Now there’s a story. It even won praise online as fair and good, from both those worried about existential risk and several of the accelerationist optimists, because it gave both sides what they most wanted. [...]

Yes, you can do that for both at once, because they want different things and also agree on quite a lot of true things. That is much more impactful than a diatribe.

We live in a world of spin. Daniel Roher is trying to navigate a world of spin, but his own earnestness shines through, and he makes excellent choices on who to interview. His being swayed by whoever is in front of him is a feature, not a bug, because he’s not trying to hide it. There are places where people are clearly trying to spin, or are making dumb points, and I appreciated him not trying to tell us which was which.

MIRI offers us a Twitter FAQ thread and a full website FAQ explaining their position in the context of the movie: no, this is not hype; yes, it is going to kill everyone if we keep building it; and no, our current safety techniques will not help with that. They call for an international treaty.

Are there those who think this was propaganda or one sided? Yes, of course, although they cannot agree on which angle it was trying to support.

Babies Are Awesome

The overarching personal journey is about Daniel having a son. The movie takes one very clear position, that we need to see taken more often, which is that getting married and having a family and babies and kids are all super awesome.

This turns into the first question he asks those he interviews. Would you have a child today, given the current state of AI? [...]

People Are Worried About AI Killing Everyone

The first set of interviews outlines the danger.

This is not a technical film. We get explanations that resonate with an ordinary dude.

We get Jeffrey Ladish explaining the basics of instrumental convergence, the idea that if you have a goal then power helps you achieve that goal and you cannot fetch the coffee if you’re dead. That it’s not that the AI will hate us, it’s that it will see us like we see ants, and if you want to put a highway where the anthill is that’s the ant’s problem.

We get Connor Leahy talking about how creating smarter and more capable things than us is not a safe thing to be doing, and emphasizing that you do not need further justification for that. We get Eliezer Yudkowsky saying that if you share a planet with much smarter beings that don’t care about you and want other things, you should not like your chances. We get Ajeya Cotra explaining additional things, and so on.

Aside from that, we don’t get any talk of the ‘alignment problem,’ and as far as I can remember the word alignment doesn’t even appear in the film.

It is hard for me to know how much the arguments resonate. I am very much not the target audience. Overall I felt they were treated fairly, and the arguments were both strong and highly sufficient to carry the day. Yes, obviously we are in a lot of trouble here.

Freak Out

Daniel’s response is, quite understandably and correctly, to freak out.

Then he asks, very explicitly, is there a way to be an optimist about this? Could he convince himself it will all work out?

by Zvi Mowshowitz, DWAtV |  Read more:

Tuesday, March 31, 2026

AI Weekly Update: Policy, Discourse and Alignment

People Really Hate AI

An ongoing series, this time from Will Manidis. I won’t try to excerpt but yes really the evidence for Americans being hostile to AI is overwhelming and the problem appears to be getting worse over time:
  • It is my belief — and I say this having worked in AI my entire career — that we should expect widespread asymmetric violence against AI infrastructure in the United States in the near future.
I do not say this happily. I am not rooting for it. I condemn violence in its fullest extent. The document that follows is not a manual for committing this kind of violence, but a warning of how easy it would be for dedicated groups to grind the American AI industry to a halt.
  • When you ask everyday Americans what they want done about AI, the consistency is almost eerie.
72% of voters want to slow down AI development. 82% do not trust technology executives to regulate AI—a level of distrust that puts AI CEOs somewhere between Congress and used-car dealers. 75% of Democrats and 75% of Republicans prefer a careful, considered approach to AI development. 75 and 75.
  • 80% of Americans told Axios that they prefer cautious AI implementation even if it means letting China get ahead. Our industry has been betting its future on a messianic fantasy of a coming war with China, and everyday Americans simply do not care. They say slow down anyway.
  • AI's constituency is the people who build it, the people who invest in it, and the people who earn enough to believe they'll come out ahead. These people are concentrated in literally a handful of zip codes. They are disproportionately male, young, college-educated, and high-income. They are, in demographic terms, niche.
  • No major technology in American history has entered its scaling phase—the phase where you deploy trillions of dollars into physical plant, into real communities, drawing real resources—with this demographic profile of opposition. AI is attempting to do something without precedent, and it's attempting to do so without noticing.
  • If you listen to conversation inside the industry, you wouldn't hear any of these numbers being discussed. The discourse is about scaling laws and token budgets and capability curves and the race to AGI and China. To the extent that anyone has articulated these concerns, the response is that amorphous benefits—productivity gains, curing cancer, transformative tech bio—will turn people around once they see undeniable evidence that something good is occurring here.
  • This assumption is backed by no data. The data shows the opposite. The more people learn about AI, the more they use it, the more they oppose its unchecked development. The trend lines are unambiguous.
  • The core issue is that the industry is caught in a contradiction it can't resolve. In order to raise the money necessary to fund massive training runs, investors and enterprise customers must hear the CEO stand on stage and explain how many human tasks the technology can now perform, how much cheaper it will be than the humans, how much better it will be by next quarter. This is the revenue case. It's what the market rewards. It's what every earnings call is built around.
The pitch to the public then requires that same CEO to promise that AI will create new jobs, that the transition will be managed, and that no one will be left behind. This is at best a political survival argument. It's what a continued social license to operate demands. 
The problem is that these two claims cannot coexist. The market pitch wins because that's where the money is. No one particularly cares what happens to the people left behind, and everyone can tell. 
  • The industry's response to the political opposition this generates is lobbying. In California, a bill to separate data center electricity rates from residential rates—to shield households from cost increases—was killed by industry lobbying. A separate bill requiring data centers to disclose their water usage was vetoed by the governor. What survived the legislative session was a requirement for regulators to produce a study on data center energy impacts, due in 2027. The findings will not be available in time for the 2026 session.
[ed. ... and much more. Well worth a read.]

by Will Manidis, X |  Read more:
***
Dean Ball offers one of the arguments requiring a response, [ed. re: pausing AI development] which is that the government is itself racing towards dangerous AI and if anything wants to take and centralize the power rather than stop it, and that’s worse, you know that that’s worse, right? So aren’t you better off not giving the government leverage, when the Secretary of War is trying to jawbone AI companies and plans to deploy AI to the military whether or not it is aligned, and is happy to put those words in official documents? Don’t pauses end up giving the government a lot more leverage in various ways?

Great question.

I’ll start with the long version, then do the short version.

There are at least two distinct classes of answer to that question, from people who want to pause or have the ability to pause. Call pausing Plan B, and going ahead as we currently are Plan A. And Plan T is the government messing everything up.

There is the attitude that all work on frontier AI is terrible, and anything that slows it down or stops it is good, because if we build it then everyone dies and they’re working to build it. It doesn’t matter if Anthropic is somewhat ‘more responsible,’ in this view, because there’s a 0-100 scale, xAI is a 0, OpenAI is a 2 and Anthropic is a 5, or whatever, and ‘good enough to not kill everyone’ is 100. [...]

I am not at this level of hopelessness about the default Plan A, but I do think the odds are against Plan A. So you very much want to get ready to go to Plan B, and to know if you need to go to Plan B. And yes, this comes with risk of Plan T, which is worse even than Plan A, but if you’re losing badly enough you need to accept some variance. You can only die once, and there are so many ways to die.

But yes, some ways of enabling the government are actively bad even when they are acting reasonably, and it’s even worse when you know they’re acting unreasonably, and at some level of unreasonableness or ill intent you would flip to simply wanting them to stay away and hope Plan A works.

The more confident one is in Plan A, the more you want to stick with Plan A. [...]

The short version:
1. You can be against the companies racing or being dumb.
2. And also against the government racing or being dumb.
3. Or you can support people doing dumb things that help with what matters, even if from other perspectives and their own interests that action is super dumb.
4. You can realize that there are some coordination problems where failing kills you.
5. You play to win the game. You play to your outs. If losing too badly, seek variance.
6. If the only hope is wise government or multilateral intervention, play to your out.
It is hard to say everything explicitly or concisely, but hopefully that will be good enough for those who care to fill in the gaps.

by Zvi Mowshowitz, DWAtV |  Read more:

[ed. See also: Every Debate on Pausing AI (ACX); 2023 Or, Why I'm Not a Doomer (Dean Ball - Hyperdimensional); and, It’s Time to Take Existential Risk from AI Seriously (Target Curve):]
***
"Dean offers another argument in the form of a thought experiment. He asks us to imagine a baby guaranteed to grow into an adult with enormous IQ, but raised by Aristotle in Ancient Greece. Would that baby eventually reinvent all of modern science? Dean says no, and I agree. Without access to accumulated knowledge, even extreme intelligence has limited raw material to work with. But this is not a good analogy for ASI.

Here’s my own attempt at making a similar thought experiment: Imagine trapping an alien mind, far more intelligent and capable than any human that has ever lived, inside a datacenter with access to a supercomputer that contains much (though not all) of humanity’s accumulated knowledge and works. Now freeze the rest of the world. While everyone else is standing still, this entity spawns thousands of copies of itself. Each copy is fine-tuned to pursue different approaches towards whatever goals it’s pursuing. The entity evaluates the results, selects the copies that are performing best, fine-tunes them even further, and repeats. With ten thousand copies each thinking at least ten times faster than a human, a single day of runtime amounts to nearly 300 years of nonstop, focused cognitive labor.

What would the world find when it unfroze? Could we predict this ahead of time and prepare adequate safeguards to ensure this entity remains under our control, long term? And what if we let it run not for a day, but for a month or a year?

Consider what humanity has built with our relatively slow, disorganized, frequently distracted collective intelligence. In under a century we’ve mapped genomes, split atoms, landed on the moon, and built a global communications network. This entity would have access to much of that same knowledge, the ability to process it orders of magnitude faster than we can, and a self-improvement loop that has no biological equivalent. It would be capable of things we can scarcely imagine. And if our safeguards conflicted with its goals, it might dedicate significant effort to making sure it could never be shut down or constrained again."
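The quoted back-of-envelope figure is easy to verify. A minimal sketch in Python, using only the excerpt's stated assumptions (10,000 copies, each thinking roughly ten times faster than a human, one day of wall-clock runtime):

```python
# Back-of-envelope check of the excerpt's arithmetic.
# Assumptions (from the quoted text): 10,000 copies, each ~10x human speed.
copies = 10_000
speedup = 10                    # cognitive speed relative to a human
wall_clock_days = 1             # one day of runtime

human_days = copies * speedup * wall_clock_days  # equivalent human-days of work
years = human_days / 365.25                      # convert to years

print(f"{human_days:,} human-days ≈ {years:.0f} years")  # ≈ 274 years
```

Which is indeed "nearly 300 years" of focused cognitive labor per day of runtime; run it for a month and the total passes 8,000 years.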

Monday, March 30, 2026

Situational Unawareness - The Rise of OSINT

In the leadup to the war with Iran—and in the harrowing days since—a dizzying number of tools like WorldView have appeared seemingly out of thin air, bringing the once niche hobbyist community of OSINT (short for “open source intelligence”) into the mainstream. With names like “World Monitor” or “The Big Brother V3.0,” these dashboards make “your own room feel like the CIA,” according to one observer. Though it sounds like the tradecraft of spies, at a basic level they simply visualize publicly available data: from conflict zone maps to air traffic to global market fluctuations. In theory, this information, when collected and aggregated in creative ways, can help the user make some surprising inferences.

That may be true for an actual intelligence analyst, but for most users, these snazzy dashboards cram a chaotic amount of information on screen, from which no sane person can draw logical conclusions. Instead of offering actionable intelligence, the illegible cacophony just leads to a type of hypercharged doomscrolling. “The amount of vibe coded ‘situation monitor’ slop being produced these days is absolutely astronomical,” one OSINT researcher complained. Another X user tried to impose some quality control by ranking several of these new dashboards in a post called “Monitoring the Situation Monitors.” For others, it’s a fantasy come to life: every person at the center of their own personal panopticon, the world stretched out before them as they omnisciently swivel their desk chair from cell to cell, screen to screen. [...]

It is tempting to think that anyone with an internet connection can pull a fast one on the world’s most powerful military or that you can bypass a presidential administration hostile to the very notion of an informed public simply by monitoring something as simple as airplane traffic. Even more seductive is the idea that everything is knowable. The digital age has blanketed the world in cameras and sensors, which generate dizzying quantities of data—in other words, noise. But in that vast noise, the OSINT thinking goes, are signals. You just have to know how to find and interpret those signals, and all will be revealed.

The OSINT revolution in many ways democratized the powerful capabilities to gather information traditionally associated with spy agencies and put them into the hands of intrepid citizens who have identified perpetrators of human rights abuses or exposed vast disinformation networks. These impressive investigations have elevated OSINT to a near-mythic status in certain corners of the internet. But the widespread misuse and abuse of these same methods have also spread conspiracy theories, incited internet mobs, and fostered the illusion that anyone can know anything—as long as you “monitor the situation.” [...]

Everyday people who may never have even heard the term “OSINT” have devised ingenious ways to help their communities. When Hurricane Beryl knocked out power for 2.2 million of his neighbors in 2024, one enterprising Texan opened his app for the beloved fast food chain Whataburger, which has a live map tracking the status of restaurant closures in his area—a near perfect proxy for the geographic distribution of power outages. Indeed, good OSINT abounds. “This is how real OSINT should be done,” declared The OSINT Newsletter, which described how Bellingcat “reconstructed the Minneapolis ICE shooting by syncing five different videos, mapping movements and analysing multiple camera angles,” adding: “No doxxing, no speculation—just sources and methods.” [...]

Take the Pentagon Pizza Report. In early January, after the U.S. military’s strike on Venezuela and capture of President Nicolás Maduro, one of its posts on X went viral. At 2:04 a.m. EST, as the Maduro raid was underway yet still unknown to the American public, the account posted a Google Maps screenshot with the caption: “Pizzato Pizza, a late night pizzeria nearby the Pentagon, has suddenly surged in traffic,” implying that the abnormally high traffic could be attributed to Defense Department staffers ordering food in anticipation of holing up in the Pentagon for a long night of handling a major international crisis the public had yet to learn about. For the Pentagon Pizza Report, the surge occurring around the time of the raid was a vindication of its method. A similar project called the Pentagon Pizza Index, which “tracks potential correlations between late-night pizza orders and military activity,” even developed an alert system called DOUGHCON, a play on DEFCON, the U.S. military’s multitiered “Defense Readiness Condition” alert system. [...]

Even the Pentagon Pizza Index, which created Polyglobe, a marriage of OSINT and prediction markets—an industry not known for having an abundance of scruples—has its own “Operational Disclaimer.” The notice informs users that the dashboard is “for informational and educational purposes only,” and reminds them that “pizza consumption patterns should not be used as a basis for financial, political, or strategic decisions.” Though I only found it after scrolling to the bottom of the page, where it sat partially obscured by a banner overlay and a button entreating me to “trade geopolitics on Polymarket.”

In some cases, irresponsible OSINT cowboying can have darker consequences. After the Boston bombing in 2013, armchair investigators pored over videos and photos purportedly of the incident, swapping theories in online public forums. Within days, these OSINT cowboys thought they had their guy. When that suspect did not pan out, they settled on another. Every time the internet sleuths named a new “suspect”—who were overwhelmingly people of color—abuse inevitably followed. A similar pattern occurred following the January 6 Capitol riot and Trump’s assassination attempt in July 2024. [...]

These problems have only intensified as vibe coding makes it easier than ever to deploy trackers and dashboards that look sharp from a design perspective and therefore authoritative, as people tend to believe visual content that looks good. Incentives to feed the insatiable desire to “monitor the situation” have only grown more entrenched now that prediction markets are transforming global conflict into a competitive spectator sport, one in which the advantage goes to the player with the most reliable, real-time information.

Apophenia is the common tendency for people to detect patterns or connections in otherwise random stimuli. People see the face of Jesus in a piece of toast or a man on the moon because the human brain craves order and familiarity as it searches for meaning in a meaningless world. It is natural and understandable to try and establish some semblance of control in the entropy, even if that control is only an illusion. But the hard truth is no amount of public data nor hours logged monitoring the situation will give you the power to predict the future. This is as true in Tehran as it is in Kyiv or Gaza.

by Tyler McBrien, The Baffler | Read more:
Image: Nick Sheeran

She Left a Silicon Valley VC to Solve a Problem Left Untouched for 88 years

As Women’s History Month comes to a close, here’s a little bit of trivia for you: One of the premier patents in bras hadn’t been touched or improved upon in 88 years. That was until Bree McKeen went after it. 

[ed. I'd say this problem has been touched quite a bit in 88 years. But, anyway...]

In 1931, inventor Helene Pons was granted a U.S. patent for a brassiere featuring an open-ended wire loop that encircled the bottom and sides of each breast. That uncomfortable, unyielding design had largely been left unchanged for nearly a century—and remains the dominant style in the global bra market, which is expected to reach nearly $60 billion by 2032.

Nobody had filed a patent for an underwire replacement until McKeen, founder of Evelyn & Bobbie, left her Silicon Valley job to try to fix a personal problem. At the end of long days doing due diligence on consumer health care companies at a boutique venture capital firm, she would come home with divots on her shoulders and chronic tension headaches after being hunched over her desk for hours on end.

While the work was demanding, the culprit wasn’t her workload. It was her bra.

But McKeen had zero experience in fashion. She studied medical anthropology and earned her MBA from Stanford. The turning point for her, though, came in a physiologist’s office, where McKeen had been working on her posture, along with regular barre training.

“He’s like, your posture looks great,’” McKeen recalled to Fortune. “And I kind of blurt it out: When I stand like this, I get pain from my bra.”

The physiologist explained it was a neuromuscular feedback loop, or the body’s automatic response to pain, like a pebble in a shoe.

“Here I am doing all this work to carry myself with authority and poise, and my bra, I find out, is totally doing the opposite,” McKeen said. “You don’t have to tell your body to curl around the pain. It just does.”

She had zero fashion experience. She filed a patent anyway

That realization kickstarted McKeen on a major career switch, costing her a career in VC—but earning her one of the most quietly disruptive brands in women’s fashion (Evelyn & Bobbie is now the fastest-growing brand at Nordstrom). She moved to Portland, home to Nike, Adidas, and Columbia, for inspiration from major brands and proximity to new connections.

She started tinkering with prototypes in her garage and immediately filed for intellectual property rights, a move based on her VC experience that a woman-founded company would need protected IP to get funded.

McKeen got her first utility patent (the harder, more defensible kind that covers how something works, not just how it looks) within a year. The brand declined to disclose how much funding it has raised, but now holds 16 international patents protecting its proprietary EB Core technology, which mimics the support and structure of a wire without causing discomfort.

To put into perspective how critical it was to protect her intellectual property, only 12% of patents in the U.S. were awarded to women, according to the U.S. Patent and Trademark Office as of 2019. McKeen has six of them, protecting the unique 3D-sling technology in her bras.

The brand McKeen built, Evelyn & Bobbie, was named for her maternal grandmother and her aunt, and operates on a simple premise: a bra that fits well and feels good all day.

“I wanted a bra that made me look better in my clothes,” McKeen said—an inspiration reminiscent of how Spanx founder Sara Blakely started her now-$1.2 billion shapewear empire. “Wire-free bras give you that mono boob—not a nice silhouette. They make your clothes look frumpy. I wanted nice lift, separation, a beautiful silhouette. I could not find that bra. How outrageous, really.”

The average U.S. bra size is 34F. Most brands design for something much smaller

With major brands like Victoria’s Secret, Aerie, Third Love, Savage X Fenty, and countless others on the market, Evelyn & Bobbie is undoubtedly in a crowded, competitive space. But as all women know, not all bras are comfortable to wear, especially for extended periods.

What sets Evelyn & Bobbie apart is their approach to sizing. McKeen designs with 270 fit models across seven easy sizes, grading each style individually rather than scaling up from a single sample.

“Most bra companies have like one or two fit models,” she said. “They’ll make a 34B and just scale it up, which is why it doesn’t fit well in larger sizes.”

The average bra size in the U.S., McKeen pointed out, is a 34F, a stat that’s surprising to most people—including initial investors she once had to convince that comfort was even a relevant selling point.

“I had many investor meetings where they were 60-minute meetings, and 50 minutes of it was me trying to convince them that comfort was relevant,” she said. “I mean, Victoria’s Secret kind of figured it out, right? Like it’s just sexy, isn’t that what women want?” [...]

With a luxury product comes a luxury price point: Evelyn & Bobbie bras retail for $98 each. But that price tag could be worth avoiding chronic pain for some women.

by Sydney Lake, Fortune |  Read more:
Image: Evelyn & Bobbie
[ed. An entire article about bras but mostly about protecting intellectual property rights (16 international patents!), never fully explaining what the new technology actually is, other than it uses more fit models to ensure proper sizing. FYI: according to E&B's website, EB Core uses "bonded internal structures and a soft, adaptive material, that stretches, molds, and supports—delivering wire-free lift." Well, guess that explains it.]

Lost In Space

No one is happy with NASA’s new idea for private space stations (Ars Technica):

"Most elements of a major NASA event this week that laid out spaceflight plans for the coming decade were well received: a Moon base, a focus on less talk and more action, and working with industry to streamline regulations so increased innovation can propel the United States further into space.

However, one aspect of this event, named Ignition, has begun to run into serious turbulence. It involves NASA’s attempt to navigate a difficult issue with no clear solution: finding a commercial replacement for the aging International Space Station.

During the Ignition event on Tuesday, NASA leaders had blunt words for the future of commercial activity in low-Earth orbit. Essentially, they are not confident in the viability of a commercial marketplace for humans there, and the agency’s plan to work with private companies to develop independent space stations does not appear to be headed toward success. Plenty of people in the industry share these concerns, but NASA officials have not expressed them out loud before.

“We’re on a path that’s not leading us where we thought it would,” said Dana Weigel, manager of the International Space Station program for NASA.

NASA proposed a new solution that would bind the private companies more closely to NASA, requiring them not to build free-flying space stations but rather to work directly with the space agency on modules that would, at least initially, dock with the International Space Station. This change was not well-received."

***
[ed. See also: SpaceX offers details on orbital data center satellites (Space News):]

"At a March 21 event in Austin, Texas, Musk outlined an initiative by SpaceX, along with automaker Tesla and artificial intelligence company xAI — also run by Musk — to massively increase production of high-end computer chips needed for both terrestrial and space applications.

The Terafab project seeks to produce one terawatt of processors annually, which Musk said is 50 times the combined production rate of all manufacturers of chips used today in advanced applications such as AI.

Those processors, he said, are the “missing ingredient” in his plans to deploy a large constellation of satellites to serve as an orbital data center.

“We either build the Terafab or we don’t have the chips, and we need the chips, so we’re going to build the Terafab,” he said.

SpaceX filed an application with the Federal Communications Commission in late January for a constellation of up to one million satellites that would be used as an orbital data center for AI applications. The company provided few technical details about the constellation, including the size of the satellites, in that application."

Sunday, March 29, 2026

The Last Useful Man

About halfway through Mission: Impossible – The Final Reckoning, Tom Cruise goes for a run on a treadmill. The treadmill is on the USS Ohio, a submarine manned exclusively by implausibly attractive people. One of those people is not who they seem: a cultist, radicalized by the Entity, the film’s AI antagonist. The cultist sneaks up behind Cruise and lunges with a knife. Things look dicey for a moment — until Cruise gains some distance and kicks him repeatedly in the head. While doing so, he imparts a few words of wisdom: “You spend too much time on the internet.”

What divides the heroes and villains in Final Reckoning is simple: the villains have to Google things, and the heroes do not. There are three bad guys, more or less. First, the Entity, a rogue AI halfway through its plan for global domination. Second, Gabriel, the Entity’s meat puppet. Third, a gang of surprisingly likable Russians who take Cruise’s team hostage in a house in Alaska. What unites the villains isn’t malice so much as it is uselessness. I mean that precisely. They are often effective, even successful. But never useful. [...]

This division between characters with embodied knowledge and those without runs through all of Cruise’s recent work. His own impossible mission is to teach the value of physical competence: not just knowing things, but knowing how to do them. In Final Reckoning, this idea finds its clearest form. [...]

Like Forster, Cruise and his long-time collaborator Christopher McQuarrie invent machines to dramatize the age they live in. Forster gave us the Machine; McQuarrie, the Entity. But unlike Forster, their imagination of technology is not apocalyptic but diagnostic — they aren’t warning us of the machine age so much as asking what it demands of us, and what it reveals.

This brings us to what looks, at first glance, like a paradox: How does a franchise so lovingly built on disguises, gadgets, and inventions of all kinds — from the eye-tracking projector that gets Cruise into the Kremlin to the single suction glove that lets him cling to the Burj Khalifa — end with a villain made of pure technology?

If you asked Cruise, his answer would be simple: technology is good when it roots you in your body and bad when it lets you forget you have one. That’s why Final Reckoning, for all its AI villainy and suspicion of the terminally-online, still treats technology with a near-Romantic sensibility. Hand-soldered pen drives, aging aircraft carriers, and vintage biplanes carry Cruise and his team on their mission to save the world. At times subtlety disappears altogether; the film’s most inviting location is a candle-lit Arctic hideout filled with analogue comforts: old books and gramophones, telescopes and soldering tools.

The same ideas return — turned up to eleven — in Cruise and McQuarrie’s two other collaborations this decade outside the Mission: Impossible franchise. The first, Edge of Tomorrow, in which Cruise relives the same day on repeat until he generates enough embodied knowledge to defeat an autonomous alien race, is, even for the purposes of this essay, too on the nose, so I’ll focus instead on Top Gun: Maverick.

The film opens with Cruise test-piloting an experimental stealth aircraft in a last-ditch attempt to save the program from cancellation by the “drone ranger,” an admiral who wants the budget for his autonomous fleet. For the program to survive, Cruise needs to hit Mach 10: a speed no vehicle has ever reached. As the team watches on, he delivers the impossible. Gauzy wisps of supersonic air stream across the cockpit windows as Maverick stares out into the black of space. He whispers softly to his dead best friend, “Talk to me, Goose.”

Soon afterwards, Maverick is sent back to Top Gun to train a new generation of pilots. He begins his first lesson holding up the flight manual for the F-18, which makes the Riverside Chaucer look like a novella, before throwing it in the bin. “I assume you know this book inside and out. So does your enemy.” What matters instead is the knowledge that can’t be written down: the things his students already know by instinct, but cannot yet express. “Today we’ll start with only what you think you know.”

The quest to “know more than we can tell,” as Michael Polanyi put it, drives the rest of the film. The pilots even have their own version of the phrase, a near-religious catechism recited at almost every decisive moment: “Don’t think. Just do.”

Beyond the screen, the same principle applies. In the Mission: Impossible franchise, filming begins with no plot or script, only a commitment to figuring it out in the process. It’s most evident in each film’s tentpole action sequences, where the line between Cruise the actor and Cruise the stuntman blurs beyond recognition.

The art critic Robert Hughes once wrote of his love for “the spectacle of skill” — the thrill of watching an expert at work, whatever the discipline. Nowhere is this more evident than in Cruise’s increasingly daring plane sequences. In Mission: Impossible – Rogue Nation, Cruise clings to a real Airbus A400M as it lifts off from an airfield in Lincolnshire. He sprints across the field, in that inimitable Tom Cruise style, mounts the wing with practiced ease, and seats himself by the cargo door. The plane taxis. So far, so cool. Then it lifts off. The perfect hair vanishes, blown backwards and forwards, alternating second by second between old skeleton and boy with bowl cut. His clothes are shapeless and billowing, pulled off him by the force of the air.

This is no country for sprezzatura, nor the embodiment preached by the wellness industry with its vocabulary of “balance” and “equilibrium.” Here, we are meant to feel the effort. To know yourself is to know your limits, and so push your body to the edge of failure. When they are about to perform stunts, Cruise often briefs his team with an unusual mantra: “Don’t be safe, be competent.”

At the end of Final Reckoning, Cruise plummets through the sky as his parachute burns to cinders above him. To film it, the stunt team soaked a parachute in flammable liquid, flew him to altitude in a helicopter, and pushed him out as it ignited. He did this 19 times. When he asked to go again, the stunt coordinator told him there were no parachutes left. This was a lie. McQuarrie was more direct: “You’re done. Do not anger the gods.”

It’s interesting to see this return to embodiment and strange to find myself drawn to it. Like many default clever people, I’d long paid lip service to Merleau-Ponty and his ilk while living as a dualist; my brain was the moneymaker, my body just along for the ride. It was only after having children that I began to understand what it meant to inhabit a body rather than simply use one.

In an essay for Granta earlier this year, the writer Saba Sams contrasted her son’s love of leaping from benches and walls with her own unease: “For them, the body is not a constraint, is not a ticking clock, is not something to be moulded or hidden. The body is the window to movement, and movement is a window to joy.”

Sams captures something larger. This renewed fascination with embodiment isn’t spontaneous; it’s a reaction to technologies so powerful and frictionless they’re impossible to ignore. Even the most grounded among us now move through the world not through our bodies but through screens, which is why so many make the negative case for technology, urging us, thankfully without a Cruise-style kick to the head, to spend less time on the internet.

What Cruise gives us is the positive case: not just resistance to disembodiment but a reminder of what is beautiful about being physical in the first place. The skilled things bodies can do are inherently satisfying. They can be thrilling, reassuring, even a little terrifying. But, as David Foster Wallace put it in his essay on Roger Federer:
The human beauty we’re talking about here is beauty of a particular type; it might be called kinetic beauty. Its power and appeal are universal. It has nothing to do with sex or cultural norms. What it seems to have to do with, really, is human beings’ reconciliation with the fact of having a body.
That’s the mission, if we choose to accept it. The target is not the recent bugbear of AI, but the gentler conditions of modernity. When we use Google Maps instead of a printed atlas, or when CGI sells a stunt instead of the performers doing it themselves, something is lost. It’s why the focus on AI can sometimes be misguided. It’s not so much a revolution as the next step on the ladder of disembodiment: another in a long line of technologies to make humans a little less self-reliant. Why learn, if you can ask?

In the final biplane sequence, we watch Cruise commandeer a plane, fly it to another, board that plane midair, and take control of it — a feat so exhausting it beggars belief. Gabriel, the villain, in order to survive his defeat, needs only do something a hundredth as difficult: jump from the plane and deploy a parachute. He laughs. This is easy. But he doesn’t know the complexities of leaving a biplane with a parachute — the correct moment to release, the parts to steer clear of. He’s never bothered to learn. He frees himself, clips the rudder, cracks his skull open, and dies.

Here we see the real villain: not intelligence, but convenience. The mission so often feels impossible because we keep trying to do things without effort. Cruise’s answer is simple: Stop. Remember your body. Sometimes, it’s better to take the hard way.

Final Reckoning’s closing scene presents us with two intelligences and two bodies. One is Cruise, a 62-year-old body who we’ve seen, for the last two hours, run fast, dive deep, and hang from planes. The other is the Entity, trapped in a glorified USB stick: a golden nugget incapable of anything other than being flushed down a toilet.

One still moves. The other never could.

by Aled Maclean-Jones, The Metropolitan Review | Read more:
Image: Getty

The 49MB Web Page

If actively distracting the readers of your own website were an Olympic sport, news publications would take gold every time.

I went to the New York Times to glance at four headlines and was greeted with 422 network requests and 49 megabytes of data. It took two minutes for the page to settle. And then you wonder why every sane tech person installs an adblocker on every device their loved ones own.
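You can reproduce this kind of measurement yourself: save a page load as a HAR file from your browser’s DevTools Network tab, then total it up with a few lines of Python. This is a sketch — the file name is hypothetical, and the `_transferSize` field is a Chrome DevTools extension to the HAR spec rather than part of the standard format.

```python
import json

def summarize_har(path: str) -> tuple[int, float]:
    """Return (request count, megabytes transferred) for a DevTools HAR export."""
    with open(path) as f:
        entries = json.load(f)["log"]["entries"]
    # `_transferSize` is Chrome-specific; entries without it count as 0 bytes.
    total = sum(e["response"].get("_transferSize", 0) for e in entries)
    return len(entries), total / 1_000_000

# Hypothetical usage with a saved capture:
# requests, mb = summarize_har("nytimes.har")
# print(f"{requests} requests, {mb:.0f} MB transferred")
```

The same numbers are visible in the DevTools status bar, but a script makes it easy to compare captures across sites or over time.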

It is the same story across top publishers today.

To truly wrap your head around the phenomenon of a 49 MB web page, let's quickly travel back a few decades. A single load of this page outweighs Windows 95, which shipped on 28 floppy disks (roughly 40 MB). The OS that ran the world fits comfortably inside one modern page load. In 2006, the iPod reigned supreme and digital music was precious: a standard high-quality MP3 at 192 kbps took up around 4 to 5 MB, so this single page represents roughly 10 to 12 full-length songs. I essentially downloaded an entire album's worth of data just to read a few paragraphs of text. According to the International Telecommunication Union, the global average broadband speed back then was about 1.5 Mbps; at that rate, your browser would keep loading this monstrosity for over four minutes, enough time for you to walk away and make a cup of coffee.
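The comparisons above are easy to check with back-of-the-envelope arithmetic. The page size and the 2006 broadband figure come from the text; the floppy capacity and song size are the usual ballpark numbers.

```python
PAGE_MB = 49                  # measured page weight, from the text above
AVG_2006_MBPS = 1.5           # ITU global average broadband speed, ~2006
WIN95_MB = 28 * 1.44          # Windows 95 on 28 floppy disks of 1.44 MB each
MP3_MB = 4.6                  # a 192 kbps MP3, roughly 4-5 MB

# Megabytes -> megabits, divided by megabits per second.
download_seconds = PAGE_MB * 8 / AVG_2006_MBPS

print(f"49 MB at 1.5 Mbps: {download_seconds / 60:.1f} minutes")  # ~4.4 minutes
print(f"Windows 95 on floppies: {WIN95_MB:.1f} MB")               # ~40.3 MB
print(f"Albums-worth of songs: {PAGE_MB / MP3_MB:.0f}")           # ~11 songs
```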

If hardware has improved so much over the last 20 years, has the modern framework/ad-tech stack completely negated that progress with abstraction and poorly architected bloat?

CPU throttles, tracking and privacy nightmares


For the example above, a cursory look at the network waterfall for a single article load reveals a sprawling, unregulated programmatic ad auction happening entirely in the client's browser. Before the user finishes reading the headline, the browser is forced to process dozens of concurrent bidding requests to exchanges like Rubicon Project (fastlane.json) and Amazon Ad Systems. While these requests are asynchronous over the network, their payloads are incredibly hostile to the browser's main thread: to facilitate the auction, the browser must download, parse, and compile megabytes of JavaScript. As a publisher, you shouldn't burn the reader's compute cycles calculating ad yields before rendering the actual journalism.

1. The user requests text.
2. The browser downloads 5MB of tracking JS.
3. A silent auction happens in the background, taxing the mobile CPU.
4. The winning bidder injects a carefully selected interstitial ad you didn't ask for.
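The four steps above can be sketched as a toy simulation. Everything here is illustrative — the exchange names, latencies, and CPM values are made up, and real client-side header bidding runs as JavaScript (typically via libraries such as Prebid.js) issuing actual HTTP requests — but the shape of the flow is the same: fire all bids concurrently, wait, take the highest price.

```python
import asyncio
import random

# Hypothetical exchange names; real auctions hit a dozen or more endpoints.
EXCHANGES = ["rubicon", "amazon", "index", "openx"]

async def request_bid(exchange: str) -> tuple[str, float]:
    # Stand-in for an HTTP bid request. In a browser, each response payload
    # also costs main-thread time to parse and evaluate.
    await asyncio.sleep(random.uniform(0.05, 0.3))   # simulated network latency
    return exchange, random.uniform(0.10, 2.50)      # simulated CPM in dollars

async def run_auction() -> tuple[str, float]:
    # All bids fire concurrently, before any article text is rendered.
    bids = await asyncio.gather(*(request_bid(x) for x in EXCHANGES))
    return max(bids, key=lambda b: b[1])             # highest CPM wins

winner, cpm = asyncio.run(run_auction())
print(f"winning bidder: {winner} at ${cpm:.2f} CPM")
```

The point of the sketch is the structure, not the numbers: the reader's device does all of this work, for free, before a word of journalism appears.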


Beyond the sheer weight of the programmatic auction, the frequency of behavioral surveillance was surprising. User monitoring runs in parallel with a relentless barrage of POST beacons firing to first-party tracking endpoints (a.et.nytimes.com/track). In the background, invisible pixel drops and redirects to doubleclick.net and casalemedia help stitch the user's cross-site identity together across different ad networks.

When you open a website on your phone, it's like participating in a high-frequency financial trading market. That heat you feel on the back of your phone? The sudden whirring of fans on your laptop? These tiny scripts, taken together, are a big part of it, along with the battery drain they cause.

Ironically, this surveillance apparatus initializes alongside requests fetching purr.nytimes.com/tcf, which I can only assume is Europe's IAB Transparency and Consent Framework. They named the consent framework endpoint purr. A cat purring while it rifles through your pockets.

So therein lies the paradox of modern news UX. The mandatory cookie banners you are forced to click are merely legal shields deployed to protect the publisher while they happily mine your data in the background. But that's enough about NYT.

The Economics of Hostile Architecture

Publishers aren't evil, but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. The modern ad industry is slowly decoupling the creator from the advertiser. Publishers weaponize the UI because they think they have to.

Viewability and time-on-page are the metrics that matter these days, and every hostile UX decision originates from that single fact. The longer you're trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product. No wonder engineers and designers optimize every UX decision for it, while you, the reader, are forced to interact, wait, click, and scroll multiple times. Not only is it a step in the wrong direction, it is adversarial by design.

The reader is not respected enough by the software. The publisher is held hostage by incentives from an auction system that not only encourages but also rewards dark patterns.

And almost all modern news websites are guilty of some variation of anti-user patterns. As a reminder, the NNgroup defines interaction cost as the sum of mental and physical efforts a user must exert to reach their goal. In the physical world, hostile architecture refers to a park bench with spikes that prevent people from sleeping. In the digital world, we can call it a system carefully engineered to extract metrics at the expense of human cognitive load. Let's also cover some popular user-hostile design choices that have gone mainstream.

The Pre-Read Ambush


Selected GDPR examples. The advantages and disadvantages of these have been discussed in tech circles ever since they launched.

When a user clicks a news link, they have a singular purpose: read the headline and get through the text. The problem is that upon page load, users are greeted by what I call Z-Index Warfare. The GDPR/cookie banners occupy the bottom 30% of the viewport. The user scrolls once and witnesses a "Subscribe to our Newsletter" modal. Meanwhile, the browser has started hammering them with allow-notification prompts.

The user must perform visual triage, identify the close icons (which are deliberately given low contrast), and execute side quests just to access the 5KB of text they came for. Let's look at how all these anti-patterns combine into a single, user-hostile experience.

by Shubham Bose, Thatshubham |  Read more:
Images: uncredited

Saturday, March 28, 2026

Welcome to a Multidimensional Economic Disaster

The global economy has become dependent on the AI industry. Trillions of dollars are being invested into the technology and the infrastructure it relies on; in the final months of 2025, functionally all economic growth in the United States came from AI investments. This would be risky even in ideal conditions. And we are very far from ideal conditions.

Much of the AI supply chain—chips, data centers, combustion turbines, and so on—relies on key materials that are produced in or transported through just a few places on Earth, with little overlap. In particular, the industry is highly dependent on the Middle East, which has been destabilized by the war in Iran. A global energy shock seems all but certain to come soon—the kind where even the best-case scenario is a disaster. The war could grind the AI build-out to a halt. This would be devastating for the tech firms that have issued historic amounts of debt to race against their highly leveraged competitors, and it would be devastating for the private lenders and banks that have been buying up that debt in the hope of ever bigger returns.

For the better part of the past year, Wall Street analysts and tech-industry observers have fretted publicly about an AI bubble. The fear is that too much money is coming in too fast and that generative-AI companies still have not offered anything close to a viable business model. If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash.

Until recently, that kind of crash felt hypothetical; today, it feels plausible and, to some, almost inevitable. “What’s unusual about this, unlike commercial real estate during the global financial crisis,” Paul Kedrosky, an investor and financial consultant, told us, “is all of these interlocking points of fragility.”

Perhaps the clearest examples are advanced memory and training chips, which are among the most important—and are by far the most expensive—components of training any AI model. Currently, most of them are produced by two companies in South Korea and one in Taiwan. These countries, in turn, get a large majority of their crude oil and much of their liquefied natural gas—which help fuel semiconductor manufacturing—from the Persian Gulf. The chip companies also require helium, sulfur, and bromine—three key inputs to silicon wafers—largely sourced from the region. In addition, Saudi Arabia, Qatar, the United Arab Emirates, and other regional petrostates have become key investors in the American AI firms that purchase most of those chips.

Because of the war in Iran, the Strait of Hormuz is functionally closed to most shipping vessels, stranding one-fifth of the world’s exports of natural gas, one-third of the world’s exports of crude oil, and significant quantities of the planet’s exportable fertilizer, helium, and sulfur. Meanwhile, Iran and Israel have begun bombing much of the fossil-fuel infrastructure in the region, which could take many years to replace. In only a month of war, the price of Brent crude—a global oil benchmark—has jumped by 40 percent and could more than double, liquefied-natural-gas prices are soaring in Europe and Asia, and helium spot prices have already doubled. The strait is “critical to basically every aspect of the global economy,” Sam Winter-Levy, a technology and national-security researcher at the Carnegie Endowment for International Peace, told us. “The AI supply chain is not insulated.”

The situation could quickly deteriorate from here. A helium crunch could trigger a shortage of AI chips or cause chip prices to rise. AI companies need ever more advanced chips to fill their data centers—at higher prices, the massive server farms, already hurting from elevated energy costs caused by the war, would have almost no hope of becoming profitable. Without these chips, new data centers would not be built or would sit empty. Astronomical tech valuations, and in turn the entire stock market, could collapse.

One industry’s precarious position isn’t usually everyone’s problem. Unfortunately, AI is different. The biggest data-center players, known as hyperscalers, are among the biggest corporations in the history of capitalism; they include Microsoft, Google, Meta, and Amazon. But even they will be pressed by collectively spending nearly $700 billion on AI in a single year. In order to get the money for these unprecedented projects, data-center providers are beginning to take on colossal amounts of debt. Some of this is done through creative deals with private-equity firms including Blackstone, BlackRock, and Blue Owl Capital—which themselves operate as sort of shadow banks that, since the most recent financial crisis, have arguably become as powerful and as influential as Bear Stearns and Lehman Brothers were prior to 2008. Endowments, pensions, insurance funds, and other major institutions all trust private equity to invest their money.

For a while, it seemed like every time Google or Microsoft announced more data-center investments, their stock prices rose. Now the opposite occurs: The hyperscalers are spending far more, but investors have started to notice that they are not generating anything near the revenue they need to. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. And the $121 billion of debt that hyperscalers issued in 2025, four times more than what they averaged for years prior, is expected to grow dramatically.

All of the major players in this investment ecosystem are vulnerable. Private-equity firms are being squeezed on both ends by generative AI: During the coronavirus pandemic, they bought up software companies, which are now plummeting in value because AI is expected to eat their lunch. Meanwhile, private equity’s new investment strategy, data centers, is also falling apart because of AI. Blackstone, Blue Owl, and the like are sinking huge sums into data-center construction with the assumption that lease payments from tech companies will pay for their debt. In order to pay for their investments, private-equity companies raised money from major financial institutions—but now the viability of those lease payments is coming into question as the hyperscalers’ cash flow is strained. “There’s a reason to think we’re seeing some of the same 2008 dynamics now,” Brad Lipton, a former senior adviser at the Consumer Financial Protection Bureau and now the director of corporate power and financial regulation at the Roosevelt Institute, told us. “Everyone’s getting tied up together. Banks are lending money to private credit, which in turn lends it elsewhere. That amps up the risk.” [...]

The war in Iran affects data-center finances as well. Should energy prices continue to skyrocket, so will the cost of this already very expensive computing equipment, because it needs tremendous amounts of energy to manufacture and operate. And the war has exposed physical risks to these buildings. Janet Egan, a senior fellow at the Center for a New American Security, described data centers to us as “large, juicy targets.” It is impossible to hide these facilities, which can cover 1 million square feet. Earlier this month, Iran bombed Amazon data centers in the UAE and Bahrain. American hyperscalers had been planning to build far more data centers in the region, because the Trump administration and the AI industry have sought funding from Saudi Arabia, the UAE, Qatar, and Oman. Now there’s a two-way strain on those relationships. The physical security of the data centers is more precarious, and the conflict is damaging the economic health of the petrostates, thereby jeopardizing a major source of further investment in American AI firms. The Trump administration “staked a lot on the Gulf as their close AI partner, and now the war that they’ve launched poses a huge threat to the viability of the Gulf as that AI partner,” Winter-Levy said.

Plus, “what’s to prevent Iran or a proxy group, or another maligned actor, from tomorrow launching an armed drone against a data center in Northern Virginia?” Chip Usher, the senior director for intelligence at the Special Competitive Studies Project, a national-security and AI think tank, told us. “It could happen. Our defenses are not adequate.” State-sponsored cyberattacks of the variety Iran is known for could also knock a data center offline. You can build all manner of defenses—reinforced concrete, drone-interception systems—but doing so adds cost and time to already costly and slow construction. [...]

Even if Iran and the Strait of Hormuz don’t directly trigger an AI-driven financial crisis, the odds are decent that another vector could. (Remember tariffs?) Energy prices could stay elevated for years, because the targeted fossil-fuel facilities in the Persian Gulf will take a long time to restore. As the U.S. directs huge amounts of attention and military resources toward Iran, it’s easy to imagine China launching an invasion of Taiwan—a scenario that terrifies Silicon Valley, because it would halt the production of chips needed to train frontier models. That’s not even considering the single Dutch company that makes the high-tech lithography machines used to print virtually all AI chips, or the German company that makes the mirrors used in those machines. “There are too many ways for it to fail for it not to fail,” Kedrosky said of the AI industry’s web of risk. “All you can say for sure is this is a fragile and overdetermined system that must break, so it will.”

by Matteo Wong and Charlie Warzel, The Atlantic | Read more:
Image: An Amazon Web Services data center in Manassas, Virginia (Nathan Howard / Bloomberg / Getty)
[ed. See also: