Friday, April 10, 2026

Joke of the Day: Prediction Markets

White House staff were warned last month not to use insider information to place bets on prediction markets.

The email was sent to staff on 24 March, a day after US President Donald Trump announced a five-day pause on his threat to attack Iranian power plants and energy infrastructure.

It referred to press reports that raised concerns over government officials using non-public information to place bets on platforms like Kalshi or Polymarket.

White House spokesman Davis Ingle told the BBC that "any implication that Administration officials are engaged in such activity without evidence is baseless and irresponsible reporting."

The Wall Street Journal first reported the email on Thursday.

Ingle also said that all federal employees are subject to government ethics guidelines that prohibit the use of insider information for financial gain.

"The only special interest that will ever guide President Trump is the best interest of the American people," he added.

The BBC has contacted Kalshi and Polymarket for comment.

by Osmond Chia, BBC |  Read more:
Image: via
[ed. That's some weapons-grade PR spin right there. Of course they know who placed the bets. Despite current directors, the FBI and CIA aren't stupid. They just don't want it to be too obvious.]

Claude Mythos: The System Card

Mythos self-portrait, as imagined by Opus based on the System Card

Claude Mythos is different.

This is the first model since GPT-2 that is, at least at first, not being released for public use at all.

With GPT-2 the delay was due to a general precautionary principle. OpenAI did not know what they had, or what effect on-demand text would have on various systems. It sounds funny now, since GPT-2 was harmless, but at the time the concern was highly reasonable.

The decision not to release Claude Mythos is not about an amorphous fear. If given to anyone with a credit card, Claude Mythos would give attackers a cornucopia of zero-day exploits for essentially all the software on Earth, including every major operating system and browser. It would be chaos.

Or, in theory, if Anthropic had chosen to do so, it could have used those exploits. Great power was on offer, and that power was refused. This does not happen often.

Instead Anthropic has created Project Glasswing. Mythos is being given only to cybersecurity firms, so they can patch the world’s most important software. Based on how that goes, we can then decide if and when it will become reasonable to give access to a broader range of people.

Who counts as this ‘we’ is suddenly quite the interesting question. The government picked quite the month to decide to try and disentangle itself from all Anthropic products. Anthropic says it is attempting to work with the government, so that they too can fix their own systems before it is too late. Hopefully that can happen. I also hope that there isn’t an attempt by the government to hijack these capabilities to use them in an offensive capacity. That would be a very serious mistake.

Am I taking Anthropic’s word for all this? Yes, I am taking Anthropic’s word for all of this. They’ve given us sufficient public demonstrations, identifying numerous bugs, and they’ve gotten the cooperation of the world’s biggest tech and cybersecurity firms; if it weren’t real, the whole thing would quickly and obviously backfire. I think it is safe to assume that all of this is legitimate.

by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Image: Mythos self portrait (Anthropic)
[ed. See also: AI #163: Mythos Quest]

Thursday, April 9, 2026

Hot Ticket

Jeffrey Epstein’s web of influence stretched from European palaces to Ivy League universities and Wall Street banks, but there was apparently at least one little corner of the establishment that seems to have been beyond his reach: Augusta National. In July 2019, Epstein sent an iMessage to Steve Bannon asking for his help with a particularly difficult problem. “Need to work magic to get brad Karp admitted to augusta golf club,” Epstein wrote. “The head of Paul Weiss Brad Karp?” Bannon replied. “Yes.”

Karp, the former chair of the legal firm Paul, Weiss, Rifkind, Wharton & Garrison, stepped down from his position in February because of his ties with Epstein.

Bannon and Epstein talked it over for an hour. Bannon suggested that Karp’s “best shot” was to “take a strong interest in amateur golf”; Epstein complained that some of the existing Augusta members who might help, like Bill Gates, “have no sway”, and asked “Who s their senator” as if they might. Bannon explained that he thinks the club is run by “7 Atlanta and Augusta families”, whom he calls “crackers” from the “Old south” who are prejudiced against “lawyers and investment bankers”. The heart bleeds.

It’s a pungently obnoxious conversation, racist and misogynistic, and at the end of it, Bannon and Epstein are absolutely no closer to figuring out how to go about getting in.

Which is one of the great lessons of Augusta National. Money only goes so far. It is, even now, just about the only sports event in the US where you don’t need to worry that Donald Trump is going to decide to put in an appearance.

If Brad Karp and his ilk are busy worrying about how to get a club membership, most of the rest of us would settle for just making it inside the gates during Masters week. But admittance, like everything else around here, is done according to its own peculiar set of rules. Most of the tickets go to lifetime patrons from the local community, who own badges passed down through the generations along with grandad’s pocket watch. That route in was shut back in the 1970s. The other is the annual lottery, and your odds of winning it make Tiger’s chances of a sixth Green Jacket look good this year.

Officially that’s it. Unofficially, anyone who’s willing to spend enough has usually been able to pick one up from one of the touts who camp out on the easements down by the interstate, just beyond the reach of the 2,700ft boundary that makes scalping near the property a criminal offence under Georgia law. Reselling tickets is against the terms and conditions, but the loophole was that anyone who bought one could always insist they had received it as a gift. In recent years, though, resale has become an industrial business, and second-hand tickets have been appearing on the internet, where they sell for as much as 50 times their face value.

Until, that was, Augusta’s members decided they had had enough of other companies making the profit the club have chosen to forgo by keeping the actual admission prices so low. The Sunday of last year’s Masters was described as a “bloodbath” by an executive working for one of the hospitality companies in the area, as hundreds of paying customers found they were detained, and even refused entry, at the club gates because they had come on someone else’s ticket. According to industry reports, as many as 200 ticket holders were turned away on the day.

Some said they were taken into a room and asked to hand over their identity documents before being grilled about how they got their tickets, and where they were staying during the tournament. One person said it was like being pulled over by the police. Some were let in anyway; others say they were turned away. As is the way at Augusta, it’s almost impossible to get a straight answer from anyone at the club about exactly what’s going on and, in the absence of any information, there are an awful lot of rumours about the club’s crackdown on the market.

They say the four-day tickets have radio-frequency identification chips in them, and that the club were able to trace all the ones that were being returned back to a single geographical location each evening before being used again by someone else the next day. They say the information contained in the barcodes includes the buyer’s address. They say the club are employing undercover agents to idly ask patrons where they picked up their tickets while they are walking around the grounds.

The other theory is the club are buying up a lot of the resale tickets themselves just so they can find out the names of the people who put them up for sale. The letter they send out is a masterpiece of Masters manners, thanking the recipient for their support and patronage over the years before informing them that they are now permanently banned from the grounds.

by Andy Bull, The Guardian |  Read more:
Image: Mike Blake/Reuters
[ed. Masters weekend. Glad there's still one institution left with some balls.]

Wassily Kandinsky, 'Zweierbund', 1932
via:

Ghost Murmur

At a press conference on Monday, CIA Director John Ratcliffe disclosed that the agency had used “exquisite technologies that no other intelligence service in the world possesses” to find and rescue the second American airman shot down in southern Iran. Ratcliffe likened it to “hunting for a single grain of sand in the middle of a desert.”

President Trump gushed about the technical wizardry the CIA deployed, claiming it was able to locate the airman from “40 miles away.”

“It’s like finding a needle in a haystack, finding this pilot, and the CIA was unbelievable,” Trump said Monday. “The CIA was very responsible for finding this little speck.”

Later that same day, the New York Post revealed in an exclusive report that the tool Ratcliffe was referring to was something called “Ghost Murmur.”

“The secret technology uses long-range quantum magnetometry to find the electromagnetic fingerprint of a human heartbeat and pairs the data with artificial intelligence software to isolate the signature from background noise,” two sources close to the breakthrough told the Post.

I’m calling bullshit.

Perhaps the CIA does have a tool called Ghost Murmur. Maybe it can detect faint signals from a not-too-far distance away. But it didn’t locate the downed airman from 40 miles away as Trump suggested. Nor can it locate a heartbeat across 1,000 square miles of desert, as one of the Post’s sources claimed. Not unless the CIA has figured out how to rewrite the laws of physics.

A heartbeat does produce a magnetic signal, but don’t confuse that with the electrical signal picked up by the electrodes that get stuck to your chest in the hospital, the ones that generate the beeping waveform patterns we all recognize from The Pitt. (Great show, by the way.) The heart’s magnetic signature is far weaker than its electrical one.

The tesla is the unit used to measure the strength of a magnetic field. The Earth’s magnetic field measures about 50 microtesla. Studies I’ll get to in a minute have measured the cardiac magnetic signal at chest contact to be about 25 picotesla, already 2 million times weaker than Earth’s own field.

Quantum sensors can detect this extraordinarily faint signal without touching the body. But only under optimal conditions and at very close range.
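The scale gap in those figures is easy to check. Here is a minimal Python sketch using the rounded values quoted above; the second calculation is my own hypothetical extrapolation, assuming the heart's field falls off roughly as the cube of distance, as a magnetic dipole's does — an assumption the article does not itself make:

```python
# Scale comparison using the rounded figures quoted above.
earth_field = 50e-6     # Earth's magnetic field, ~50 microtesla
cardiac_field = 25e-12  # cardiac signal at chest contact, ~25 picotesla

ratio = earth_field / cardiac_field
print(f"Cardiac signal is ~{ratio:,.0f}x weaker than Earth's field")
# -> Cardiac signal is ~2,000,000x weaker than Earth's field

# Hypothetical extrapolation (my assumption, not the article's):
# treat the heart as a magnetic dipole, whose field falls off
# roughly as 1/r^3, from the ~1 cm sensor distance used in the
# diamond-magnetometry study mentioned below.
ref_distance_m = 0.01            # 1 cm sensor-to-chest distance
claimed_range_m = 40 * 1609.34   # "40 miles" in meters
field_at_range = cardiac_field * (ref_distance_m / claimed_range_m) ** 3
print(f"Implied field at 40 miles: ~{field_at_range:.1e} tesla")
```

Under that dipole assumption the implied field at 40 miles comes out around 10⁻³¹ tesla, some twenty orders of magnitude below a signal the lab studies could only recover with shielding and averaging at centimeter range — which is the sense in which the 40-mile claim would require rewriting the laws of physics.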

I’m no expert in this field, but Quantum Insider, which tracks these developments, pointed to several studies that show the limits of this technology.

One study published this year on diamond quantum magnetometry, the same technology Ghost Murmur supposedly uses, required sensors placed 1 centimeter from the chest inside a magnetically shielded room and an average of up to 12,000 heartbeats to detect a signal.

“Averaging was necessary since magnetic field recordings did not reveal the MCG signal in the NV trace in real-time,” the study reported.

In plain English: The quantum sensor could not detect a heartbeat in real time in a shielded room at one centimeter.

A 2024 study detected the heartbeat of an anesthetized rat, a weaker signal than a human heart, using a sensor placed 5 millimeters from the animal’s chest, inside a magnetic shielding cylinder, after an hour of continuous data accumulation.

Ghost Murmur supposedly detected a single beating heart, in real time, from 40 miles away, over open desert, from a moving aircraft, in an environment saturated with competing signals from the Earth’s magnetic field, electronic devices, and other living creatures. Not likely.

by Seth Hettena, After-Action Report |  Read more:
Image: White House
[ed. Interesting technology. But, why does it seem like everyone is lying these days? Hmm... maybe because they are? And everybody just expects it and lets it keep happening?]

Two-Week Iran War Ceasefire Agreement DOA: Updates

Iran War: US Pokes Iran in the Eye with Immediate Bad Faith Dealing Over Ceasefire; Strait of Hormuz Again Closed; US Insists Talks in Pakistan On but Iran Demands Halt in Lebanon Attacks

New statement from the Speaker of Iran’s Parliament

The Trump Administration, true to form, doubled down on lying, with JD Vance and others maintaining that having Israel cease operations in Lebanon was never part of the deal. The Janta Ka clip below not only recounts how Israel launched its most savage air strikes against Lebanon ever, of 100 missiles in 10 minutes, killing over 182 people as of recent reports, but also has none other than the White House’s pet Middle East stenographer, Barak Ravid, effectively calling out the falsehood. [...]

This section from Ravid in a CNN video starts at about 8:20:
"Well, I think it’s not only the Iranians. The problem is that the Pakistani prime minister, when he announced the ceasefire, made it clear that Lebanon was part of the deal, which raises the question of what happened there in the negotiations if the main mediator says that Lebanon is part of the deal. I know that the Egyptian mediators and the Turkish mediators see it the same way, that Lebanon is part of this deal.

Yesterday, shortly before Trump announced a ceasefire, he called Israeli Prime Minister Netanyahu, who had sort of lost control of the process and was very nervous about this ceasefire. During that call, when Trump told him, listen, I’m going to agree to a ceasefire with Iran, Netanyahu told him, but what about Lebanon, we want to continue fighting. And Trump told Netanyahu, no problem, you can continue fighting, Lebanon is not part of this deal. So this was something that was agreed upon before the announcement of the ceasefire; it was agreed upon between Israel and the US. I heard it from both Israeli officials and US officials.
And US officials told me today that they’re not concerned about those Iranian threats to withdraw from the negotiations or to again close the Strait of Hormuz because of the situation in Lebanon. They think it will be solved and it’s not going to be a reason for the agreement to collapse."
Other sources confirm the Iranian view:


The press is amplifying market-soothing Trump claims that he has cemented a ceasefire “deal” with Iran and is on a path to a resolution of the war. But there are serious differences between what Iran has said it agreed to, which amounts to a US capitulation, and what Trump describes. The only concession Iran appears to have made is to somewhat reduce its Strait of Hormuz transit fee. By contrast, Trump depicts the two-week ceasefire as a pause in his threat to end Iran as a civilization over a four-hour period, contingent on Iran fully opening the Strait…to which Iran has not agreed.

In addition, the Iran terms call for all hostile action to end, including of Israel against Lebanon. But Israel was not a party to this (non-convergent) agreement and is making minimally compliant noises while also reaffirming its intent to continue ethnic cleansing in Lebanon.

Now this turn of events is admittedly a lot better than where we were 24 hours ago, which was Trump threatening a bombing campaign against Iran that would have produced Iranian retaliation across the Gulf States, retaliation certain, whatever form it took, to damage energy-related infrastructure so severely as to reduce energy output for many, many years, risking, as many warned, a deep global depression and even potentially a large rollback of living standards across the globe. If nothing else, this seems to signal that Trump is on a path to a durable TACO, as in he really has decided that he needs to find the most face-saving exit he can muster. Perhaps, in the same way that only Nixon could go to China, only Susie Wiles could produce this shift.

But just as Ukraine has agency in ending the war with Russia, so too does Israel in this conflict. This not-really-an-agreement was done over Israel’s head. Israel, like Ukraine, has ample means to sabotage it. And that is before getting to the fact that Israel has never honored ceasefires it actually did agree to, save when it used one to give its military a short pause to regroup before resuming fighting...

And we also do not know where the Gulf States stand on this development. The UAE, Kuwait and the Saudis have been on board with escalation, even, by some accounts, egging Trump on.

And this view charitably assumes Trump really wants out, as opposed to simply trying to buy time, after the fiasco of what looks like a failed raid on Iran’s nuclear operations, to figure out what to do next. Trump’s default is to try to keep options open and buy time. He likely still thinks that if he can contain paper oil prices, and thus hopefully gas and diesel prices in the US, he can keep pressure of various sorts on to open up another path. He may not understand that anything less than a return, very soon, to close to the old normal levels of transit through the Strait of Hormuz means compounding real-economy damage. More traffic, but still less than a high level, would only slow the rate at which the harm intensifies.

by Yves Smith, Naked Capitalism |  Read more: 
(previous day's report from April 8: here).
Images: Iran/X
[ed. Much more. Also this (Trump got played by Israel). And, why are we attacking Iraq again?!]

***
Here is the non-news news flash up front — The alleged ceasefire between the United States and Iran is kaput. While there has been no official announcement stipulating that it is over, trust me, it is over. The copium in the Trump administration in particular, and in Washington, DC in general, is ridiculous… Proclamations of a great military victory over Iran, without one shred of evidence that the US achieved any strategic objectives other than inspiring Iran to take control of the Strait of Hormuz and place the world economy in a supply-chain chokehold.

Both Iran and Pakistan, the intermediary in the ceasefire, insist that the Trump administration accepted Iran’s 10-point plan as a workable basis for negotiation:

The initial reaction among Trump’s Zionist supporters and the Netanyahu government was a combination of shock and fury. The pushback started immediately on Tuesday night, and by Wednesday morning the Trump administration insisted that it had agreed to a different — yet undefined — set of 10 points. Israel made certain that the negotiations would fail by launching a vicious, murderous bombing of central and southern Lebanon.
***
In the shadow of its wars in Iran and Lebanon, the U.S. has conducted devastating attacks on the security forces of its Iraqi ally.

The March 25 assault was the sixth American attack on the Iraqi army since the launch of the war on Iran. As of April 7, there had been a total of 138 U.S. attacks on Iraq—including two additional strikes on the Iraqi army—resulting in the deaths of more than 73 PMF fighters, 10 Iraqi army soldiers, three Interior Ministry personnel, and six civilians, according to Iraqi officials. For many in the country, it was starting to feel as if the U.S. has declared war on Iraq as well.

The U.S. attacks continued until the two-week ceasefire between the U.S. and Iran was announced on April 8. That same day, as Israel struck Lebanon at least 100 times and killed hundreds, Iran refused to implement the ceasefire agreement until Israel halted its aggression against Lebanon...

“There was a political, economic, and social effect to this last war,” said an official with the Islamic Resistance of Iraq, who spoke on condition of anonymity. “Who is striking Iraq today? America, right? What is America striking in Iraq? Bases of security forces, the PMF, the army. America is destroying the Iraq it built.”

Joe Kent, former director of the Trump administration’s National Counterterrorism Center, worked closely with the Iraqi government before resigning in March in protest over the U.S. war on Iran. He said he was at a loss to explain why the U.S. military was going after such a wide variety of targets in Iraq.

“For the life of me I don’t know,” Kent told Drop Site on March 28, “a lot of targeting inside Iran comes from Israelis. I’m assuming they have done some targeting in Iraq. They didn’t invest much. It seems like blind American ignorance. Someone convinced us that everything that is PMF is an Iranian proxy. It’s people who didn’t understand the history of Iraq in the last 20 years.”

“There’s definitely no strategy there. The charge d’affaires [at the U.S. embassy in Baghdad] and his team are not this fucking dumb, there’s no way they’re advocating this, they would know the difference between the militias,” he added. “You have guys who didn’t spend that long in Iraq, or senior leaders who spent time in Iraq during the ‘surge’ and think this is their chance to settle scores.” [via: “It Seems Like Blind American Ignorance”: The New U.S. War on Iraq (Drop Site).]

No Shy Person Left Behind

American democracy has a personality problem.

At its core, our political system is a popularity contest. Elections reward those who are comfortable performing in public and on social media, projecting confidence and dominating attention. This dynamic tends to select for so-called alpha types, the charismatic and the daring, but also the entitled, the arrogant and even the narcissistic.

This raises a basic but rarely asked question: Why are we filtering out the quiet voices? And at what cost?

Over the past two decades, my research on collective intelligence in politics, democratic theory and the design of our institutions shows that the system structurally excludes those I call, in my new book, “the shy.” By the shy I mean not just the natural introverts, but all the people who have internalized the idea that they lack power, that politics is not built for them, and who could never imagine running for office. That is, potentially, most of us, though predictable groups — women, the young and many minorities — are overrepresented in that category.

The early-20th-century British writer G.K. Chesterton once offered a striking and unusual metaphor for what democracy should look like. He wrote, “All real democracy is an attempt (like that of a jolly hostess) to bring the shy people out.” What would our democratic institutions look like if we took that metaphor seriously?

One answer — perhaps the most promising one we have at this time — can be found in citizens’ assemblies.

Citizens’ assemblies are large groups of ordinary people, selected by lottery, who come together to learn about a public issue, hear from experts and advocacy groups, deliberate with one another and make recommendations. Picture jury duty for politics. Through random selection, citizens’ assemblies reach deep into the body politic to bring even the initially unwilling to the table. Once seated, participants are given time, structure and support to find their voices and contribute to forming a thoughtful collective judgment.

Citizens’ assemblies are gaining traction around the world. As of 2023, the Organization for Economic Cooperation and Development documented 733 cases of lot-based deliberative assemblies around the world, most of them taking place over the last 20 years, in what the subtitle of an earlier report calls a “deliberative wave.”

Ireland conducted at least five of them at the national level, where they helped break political gridlock on issues ranging from same-sex marriage to abortion and climate policy. In recent years, France convened at least 19 at the regional level and three at the national level, including one on climate policy and one on end-of-life issues. (I sat on the Citizens’ Convention for Climate as a researcher-observer and was later appointed by the French government to the governance committee of the Citizens’ Convention on the End of Life.)

Citizens’ assemblies are now also spreading across the United States at the local level — from Oregon’s Citizens’ Initiative Review model to Michigan’s Independent Citizens Redistricting Commission to Washington State’s climate assembly to Petaluma’s Citizens’ Assembly in California. [...]

The benefits of these assemblies are striking. Citizens’ assemblies typically produce recommendations that are more nuanced, more pragmatic and more aligned with what the public actually wants than what currently emerges from elected legislatures. When their recommendations are put to voters in polls, as in France on climate, or referendums, as in Ireland on same-sex marriage and abortion, they usually receive overwhelming public support.

Because their members are randomly selected, citizens’ assemblies reflect the underlying values and preferences of the larger population. But what is truly fascinating is that the depolarizing and educational effects of deliberation in this nonpartisan context will sometimes sway liberal majorities toward conservative conclusions and vice versa.

In the 2019 “America in One Room” deliberative poll (a cousin of citizens’ assemblies, except bigger, shorter in duration and with the goal of generating informed policy preferences rather than actionable policy recommendations), deliberation led both Republicans and Democrats to revise their views — often substantially. Republicans shifted on immigration, with support for reducing admissions falling from 65 percent to 34 percent and backing for undocumented immigrants being forced to return to their home country before applying to work legally dropping from 79 percent to 40 percent. Democrats also changed their minds, in some cases moving away from traditionally progressive positions: support for “Baby Bonds” collapsed from 62 percent to 21 percent, backing for a $15 minimum wage fell from 82 percent to 59 percent and support for expanding Medicare dropped from 70 percent to 56 percent. These shifts show that deliberation does not push opinion in a single ideological direction but rather toward the conclusions supported by better evidence and what Jürgen Habermas used to call “the unforced force of the better argument.”

Interestingly, it is also true that where a pre-existing underlying consensus in the assembly survives deliberation, as it did in France on end-of-life issues, the outcome is nevertheless much more acceptable to the minority.

This is so because in citizens’ assemblies, minorities are given time and attention in a way that our competitive, winner-takes-all politics often does not. In the last plenary of the French convention on end-of-life issues, Soline Castel, a member of the ideological minority against assisted dying, made a point of saying: “I want to thank the 75 percent for giving us 50 percent of the final document and 50 percent of the speaking time.”

Beyond their problem-solving and depolarizing dimensions, however, citizens’ assemblies are also joyful and exciting processes that reconcile people with one another and with politics. Participants arrive as strangers; they leave as civic friends. [...]

No one is saying that we don’t also need assertive leaders — people whose personalities are so strong and charismatic that they can help persuade other people of something they would not necessarily consider otherwise. But do we need a Congress and a White House full of them?

And contrary to our intuitions, leadership need not be loud. In an experiment with student councils chosen by lottery in Bolivia, Adam Cronkright, a sortition activist with Democracy in Practice and the director of the forthcoming documentary “Goodbye Elections, Hello Democracy,” showed that leadership skills reveal themselves among students who would never have run for elections. Freed from the need to campaign, these students focused less on popularity-enhancing promises (like a cool prom) and more on concrete improvements to student life (like creating a school library, securing computer donations and establishing a student ID system to gain access to half-price bus fares).

In citizens’ assemblies, similarly, it is not necessarily the flamboyant and the know-it-alls who are the most influential or socially rewarded, though they, too, can be right and even appreciated! It’s very often the quiet, serious people who do the real work, without claiming the credit or the limelight.

Critics sometimes dismiss citizens’ assemblies as naïve or impractical, arguing that ordinary people lack the expertise to make complex decisions. But this objection misunderstands both expertise and democracy. Assemblies do not replace experts; they hear from them. Their proponents do not claim that everyone knows everything, only that when placed in the right conditions, everyone is capable of learning, deliberating and exercising judgment. Like voting, but in a more demanding form, citizens’ assemblies institutionalize a fundamental democratic premise: political equality.

Most important, citizens’ assemblies recognize that confidence should not be confused with expertise nor shyness with ignorance. Our current system routinely entrusts complex decisions to elected officials, on the basis of their confidence, ambition and visibility. Citizens’ assemblies create groups in which the shy are on par with the confident, and where the values of humility and listening are privileged. There are reasons to believe that this model is more effective.

by Hélène Landemore, NY Times |  Read more:
Image: Claudia Zonta
[ed. Interesting option. Perhaps better (and less convoluted) than attempting to create some new political party or single issue organization.]

Jiangxi Province, China
China stands to benefit most from the war-driven energy crisis (WaPo)
Image: AFP/Getty, and Lorenzo Martinez

Wednesday, April 8, 2026

Yonder Mountain String Band

[ed. See also: Blind]

Is Strait of Hormuz Open Again? Maybe, but Few Ships Are Using It.

As the cease-fire between the United States and Iran neared the 24-hour mark, it remained unclear on Wednesday when Iran might begin allowing vessels to pass through the Strait of Hormuz, the economically vital waterway brought to a near standstill by the war.

No oil or gas tankers have traversed the strait since the cease-fire was struck on Tuesday, according to data provided to The New York Times by Kpler, a global ship-tracking firm. Four bulk carriers — vessels that carry dry cargo — did make it through.

Iranian state media said on Wednesday afternoon that the strait was “fully closed,” and that some tankers had been turned away. That report came after semiofficial outlets, affiliated with Iran’s Islamic Revolutionary Guards Corps, reported that traffic in the strait had again been halted, this time in response to a deadly wave of Israeli attacks on Lebanon.

Since those reports, no vessels have appeared to cross the strait, according to Kpler’s data. The most recent vessel to cross the waterway — a cargo ship — was tracked in the middle of the strait around 10:45 a.m. Eastern time on Wednesday, according to the maritime data.

Nikos Pothitakis, a media relations manager for Kpler, said the traffic showed that whatever the official status of the strait, it was “pretty much closed.” It was unclear why a limited traffic pattern was being observed.

Iran’s official broadcaster has said that because of mines, vessels must coordinate with the Iranian navy and use designated routes to cross the waterway. After the cease-fire was announced Tuesday, Iran’s foreign minister said safe passage through the strait would be possible if coordinated with the military, and with consideration of “technical limitations.”

The sparse traffic could also reflect the lingering jitters of mariners and their insurers, who may be wary of resuming operations until they feel more confident that it is safe.

The White House press secretary, Karoline Leavitt, added to the confusion.

Briefing reporters on Wednesday, she said news reports that the strait had been closed were “false.” Then she called for it to be reopened “immediately.” She would not answer repeated questions about who currently controlled the waterway.

After the United States and Israel launched strikes on Iran in late February, Iran began shutting down the strait, laying mines and launching sporadic attacks on ships. The waterway carries a quarter of the world’s seaborne oil and one-fifth of its gas.

On Wednesday, with the cease-fire in place, Kpler’s ship-tracking data appeared to support an Iranian state news report that a Panamanian-flagged oil tanker, the AUROURA, had been turned back. As it was transiting the strait, the data shows, the vessel changed course, making a 180-degree turn. Then it came to a halt.

by Pranav Baskar and Shirin Hakim, NY Times | Read more:
Image: Reuters
[ed. I can say with some experience that "lingering jitters" might also be caused by the cost of shipping insurance from places like Lloyd's of London.]

Tuesday, April 7, 2026

Anthropic’s Restraint Is a Terrifying Warning Sign

Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.

The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but to only a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a “step change” in performance that has some critically important positive and negative implications for cybersecurity and America’s national security.

The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.

The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me.

For good reason. As Anthropic said in its written statement on Tuesday, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout — economics, public safety and national security — could be severe.”

Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities.

“We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.

My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding ways to find and exploit flaws in existing code.

Anthropic said it found critical exposures in every major operating system and Web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.

If this A.I. tool were, indeed, to become widely available, it would mean the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — will be available to every criminal actor, terrorist organization and country, no matter how small. [...]

That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.

At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama’s President’s Council of Advisors on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called “Genesis.”

In our view, no country in the world can solve this problem alone. The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.

Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.

Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology — a lot more than they need to worry about Russia.

This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month.

“What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets — this ability to develop sophisticated cyberhacking operations — could become easily available to small actors,” explained Mundie. “What we are about to see is nothing short of the complete democratization of cyberattack capabilities.”

It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues.

For starters, he says, we need to “carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies.”

Then we need to use the time this buys us to distribute defensive tools to the good actors “so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another.” (By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.)

by Thomas Friedman, NY Times |  Read more:
Image: Vincent Forstenlechner/Connected Archives
[ed. No shit Sherlock. Basically, everything that runs on software is vulnerable (including all forms of infrastructure). It's only what everyone's been saying for months now, if not years. Maybe this will finally get someone's attention, but who? Congress can't even rouse itself to engage with a war and a mentally unstable President. So all the enablers (politicians, banks, hedge funds, corporations) will finally get to meet their Frankenstein and are appropriately freaking out. See also: Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity ‘Reckoning’ (NYT):]
***
Claude Mythos Preview is already capable of carrying out autonomous security research, including scanning for and exploiting so-called zero-day vulnerabilities in critical software programs, flaws that are unknown even to the software’s developer. These efforts can often be triggered by amateurs with simple prompts. The company claims that the new model has already identified “thousands” of bugs and vulnerabilities in popular software programs, including every major operating system and browser.

One of the vulnerabilities Claude found, the company said, was a 27-year-old bug in OpenBSD, an open-source operating system that was designed to be difficult to hack. Many internet routers and secure firewalls incorporate OpenBSD’s technology. Another was a longstanding issue in a piece of popular video software that automated testing tools had scanned five million times, without finding any problems.

“This model is good at finding vulnerabilities that would be well understood and findable by security researchers,” Mr. Graham said. “At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them.”

[ed. Probably a good idea to take a few screenshots of your bank accounts before they disappear.]

Intraventricular CARv3-TEAM-E T Cells in Recurrent Glioblastoma

In this first-in-human, investigator-initiated, open-label study, three participants with recurrent glioblastoma were treated with CARv3-TEAM-E T cells, which are chimeric antigen receptor (CAR) T cells engineered to target the epidermal growth factor receptor (EGFR) variant III tumor-specific antigen, as well as the wild-type EGFR protein, through secretion of a T-cell–engaging antibody molecule (TEAM). Treatment with CARv3-TEAM-E T cells did not result in adverse events greater than grade 3 or dose-limiting toxic effects. Radiographic tumor regression was dramatic and rapid, occurring within days after receipt of a single intraventricular infusion, but the responses were transient in two of the three participants. (Funded by Gateway for Cancer Research and others; INCIPIENT ClinicalTrials.gov number, NCT05660369.)
***

Glioblastoma is the most aggressive primary brain tumor, and the prognosis for recurrent disease is exceedingly poor with no effective treatment options. Chimeric antigen receptor (CAR) T cells represent a promising approach to cancer therapy because of their proven efficacy against refractory lymphoid malignant neoplasms, for which they have become the standard of care. However, the use of CAR T cells in solid tumors such as glioblastomas has been limited to date, largely owing to the challenge in targeting a single antigen in a heterogeneous disease and to immunosuppressive mechanisms associated with the tumor microenvironment.

In a previous clinical trial, we found that peripheral infusion of epidermal growth factor receptor (EGFR) variant III–specific CAR T cells (CART-EGFRvIII) safely mediated on-target effects in patients with glioblastoma. Despite this activity, no radiographic responses were observed, and recurrent tumor cells expressed wild-type EGFR protein and showed heavy intratumoral infiltration with suppressive regulatory T cells. To address these barriers, we developed an engineered T-cell product (CARv3-TEAM-E) that targets EGFRvIII through a second-generation CAR while also secreting T-cell–engaging antibody molecules (TEAMs) against wild-type EGFR, which is not expressed in the normal brain but is nearly always expressed in glioblastoma. We found in preclinical models that TEAMs secreted by CAR T cells act locally at the site where cognate antigen is engaged by the CAR T cells in the treatment of heterogeneous tumors. We also found in vitro that these molecules have the capacity to redirect even regulatory T cells against tumors. On the basis of these data, we initiated a first-in-human, phase 1 clinical study to evaluate the safety of CARv3-TEAM-E T cells in patients with recurrent or newly diagnosed glioblastoma. Here, we report the findings from a prespecified interim analysis involving the first three participants treated with this approach. [...]

Discussion

This study shows that antitumor CAR-mediated responses can be rapidly obtained in patients with glioblastoma, even in those with advanced, intraparenchymal cerebral disease. This finding contrasts with a previous report of a complete response that was observed in a patient with recurrent leptomeningeal disease who received treatment with 16 intracranial infusions of monospecific interleukin-13 receptor alpha 2 CAR T cells. The investigators of that study hypothesized that the involvement of glioblastoma in the leptomeninges may have rendered the disease more responsive to intraventricular therapy. Our experience in the current study suggests that even a single dose of intraventricularly administered living drugs such as CAR T cells has the capacity to access and mediate activity against infiltrative, parenchymal glioblastoma.

by Bryan D. Choi, M.D., Ph.D., Elizabeth R. Gerstner, M.D., Matthew J. Frigault, M.D., Mark B. Leick, M.D., Christopher W. Mount, M.D., Ph.D., Leonora Balaj, Ph.D., Sarah Nikiforow, M.D., Ph.D., Bob S. Carter, M.D., Ph.D., William T. Curry, M.D., Kathleen Gallagher, Ph.D., and Marcela V. Maus, M.D., Ph.D. NIH, National Center for Biotechnology Information |   Read more:
Image: via
[ed. Only three patients (so far) and it appears sustained treatments are needed to prevent recurrence. But still, pretty interesting.]

Man vs Mist vs Mountain

Something big is happening, but nothing big is happening to me.

Throughout my "career" as a "statistician", 13 years and counting now (but how much longer?), I've always been great at stopping myself from doing useful work. At first, I worried that I didn't know enough yet to tackle interesting problems---until I started feeling that I had forgotten too much to do "real" statistics. With LLMs that barrier is now gone, and I've been finding them very useful. I have just enough context and experience to pose good questions and understand the explanations.

(BTW, I am surprised by how little students and my peers seem to use them. I am usually, willingly, cast in the role of the nay-sayer. So what's happening? Are they using them surreptitiously? Or else, why do I get more utility than others?)...

So, obviously I decided to make this situation even worse and dip my toes into THE AGENTS this month (starting with the OpenAI one). In case you haven't encountered them, these are the ~latest craze in the LLM world.

Yes, just like with chatbots, you just describe what you'd like, in words, and it gets coded. But it's not just coding programs. You can do (some parts of) academic research, or you can just make small, fun ideas come to life. I recently met a girl who vibe coded a Chinese medicine app that took a photo of your tongue and told you seven things that were fucked up about your bladder.

Ultimately, however, my problem---because obviously I wouldn't bother to write this just to conclude that they're alright, would I---is that these tools are designed for people who like manipulating mental symbols in a certain way, you know, the screen-starers. Obviously this is a ridiculous complaint, not least because I am one of them... but as I get older, the screen-staring part of my brain feels like the one I want to be visiting least often. And I think it was no coincidence that I had the most fun playing with these tools when my mood was lowest.

In fact, they are addictive as hell, the way a video game can feel. Everyone keeps reporting this. They dial difficulty down so much that things get a bit muddy. People who talk about these models the most often seem maniacal to me, and I think these agents can stop you from getting actual work done. When I tried using them for my work, I ended up solving a lot of problems, but none of those problems seemed very important in retrospect.

Clearly that is a skill issue. I have no doubt that I'll get better at it. And if your work is mediated through screens and you're good at defining what you do and don't like, these agents may be great for you.

But at the same time, it feels like a general manifestation of any "life-improving" technology, which is often just about the channeling of mental disturbances. So, no, I am not banking on it making me a Nietzschean ubermensch next month, nor on helping me start a billion-dollar company, nor even on having a better time. Right now it still feels net zero: for every bit of busy work it may rescue me from, it feels like it has the potential to rob meaningful work of meaning---or maybe even my life of life itself. The "projects" I really care about in my life are not app-shaped or list-shaped, and in doing things, technology is always an afterthought.

by Witold Więcek, Monthly Witold | Read more:
Image: Strawberry in ASCII by Claude Sonnet 4.5 via:
[ed. More from Witold, about keeping a journal:]
***
I have been keeping a somewhat regular journal for close to 15 years now, a few pages per week usually. Most of it is very mundane, too---not even an attempt at recollection of what happened, more of a microcatalogue of internal states that feel new. I'd be less embarrassed by someone getting their paws on it than sorry for them.

Why do it? I used to call this project "long Witek", extracting what is slow-moving or semi-permanent from the detritus, the more transitory elements. I use these journals to sometimes jump back an arbitrary number of years and try to recognise myself again. In other words, I try to make myself legible to myself.

Yasushi Kishida (岸田 保), Old folk house in Muroo
via:

Sam Altman May Control Our Future—Can He Be Trusted?

[ed. A must read, possibly historic. Unfortunately, the accompanying visual is too weird to include here. For a more concise summary see: A history and a proposal (DWAtV)]

In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.”

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted. [...]

The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.) [...]

In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)

Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been “mistreated by a rogue board of directors.”

OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.

Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.

In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?

One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”)

An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.

We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”

by Ronan Farrow and Andrew Marantz, New Yorker | Read more:
Image: via

Monday, April 6, 2026

Anton Elfilter, Shifting Tides

China's AI Education Experiment

A deep dive.

Pilot schools in China are already using AI to grade children’s artwork, monitor their facial expressions during lectures, and screen them for psychological problems — and the Ministry of Education (MOE) wants schools across the country to follow suit.

Integrating AI into the education system has rapidly become a top priority of the Chinese central government, which is betting that AI tools can eliminate China’s vast educational inequities and make the next generation of workers more productive. The State Council highlighted education as a key area of focus in the “AI+” plan; it received a shout-out in the 15th Five-Year Plan; and in May 2025, the MOE released a white paper on AI for education. This MOE document proclaims that 2025 marks the dawn of an era (“智慧教育元年,” the “inaugural year of smart education”), the beginning of a system-wide effort to “intelligentize” (智能化) education using AI tools. The MOE’s goal: universalize basic AI access in primary and secondary schools by 2030. Industry received that signal and responded rapidly, with Alibaba Cloud releasing its own AI+education white paper the following month. But the gap between Beijing’s (and Hangzhou’s) techno-optimism and rural China’s reality is enormous.

This report explores why the Party wants to integrate AI into education, what applications the MOE is most optimistic about, and where the barriers to successful rollout lie. We’ll limit our analysis to K-12 education today, but university AI initiatives will be the focus of our next report in this series!

Institutional History

In official discourse, China is said to have entered a “post-equity era” 后均衡时代 since the MOE announced that all counties had met the baseline quality level for compulsory schooling in 2021. Now, the focus is shifting from access to education to improving the quality of that education. The 14th Five-Year Plan (2021-2025) prioritized expanding infrastructure in rural schools through the “county-level high school revitalization initiative” (县中振兴), part of which involved equipping classrooms with ‘smart hardware’ such as digitized blackboards. During this period, the Party spent significant resources to provide nearly every school with an internet connection.

Still, rural education in China faces serious structural challenges. I spoke with Leo He — a research fellow at the Hoover Institution who did NGO work in rural China from 2019 to 2023 — for a firsthand account of the situation. Every locality, he explained, has designated “elite” schools that talented students from surrounding areas compete to transfer into. The result is a system where “educational resources are systematically sucked up to the center from the periphery, leaving rural areas incredibly depleted.” While this arguably gives academically gifted students opportunities to develop their talents, it deprives most students of educational resources.

According to China’s 2020 census, only 30.6% of the population has ever attended high school (including non-academic vocational secondary school), which Stanford professor Scott Rozelle notes, “is lower than South Africa, lower than Turkey and lower than Mexico.” In 2022, roughly 40% of China’s middle school graduates didn’t go on to attend high school of any kind, and among the students who do continue their education, national policy stipulates that roughly half (“五五分流”) are funneled into non-academic vocational high schools with no path to enter college.

To understand how AI could fit into this picture, we first need to understand the political and economic factors that incentivize Beijing to care about students in the countryside. It’s not clear that more investment in education will translate to high economic growth at this point in China’s development path — the real youth unemployment rate is probably still around 20%, and there are fewer entry-level positions available just as a record number of new graduates enter the workforce. Rather, this is a priority for the Party because improving the education system is so popular.

When Rozelle’s team surveyed 1,800 rural mothers and asked what they wanted their children to aspire to, over 95% said, “I want my child to go to college.” In China, a degree from an elite college doesn’t just translate to higher earnings — it unlocks better healthcare via the hukou system, cushy “iron rice bowl” 铁饭碗 jobs, and above all, social prestige. In 2023, researchers at Stanford found that Chinese families spent an average of 17.1% of their annual household income on education, which amounts to 7.9% of annual household expenditures. (Households in the US and Japan, by comparison, dedicate just 1-2% of annual expenditures to education.) The poorest quartile of families in China devotes a staggering 56.8% of income to education, and education spending is inelastic — that is, it’s prioritized as a necessary expense — across all income levels.

As Andrew Kipnis, the anthropologist who wrote Governing Educational Desire, explained to ChinaTalk, educational reform is a priority for the party “because it’s a way of keeping people happy. If they think there’s some hope their child will attend university, that gives them some investment in the system.” But not every child can become part of the elite: “People who have gone to university won’t work in factories,” as Kipnis put it. No matter how popular it would be, Beijing is not interested in building a system where a college education is available to anyone who wants one. But within this zero-sum system, where anyone who receives an advantage is inherently disadvantaging someone else, the party still needs to make parents feel like their child is getting ahead. Infrastructure is pretty much the perfect tool for this. It makes schools feel luxurious on the ground without changing the fundamentals that make the system so unfair. Shiny new facilities deliver popularity gains immediately, and if your child doesn’t get into university years later, it’s their own damn fault.

Those incentives are shaping the world’s largest AI education experiment. China is not the only country betting that AI will transform education, but the scale and style of China’s ambitions are unmatched globally. While China started with pilot programs, South Korea’s government led with inflexible national-level implementation, spending US$850 million on an ambitious AI textbook initiative that collapsed after just 4 months. India’s edtech ecosystem is private-sector-led with little top-down guidance or regulation, which resulted in the high-profile implosion of Byju’s and a proliferation of predatory practices targeting low-income families. Japan, unlike China, pledged to make sure every student had a device before implementing AI teaching tools.

Ultimately, China stands out globally for the sheer scale of its AI education ambitions — and the scope of applications its edtech industry is targeting for AI integration.

by Lily Ottinger, ChinaTalk |  Read more:
Image: via
[ed. See also: Massive budget cuts for US science proposed again by Trump administration (Nature). National Science Foundation.]

Dating Apps: Giving Men What They Want But Not What They Need

Dating apps were built on the bones of Grindr. I have been known to joke that everything wrong with dating apps is divine retribution for culturally appropriating them from the gays.

Gay men, specifically, that’s important - the overwhelming majority of people making apps are still men, and most of those are still straight men, and while I don’t exactly have insider knowledge on this, it couldn’t be clearer to me that some open-ish minded straight tech boy heard from one of his gay male friends about being able to summon sex partners to his bed from the immediate vicinity after filtering on a bunch of lewd photos and thought: “There isn’t a straight man alive who wouldn’t consider giving up his left hand to have this experience with women. I could make a billion dollars making straight Grindr.”

And thus Tinder was born. Blah blah blah lust and greed sullying the purity of romantic and sexual love; a direction I could go, but instead we’re going to talk about the ways that playing to male preferences in the short term can easily ruin their entire lives, even when it was men’s idea.

Dating apps aggressively reflect male preferences, regardless of sexuality. They’re long on photos, short on text. They filter primarily on location, which has some usefulness, but is most useful if the question is “who’s geographically close enough to me that walking to my place for sex is a realistic option.”

Men love flipping through photos of people they’re attracted to - that alone drove the traffic to Hot or Not, the site that inspired Facebook’s precursor, Facemash. These apps are built to give men a sexual scrolling experience as soothingly magnetic as any social media site while providing enough mystery to feel less degenerate than porn (the better for large doses and intermittent rewards).

For women, it’s grim. Yes, they get matches much more often than men do (largely because these extremely male-centric UI decisions lure vastly more male users than female ones; what economist could have predicted this problem with a heterosexual dating app). They don’t enjoy using these apps, not nearly to the degree or as often as men do. For most women, sifting through men feels dehumanizing, and sorting on pictures feels painfully limited (the male equivalent might be having to swipe based on photos of a woman’s favorite outfit, laid out on her bed. Vaguely boring and frustrating to have to make important decisions with so little information about the things you care about).

This isn’t just because of blackpill stuff about how men aren’t hot to women - that topic has been covered to death, yes women find men physically hot but no it doesn’t always work in such a way that static photos capture, so men are impossibly screwed by efforts to appeal to women with photos alone. There’s also the fact that men suck at taking pictures, because the market for photos of people is overwhelmingly men as buyers and women as suppliers, with the demand being for sexually attractive photos of women. Looking at photos of men is like driving a Nissan truck: it couldn’t be clearer that it is not your specialty and significantly worse than other products that your entire factory line was designed for.

You might think that dating apps are bad for men because they lead to men experiencing significant rejection - even the way my post is framed up until this point sort of implies as much. That framework, like much about dating apps, gets the whole picture subtly, insidiously wrong in a way that leaves people who take them at face value much worse off. You know who takes things at face value most often? You’re not going to believe this.

No, the greatest deprivation created by dating apps is specifically denying women and men the opportunity for women to keep men around in a general capacity. (If this idea makes you freak out about the friend zone, I’m almost impressed with you because young people seem to do so little socializing that no one complains about the friend zone anymore. Pat yourself on the back for having friends if you’ve managed to develop a resentment complex around the friend zone).

Most women develop attraction to men via proximity and time. Force a woman to choose whether she wants the option to sleep with a man the second she meets him, and she will default to no in almost every single case. Many men take this to mean that the only men women authentically want are the ones who can win that yes at first glance. Respectfully, you’re thinking like a guy, and if you believe that men and women are extremely different, I’m going to need you to trust that women develop affection for men differently than men do for women, such that you’ll ruin your life trying to figure out why women don’t desire you in the exact same way that you desire them...

One of the worst things you can do if you date women is to push them into a choice of yes or no as early as possible. You are simply too much of a risk on too many axes to get something other than a no unless you look like Chris Hemsworth, and even that wouldn’t get you yeses from 100% of the women you might ask out (hot men can still be shitty in about a thousand ways, and women often aren’t willing to take risks even for hotness. Again. They are not men). You might think that your goal should be to look like Chris Hemsworth, or alternatively to despair that you don’t look like Chris Hemsworth and go sulkily into that good night, but that’s you thinking like a guy and assuming that how women feel has to match how you feel. Frankly, that’s what got you into this mess: trusting tech men who promised you could game heterosexual dating, then handed you an interface that pinged all your dopamine sensors while curiously robbing you of a lot of opportunities to find and develop a fulfilling relationship. [...]

The major product provided by a dating app is the illusion of participating in dating at all - some time swiping through faces, and congratulations, you are “dating”, you Tried, you do not need to do anything scarier or riskier or less fun than this.

by Eurydice, Eurydice Lives |  Read more:
Image: uncredited via