Duck Soup
...dog paddling through culture, technology, music and more.
Friday, April 10, 2026
A.I. Logic
via: The Onion
[ed. See also: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course? ~ Sam Altman, CEO, OpenAI]
Shining a Harsh Light on Our New Tech Overlords
[ed. A follow-up to the New Yorker's Sam Altman profile posted here previously, which should definitely be a Pulitzer Prize candidate. Why people continue to trust obvious liars - from politicians to tech bros... to anyone actually, is beyond me. Probably just the obvious endpoint to the vicious capitalistic system we live in now where winner takes all, whatever the methods or consequences. We're all NPCs now.]
Consider, for instance, Altman’s blog post “A Gentle Singularity,” published last year and read by nearly 600,000 people. Its central thesis seems to be that AI is all upside; everything has been great so far, and everything will be even greater in the future! I mean, just wait until we build robots that we can shove these AIs into—then tell those robots to go make more robots.
If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different.

Everything is getting better; indeed, it’s getting better faster thanks to “self-reinforcing loops” like this. Downsides? Trick question! There aren’t any real downsides because people get used to things. Quickly. Just listen to how great it’s gonna be:
The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.

If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other.

Perhaps you have looked around at the world recently and wondered whether building “ever-more-wonderful things for each other” is actually a good description of what you are seeing.
But any niggles you might have—questions about the insane violence of the post-Industrial Revolution world, for instance, or whether “better stuff” is even the solution to many human problems—barely need to be addressed in Altman’s world. The future’s so bright we need to wear (AI-powered) shades!
(This simplistic attitude is shockingly common among smart Silicon Valley types. Marc Andreessen, the venture capitalist and Netscape co-founder, wrote an infamous 2023 essay in the same “No downsides!” vein. It was stuffed with non-ironic statements like “We had a problem of isolation, so we invented the Internet.” It featured the genre’s Randian fetishization of “the great technologists and industrialists who came before us,” some Nietzsche quotes, and of course howlers like “We are not primitives, cowering in fear of the lightning bolt. We are the apex predator; the lightning works for us.”
Silicon Valley—where nuance goes to die, where “hubris” is just a synonym for “success,” and where nerds see themselves as apex predators.
Meanwhile, tech investor Peter Thiel travels around the globe ranting about the Antichrist, while Mark Zuckerberg drops $80 billion on a failed “metaverse.” These dudes are just not the world-bestriding geniuses they think they are. But they do share a certain will to power—and a sense that they deserve to wield this power.)
If you have doubts about just how great a world dominated by people like Sam Altman might be, you owe it to yourself to read the long (loooooong) profile of him that appeared yesterday in our sister publication The New Yorker. Yes, it’s over 16,000 words, and yes, you will encounter the diaeresis a disturbing number of times, but it is absolutely worth the effort. [...]
For their piece, Ronan Farrow and Andrew Marantz interviewed over 100 people, including Altman, and the report they bring back from this effort is quite depressing; the words “lying” and “sociopath” are used repeatedly. Here are just a few of the relevant quotes:
A board member offered a different interpretation of [Altman’s] statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’”…The piece documents what appear to be incredibly flexible ethical and political views. Altman slides smoothly from Democratic booster to Trump whisperer, from hoping that the “insane sci-fi future comes true for all of us” to taking meetings with dictators. AI safety, such a key part of OpenAI’s stated mission a few years back, has largely fallen by the wayside as Altman chased money, power, and deals.
Altman’s attitude in childhood, his brother told The New Yorker in 2016, was “I have to win, and I’m in charge of everything”…
As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup…
Multiple senior executives at Microsoft said that, despite [Satya] Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said…
Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us…
One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”
One of the article’s subtexts is that the negative traits on display here aren’t actually bad for business; indeed, they’re quite good for (short-term) business. Whether they are good for business in the long term, where you actually need people to trust you, is an open question.
by Nate Anderson, Ars Technica | Read more:
Image: Getty
***
Inequality is such a fact of American life that it’s easy to shrug off. But we are in uncharted terrain. The amassed wealth of today’s tech titans makes the Rockefellers and the Vanderbilts look quaint. Over the past two years, 19 households have added $1.8 trillion to their coffers, the economist Gabriel Zucman told me — roughly the size of the economy of Australia.

Into this fragile state enters artificial intelligence. It threatens to make a bad situation much worse.
Left on its current course, A.I. could deliver a bleak picture: lower- and middle-income jobs automated away, with top earners remaining unscathed. Income shifting from middle-wage workers doing the bulk of the labor toward those wealthy enough to bankroll the technology. Growth headwinds. Worsening affordability. So, too, a federal government less able to respond, thanks to a shrinking tax base.
For any society in which this much wealth gets concentrated in so few hands, and is then so easily parlayed into political clout, the question becomes one not just of economics but of basic civic standing. At some point soon, we are no longer sharing in self-government.
Labels:
Critical Thought,
Politics,
Psychology,
Technology
Fed Up. Finally
"I'm SICK of this shit... can't he just behave like a normal human?"

[ed. See also: Trump Lashes Out at Prominent Conservatives Over Iran War Criticism (NYT). It's starting to look like Hitler in the bunker time. The only question is why they thought he was somebody different.]
Joke of the Day: Prediction Markets
White House staff were warned last month not to use insider information to place bets on predictions markets.
The email was sent to staff on 24 March, a day after US President Donald Trump announced a five-day pause on his threat to attack Iranian power plants and energy infrastructure.
It referred to press reports that raised concerns over government officials using non-public information to place bets on platforms like Kalshi or Polymarket.
White House spokesman Davis Ingle told the BBC that "any implication that Administration officials are engaged in such activity without evidence is baseless and irresponsible reporting."
The Wall Street Journal first reported the email on Thursday.
Ingle also said that all federal employees are subject to government ethics guidelines that prohibit the use of insider information for financial gain.
"The only special interest that will ever guide President Trump is the best interest of the American people," he added.
The BBC has contacted Kalshi and Polymarket for comment.
by Osmond Chia, BBC | Read more:
Image: via
[ed. That's some weapons-grade PR spin right there. Of course they know who placed those bets. Despite current directors, the FBI and CIA aren't stupid. They just don't want it to be too obvious.]
Claude Mythos: The System Card
Claude Mythos is different.
This is the first model since GPT-2 that, at least initially, is not being released for public use at all.
With GPT-2 the delay was due to a general precautionary principle. OpenAI did not know what they had, or what effect on-demand text generation would have on various systems. It sounds funny now, GPT-2 was harmless, but at the time the concern was highly reasonable.
The decision not to release Claude Mythos is not about an amorphous fear. If given to anyone with a credit card, Claude Mythos would give attackers a cornucopia of zero-day exploits for essentially all the software on Earth, including every major operating system and browser. It would be chaos.
Or, in theory, if Anthropic had chosen to do so, it could have used those exploits. Great power was on offer, and that power was refused. This does not happen often.
Instead Anthropic has created Project Glasswing. Mythos is being given only to cybersecurity firms, so they can patch the world’s most important software. Based on how that goes, we can then decide if and when it will become reasonable to give access to a broader range of people.
Who counts as this ‘we’ is suddenly quite the interesting question. The government picked quite the month to decide to try and disentangle itself from all Anthropic products. Anthropic says it is attempting to work with the government, so that they too can fix their own systems before it is too late. Hopefully that can happen. I also hope that there isn’t an attempt by the government to hijack these capabilities to use them in an offensive capacity. That would be a very serious mistake.
Am I taking Anthropic’s word for all this? Yes, I am taking Anthropic’s word for all of this. They’ve given us sufficient public demonstrations, identifying numerous bugs, and they’ve gotten the cooperation of the world’s biggest tech and cybersecurity firms, and if it wasn’t real then the whole thing would quickly and obviously backfire. I think it is safe to assume that all of this is legitimate.
[ed. See also: AI #163: Mythos Quest]
by Zvi Mowshowitz, Don't Worry About the Vase | Read more:
Image: Mythos self portrait (Anthropic)
Thursday, April 9, 2026
Hot Ticket
Jeffrey Epstein’s web of influence stretched from European palaces to Ivy League universities and Wall Street banks, but there was apparently at least one little corner of the establishment that seems to have been beyond his reach: Augusta National. In July 2019, Epstein sent an iMessage to Steve Bannon asking for his help with a particularly difficult problem. “Need to work magic to get brad Karp admitted to augusta golf club,” Epstein wrote. “The head of Paul Weiss Brad Karp?” Bannon replied. “Yes.”
Karp, the former chair of the legal firm Paul, Weiss, Rifkind, Wharton & Garrison, stepped down from his position in February because of his ties with Epstein.
[ed. Master's weekend. Glad there's still one institution left with some balls.]
Bannon and Epstein talked it over for an hour. Bannon suggested that Karp’s “best shot” was to “take a strong interest in amateur golf”; Epstein complained that some of the existing Augusta members who might help, like Bill Gates, “have no sway”, and asked “Who s their senator” as if they might. Bannon explained that he thinks the club is run by “7 Atlanta and Augusta families”, whom he calls “crackers” from the “Old south” who are prejudiced against “lawyers and investment bankers”. The heart bleeds.
It’s a pungently obnoxious conversation, racist and misogynistic, and at the end of it, Bannon and Epstein are absolutely no closer to figuring out how to go about getting in.
Which is one of the great lessons of Augusta National. Money only goes so far. It is, even now, just about the only sports event in the US where you don’t need to worry that Donald Trump is going to decide to put in an appearance.
If Brad Karp and his ilk are busy worrying about how to get a club membership, most of the rest of us would settle for just making it inside the gates during Masters week. But admittance, like everything else around here, is done according to its own peculiar set of rules. Most of the tickets go to lifetime patrons from the local community, who own badges passed down through the generations along with grandad’s pocket watch. That route in was shut back in the 1970s. The other is the annual lottery, and your odds of winning it make Tiger’s chances of a sixth Green Jacket look good this year.
Officially that’s it. Unofficially, anyone who’s willing to spend enough was usually able to pick one up from one of the touts who camp out on the easements down by the interstate, just beyond the reach of the 2,700ft boundary that makes scalping near the property a criminal offence under Georgia law. Reselling tickets is against the terms and conditions, but the loophole was that anyone who bought one could always insist they had received it as a gift. In recent years, though, resale has become an industrial business, and second-hand tickets have been appearing on the internet where they sell for as much as 50 times their face value.
Until, that was, Augusta’s members decided they had had enough of other companies making the profit the club have chosen to forgo by keeping the actual admission prices so low. The Sunday of last year’s Masters was described as a “bloodbath” by an executive working for one of the hospitality companies in the area, as hundreds of paying customers found they were detained, and even refused entry, at the club gates because they had come on someone else’s ticket. According to industry reports, as many as 200 ticket holders were turned away on the day.
Some said they were taken into a room and asked to hand over their identity documents before being grilled about how they got their tickets, and where they were staying during the tournament. One person said it was like being pulled over by the police. Some were let in anyway, others say they were turned away. As is the way at Augusta, it’s almost impossible to get a straight answer from anyone at the club about exactly what’s going on and, in the absence of any information, there are an awful lot of rumours about the club’s crackdown on the market.
They say the four-day tickets have radio-frequency identification chips in them, and that the club were able to trace all the ones that were being returned back to a single geographical location each evening before being used again by someone else the next day. They say the information contained in the barcodes includes the buyer’s address. They say the club are employing undercover agents to idly ask patrons where they picked up their tickets while they are walking around the grounds.
The other theory is the club are buying up a lot of the resale tickets themselves just so they can find out the names of the people who put them up for sale. The letter they send out is a masterpiece of Masters manners, thanking the recipient for their support and patronage over the years before informing them that they are now permanently banned from the grounds.
by Andy Bull, The Guardian | Read more:
Image: Mike Blake/Reuters

Ghost Murmur
At a press conference on Monday, CIA Director John Ratcliffe disclosed that the agency had used “exquisite technologies that no other intelligence service in the world possesses” to find and rescue the second American airman shot down in southern Iran. Ratcliffe likened it to “hunting for a single grain of sand in the middle of a desert.”
President Trump gushed about the technical wizardry the CIA deployed, claiming it was able to locate the airman from “40 miles away.”
“It’s like finding a needle in a haystack, finding this pilot, and the CIA was unbelievable,” Trump said Monday. “The CIA was very responsible for finding this little speck.”
Later that same day, the New York Post revealed in an exclusive report that the tool Ratcliffe was referring to was something called “Ghost Murmur.”
“The secret technology uses long-range quantum magnetometry to find the electromagnetic fingerprint of a human heartbeat and pairs the data with artificial intelligence software to isolate the signature from background noise,” two sources close to the breakthrough told the Post.
I’m calling bullshit.
Perhaps the CIA does have a tool called Ghost Murmur. Maybe it can detect faint signals from a not-too-far distance away. But it didn’t locate the downed airman from 40 miles away as Trump suggested. Nor can it locate a heartbeat across 1,000 square miles of desert, as one of the Post’s sources claimed. Not unless the CIA has figured out how to rewrite the laws of physics.
A heartbeat does produce a magnetic signal, but don’t confuse that with the electrical signal picked up by the electrodes that get stuck to your chest in the hospital, the ones that generate the beeping waveform patterns we all recognize from The Pitt. (Great show, by the way.) The heart’s magnetic signature is far weaker than its electrical one.
The tesla is the unit used to measure the strength of a magnetic field. The Earth’s magnetic field measures about 50 microtesla. Studies I’ll get to in a minute have measured the cardiac magnetic signal at chest contact to be about 25 picotesla, already 2 million times weaker than Earth’s own field.
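Those magnitudes are easy to verify. Here is a quick sanity check, a sketch using only the figures quoted above:

```python
# Figures quoted above: Earth's magnetic field ~50 microtesla,
# cardiac magnetic signal at chest contact ~25 picotesla.
earth_field_tesla = 50e-6
cardiac_field_tesla = 25e-12

# Ratio of Earth's background field to the heart's magnetic signal.
ratio = earth_field_tesla / cardiac_field_tesla
print(f"Earth's field is {ratio:,.0f} times stronger")  # 2,000,000 times
```

In other words, the signal Ghost Murmur would need to pick out is two million times weaker than the planet-wide background it would have to pick it out of.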
Quantum sensors can detect this extraordinarily faint signal without touching the body. But only under optimal conditions and at very close range.
I’m no expert in this field, but Quantum Insider, which tracks these developments, pointed to several studies that show the limits of this technology.
One study published this year on diamond quantum magnetometry, the same technology Ghost Murmur supposedly uses, required sensors placed 1 centimeter from the chest inside a magnetically shielded room and an average of up to 12,000 heartbeats to detect a signal.
“Averaging was necessary since magnetic field recordings did not reveal the MCG signal in the NV trace in real-time,” the study reported.
In plain English: The quantum sensor could not detect a heartbeat in real time in a shielded room at one centimeter.
A 2024 study detected the heartbeat of an anesthetized rat, a weaker signal than a human heart, using a sensor placed 5 millimeters from the animal’s chest, inside a magnetic shielding cylinder, after an hour of continuous data accumulation.
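The studies don’t spell out why so many beats are needed, but the standard reason is that averaging N repetitions of a signal shrinks uncorrelated noise by roughly the square root of N. A toy simulation (with made-up amplitudes, not the studies’ actual data) illustrates the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers: the per-beat signal amplitude is 1/20 the noise level,
# so a single beat is invisible in the raw trace.
n_samples = 200
signal = 0.05 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False))

def averaged_trace(n_beats: int) -> np.ndarray:
    """Average n_beats noisy recordings of the same beat."""
    noise = rng.normal(0.0, 1.0, size=(n_beats, n_samples))
    return (signal + noise).mean(axis=0)

for n in (1, 100, 12_000):
    residual = (averaged_trace(n) - signal).std()
    # Residual noise falls roughly as 1/sqrt(n): ~1.0, ~0.1, ~0.009.
    print(f"{n:>6} beats averaged -> residual noise ~ {residual:.3f}")
```

Needing on the order of 12,000 beats is exactly what you’d expect when the single-beat signal sits far below the noise floor, which is the opposite of what real-time detection at 40 miles would require.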
Ghost Murmur supposedly detected a single beating heart, in real time, from 40 miles away, over open desert, from a moving aircraft, in an environment saturated with competing signals from the Earth’s magnetic field, electronic devices, and other living creatures. Not likely.
by Seth Hettenna, After-Action Report | Read more:
Image: White House
[ed. Interesting technology. But, why does it seem like everyone is lying these days? Hmm... maybe because they are? And everybody just expects it and lets them keep doing it?]
Two-Week Iran War Ceasefire Agreement DOA: Updates
Iran War: US Pokes Iran in the Eye with Immediate Bad Faith Dealing Over Ceasefire; Strait of Hormuz Again Closed; US Insists Talks in Pakistan On but Iran Demands Halt in Lebanon Attacks
This section from Ravid in a CNN video starts at about 8:20:
The press is amplifying market-soothing Trump claims that he has cemented a ceasefire “deal” with Iran and is on a path to a resolution of the war. But there are serious differences between what Iran has said it has agreed to, which amounts to a US capitulation, and what Trump describes. The only concession Iran appears to have made is to somewhat reduce its Strait of Hormuz transit fee. By contrast, Trump depicts the two-week ceasefire as a pause in his threat to end Iran as a civilization over a four-hour period, contingent on Iran fully opening the Strait…to which Iran has not agreed.
In addition, the Iran terms call for all hostile action to end, including of Israel against Lebanon. But Israel was not a party to this (non-convergent) agreement and is making minimally compliant noises while also reaffirming its intent to continue ethnic cleansing in Lebanon.
Now this turn of events is admittedly a lot better than where we were 24 hours ago, which was Trump threatening a bombing campaign against Iran that would have produced Iranian retaliation across the Gulf States, retaliation certain, whatever form it took, to damage energy-related infrastructure so severely as to reduce energy output for many, many years, risking, as many warned, a deep global depression and even potentially a large rollback of living standards across the globe. If nothing else, this seems to signal that Trump is on a path to a durable TACO, as in he really has decided that he needs to find the most face-saving exit he can muster. Perhaps the same way only Nixon could go to China, only Susie Wiles could produce this shift.
But just as Ukraine has agency in ending the war with Russia, so too does Israel in this conflict. This not-really-an-agreement was done over Israel’s head. Israel, like Ukraine, has ample means of sabotage. And that is before getting to the fact that Israel has never honored ceasefires it actually did agree to, save when it used one as a short pause for its military to regroup before resuming fighting...
And we also do not know where the Gulf States stand on this development. The UAE, Kuwait and the Saudis have been on board with escalation, even, by some accounts, egging Trump on.
And this view charitably assumes Trump really wants out, as opposed to simply trying to buy time, after the fiasco of what looks like a failed raid on Iran's nuclear operations, to figure out what to do next. Trump’s default is to try to keep options open and buy time. He likely still thinks that if he can contain paper oil prices, and thus hopefully gas and diesel prices in the US, he can keep pressure of various sorts on to open up another path. He may not understand that anything less than returning very soon to something close to the old normal levels of transit through the Strait of Hormuz means compounding real-economy damage. More traffic, but still less than a high level, would only slow the rate at which the harm intensifies.
Both Iran and Pakistan, the intermediary in the ceasefire, insist that the Trump administration accepted Iran’s 10-point plan as a workable basis for negotiation:
The initial reaction among Trump’s Zionist supporters and the Netanyahu government was a combination of shock and fury. The pushback started immediately on Tuesday night, and by Wednesday morning the Trump administration insisted that it had agreed to a different — yet undefined — set of 10 points. Israel made certain that the negotiations would fail by launching a vicious, murderous bombing of central and southern Lebanon.
New statement from the Speaker of Iran’s Parliament
The Trump Administration, true to form, doubled down on lying, with JD Vance and others maintaining that having Israel cease operations in Lebanon was never part of the deal. The Janta Ka clip below not only recounts how Israel launched its most savage air strikes against Lebanon ever, 100 missiles in 10 minutes, killing over 182 people as of recent reports, but also has none other than the White House’s pet Middle East stenographer, Barak Ravid, effectively calling out the falsehood. [...] This section from Ravid in a CNN video starts at about 8:20:
"Well, I think it’s not only the Iranians. The problem is that the Pakistani prime minister, when he announced the ceasefire, made it clear that Lebanon was part of the deal, which raises the question of what happened there in the negotiations if the main mediator says that Lebanon is part of the deal. I know that the Egyptian mediators and the Turkish mediators see it the same way, that Lebanon is part of this deal.
Yesterday, shortly before Trump announced a ceasefire, he called Israeli Prime Minister Netanyahu, who had sort of lost control of the process and was very nervous about this ceasefire. During that call, when Trump told him, listen, I’m going to agree to a ceasefire with Iran, Netanyahu told him, but what about Lebanon, we want to continue fighting. And Trump told Netanyahu, no problem, you can continue fighting, Lebanon is not part of this deal. So this was something that was agreed upon before the announcement of the ceasefire; it was agreed upon between Israel and the US. I heard it from both Israeli officials and US officials.
And US officials told me today that they’re not concerned about those Iranian threats to withdraw from the negotiations or to close the Strait of Hormuz again because of the situation in Lebanon. They think it will be solved and it’s not going to be a reason for the agreement to collapse."
Other sources confirm the Iranian view:
by Yves Smith, Naked Capitalism | Read more:
(previous day's report from April 8: here).
Images: Iran/X
[ed. Much more. Also this (Trump got played by Israel). And, why are we attacking Iraq again?!]
***
Here is the non-news news flash up front — The alleged ceasefire between the United States and Iran is kaput. While there has been no official announcement stipulating that it is over, trust me, it is over. The copium in the Trump administration in particular, and in Washington, DC in general, is ridiculous… Proclamations of a great military victory over Iran, without one shred of evidence that the US achieved any strategic objectives other than inspiring Iran to take control of the Strait of Hormuz and place the world economy in a supply-chain chokehold.
***
In the shadow of its wars in Iran and Lebanon, the U.S. has conducted devastating attacks on the security forces of its Iraqi ally. The March 25 assault was the sixth American attack on the Iraqi army since the launch of the war on Iran. As of April 7, there had been a total of 138 U.S. attacks on Iraq—including two additional strikes on the Iraqi army—resulting in the deaths of more than 73 PMF fighters, 10 Iraqi army soldiers, three dead from the Interior Ministry, and six dead civilians, according to Iraqi officials. For many in the country, it was starting to feel as if the U.S. had declared war on Iraq as well.
The U.S. attacks continued until the two-week ceasefire between the U.S. and Iran was announced on April 8. On April 8, as Israel struck Lebanon at least 100 times and killed hundreds, Iran refused to implement the ceasefire agreement until Israel halted its aggression against Lebanon...
“There was a political, economic, and social effect to this last war,” said an official with the Islamic Resistance of Iraq, who spoke on condition of anonymity. “Who is striking Iraq today? America, right? What is America striking in Iraq? Bases of security forces, the PMF, the army. America is destroying the Iraq it built.”
Joe Kent, former director of the Trump administration’s National Counterterrorism Center, worked closely with the Iraqi government before resigning in March in protest over the U.S. war on Iran. He said he was at a loss to explain why the U.S. military was going after such a wide variety of targets in Iraq.
“For the life of me I don’t know,” Kent told Drop Site on March 28, “a lot of targeting inside Iran comes from Israelis. I’m assuming they have done some targeting in Iraq. They didn’t invest much. It seems like blind American ignorance. Someone convinced us that everything that is PMF is an Iranian proxy. It’s people who didn’t understand the history of Iraq in the last 20 years.”
“There’s definitely no strategy there. The charge d’affaires [at the U.S. embassy in Baghdad] and his team are not this fucking dumb, there’s no way they’re advocating this, they would know the difference between the militias,” he added. “You have guys who didn’t spend that long in Iraq, or senior leaders who spent time in Iraq during the ‘surge’ and think this is their chance to settle scores.” [via: “It Seems Like Blind American Ignorance”: The New U.S. War on Iraq (Drop Site).]
No Shy Person Left Behind
American democracy has a personality problem.
At its core, our political system is a popularity contest. Elections reward those who are comfortable performing in public and on social media, projecting confidence and dominating attention. This dynamic tends to select for so-called alpha types, the charismatic and the daring, but also the entitled, the arrogant and even the narcissistic.
This raises a basic but rarely asked question: Why are we filtering out the quiet voices? And at what cost?
[ed. Interesting option. Perhaps better (and less convoluted) than attempting to create some new political party or single issue organization.]
Over the past two decades, my research on collective intelligence in politics, democratic theory and the design of our institutions shows that the system structurally excludes those I call, in my new book, “the shy.” By the shy I mean not just the natural introverts, but all the people who have internalized the idea that they lack power, that politics is not built for them, and who could never imagine running for office. That is, potentially, most of us, though predictable groups — women, the young and many minorities — are overrepresented in that category.
The early-20th-century British writer G.K. Chesterton once offered a striking and unusual metaphor for what democracy should look like. He wrote, “All real democracy is an attempt (like that of a jolly hostess) to bring the shy people out.” What would our democratic institutions look like if we took that metaphor seriously?
One answer — perhaps the most promising one we have at this time — can be found in citizens’ assemblies.
Citizens’ assemblies are large groups of ordinary people, selected by lottery, who come together to learn about a public issue, hear from experts and advocacy groups, deliberate with one another and make recommendations. Picture jury duty for politics. Through random selection, citizens’ assemblies reach deep into the body politic to bring even the initially unwilling to the table. Once seated, participants are given time, structure and support to find their voices and contribute to forming a thoughtful collective judgment.
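The selection procedure described above — a lottery drawn so that the assembly mirrors the broader population — is often implemented as stratified random sampling. A minimal sketch of the idea, using entirely hypothetical citizen records and seat quotas (the names `lottery_select`, `pool`, and `age_group` are illustrative, not from any real sortition tool):

```python
import random
from collections import Counter

def lottery_select(pool, strata_key, targets, seed=None):
    """Draw assembly members by lottery while matching demographic quotas.

    pool: list of dicts describing eligible citizens (hypothetical records).
    strata_key: field used for stratification, e.g. "age_group".
    targets: dict mapping each stratum to the number of seats it receives.
    """
    rng = random.Random(seed)  # seeded for a reproducible, auditable draw
    chosen = []
    for stratum, seats in targets.items():
        candidates = [p for p in pool if p[strata_key] == stratum]
        if len(candidates) < seats:
            raise ValueError(f"not enough candidates in stratum {stratum!r}")
        chosen.extend(rng.sample(candidates, seats))  # uniform draw within stratum
    return chosen

# Hypothetical pool: ~100 volunteers spread evenly across three age groups.
pool = [{"id": i, "age_group": g}
        for i, g in enumerate(["18-34", "35-64", "65+"] * 34)]
assembly = lottery_select(pool, "age_group",
                          {"18-34": 5, "35-64": 5, "65+": 5}, seed=42)
print(Counter(m["age_group"] for m in assembly))
```

Real processes (such as those the OECD catalogs) typically stratify on several attributes at once — age, gender, region, education — and run a two-stage lottery (random invitations, then a quota-matched draw among those who accept), but the quota-constrained random draw above is the core mechanism.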
Citizens’ assemblies are gaining traction around the world. As of 2023, the Organization for Economic Cooperation and Development had documented 733 cases of lot-based deliberative assemblies worldwide, most of them taking place over the last 20 years, in what the subtitle of an earlier report calls a “deliberative wave.”
Ireland conducted at least five of them at the national level, where they helped break political gridlock on issues ranging from same-sex marriage to abortion and climate policy. In recent years, France convened at least 19 at the regional level and three at the national level, including one on climate policy and one on end-of-life issues. (I sat on the Citizens’ Convention for Climate as a researcher-observer and was later appointed by the French government to the governance committee of the Citizens’ Convention on the End of Life.)
Citizens’ assemblies are now also spreading across the United States at the local level — from Oregon’s Citizens’ Initiative Review model to Michigan’s Independent Citizens Redistricting Commission to Washington State’s climate assembly to Petaluma’s Citizens’ Assembly in California. [...]
The benefits of these assemblies are striking. Citizens’ assemblies typically produce recommendations that are more nuanced, more pragmatic and more aligned with what the public actually wants than what currently emerges from elected legislatures. When their recommendations are put to voters in polls, as in France on climate, or referendums, as in Ireland on same-sex marriage and abortion, they usually receive overwhelming public support.
Because their members are randomly selected, citizens’ assemblies reflect the underlying values and preferences of the larger population. But what is truly fascinating is that the depolarizing and educational effects of deliberation in this nonpartisan context will sometimes sway liberal majorities toward conservative conclusions and vice versa.
In the 2019 “America in One Room” deliberative poll (a cousin of citizens’ assemblies, except bigger, shorter in duration and with the goal of generating informed policy preferences rather than actionable policy recommendations), deliberation led both Republicans and Democrats to revise their views — often substantially. Republicans shifted on immigration, with support for reducing admissions falling from 65 percent to 34 percent and backing for undocumented immigrants being forced to return to their home country before applying to work legally dropping from 79 percent to 40 percent. Democrats also changed their minds, in some cases moving away from traditionally progressive positions: support for “Baby Bonds” collapsed from 62 percent to 21 percent, backing for a $15 minimum wage fell from 82 percent to 59 percent and support for expanding Medicare dropped from 70 percent to 56 percent. These shifts show that deliberation does not push opinion in a single ideological direction but rather toward the conclusions supported by better evidence and what Jürgen Habermas used to call “the unforced force of the better argument.”
Interestingly, it is also true that where a pre-existing underlying consensus in the assembly survives deliberation, as it did in France on end-of-life issues, the outcome is nevertheless much more acceptable to the minority.
This is so because in citizens’ assemblies, minorities are given time and attention in a way that our competitive, winner-takes-all politics often does not. In the last plenary of the French convention on end-of-life issues, Soline Castel, a member of the ideological minority against assisted dying, made a point of saying: “I want to thank the 75 percent for giving us 50 percent of the final document and 50 percent of the speaking time.”
Beyond their problem-solving and depolarizing dimensions, however, citizens’ assemblies are also joyful and exciting processes that reconcile people with one another and with politics. Participants arrive as strangers; they leave as civic friends. [...]
No one is saying that we don’t also need assertive leaders — people whose personalities are so strong and charismatic that they can help persuade other people of something they would not necessarily consider otherwise. But do we need a Congress and a White House full of them?
And contrary to our intuitions, leadership need not be loud. In an experiment with student councils chosen by lottery in Bolivia, Adam Cronkright, a sortition activist with Democracy in Practice and the director of the forthcoming documentary “Goodbye Elections, Hello Democracy,” showed that leadership skills reveal themselves among students who would never have run for elections. Freed from the need to campaign, these students focused less on popularity-enhancing promises (like a cool prom) and more on concrete improvements to student life (like creating a school library, securing computer donations and establishing a student ID system to gain access to half-price bus fares).
In citizens’ assemblies, similarly, it is not necessarily the flamboyant and the know-it-alls who are the most influential or socially rewarded, though they, too, can be right and even appreciated! It’s very often the quiet, serious people who do the real work, without claiming the credit or the limelight.
Critics sometimes dismiss citizens’ assemblies as naïve or impractical, arguing that ordinary people lack the expertise to make complex decisions. But this objection misunderstands both expertise and democracy. Assemblies do not replace experts; they hear from them. Their proponents do not claim that everyone knows everything, only that when placed in the right conditions, everyone is capable of learning, deliberating and exercising judgment. Like voting, but in a more demanding form, citizens’ assemblies institutionalize a fundamental democratic premise: political equality.
Most important, citizens’ assemblies recognize that confidence should not be confused with expertise, nor shyness with ignorance. Our current system routinely entrusts complex decisions to elected officials on the basis of their confidence, ambition and visibility. Citizens’ assemblies create groups in which the shy are on par with the confident, and where the values of humility and listening are privileged. There are reasons to believe that this model is more effective.
by Hélène Landemore, NY Times | Read more:
Image: Claudia Zonta
Labels:
Critical Thought,
Government,
Politics,
Psychology,
Relationships
Jiangxi Province, China
China stands to benefit most from the war-driven energy crisis (WaPo)
Image: AFP/Getty, and Lorenzo Martinez
Wednesday, April 8, 2026
Is Strait of Hormuz Open Again? Maybe, but Few Ships Are Using It.
As the cease-fire between the United States and Iran neared the 24-hour mark, it remained unclear on Wednesday when Iran might begin allowing vessels to pass through the Strait of Hormuz, the economically vital waterway brought to a near standstill by the war.
No oil or gas tankers have traversed the strait since the cease-fire was struck on Tuesday, according to data provided to The New York Times by Kpler, a global ship-tracking firm. Four bulk carriers — vessels that carry dry cargo — did make it through.
Iranian state media said on Wednesday afternoon that the strait was “fully closed,” and that some tankers had been turned away. That report came after semiofficial outlets, affiliated with Iran’s Islamic Revolutionary Guards Corps, reported that traffic in the strait had again been halted, this time in response to a deadly wave of Israeli attacks on Lebanon.
Since those reports, no vessels have appeared to cross the strait, according to Kpler’s data. The most recent vessel to cross the waterway — a cargo ship — was tracked in the middle of the strait around 10:45 a.m. Eastern time on Wednesday, according to the maritime data.
Nikos Pothitakis, a media relations manager for Kpler, said the traffic showed that whatever the official status of the strait, it was “pretty much closed.” It was unclear why a limited traffic pattern was being observed.
Iran’s official broadcaster has said that because of mines, vessels must coordinate with the Iranian navy and use designated routes to cross the waterway. After the cease-fire was announced Tuesday, Iran’s foreign minister said safe passage through the strait would be possible if coordinated with the military, and with consideration of “technical limitations.”
The sparse traffic could also reflect the lingering jitters of mariners and their insurers, who may be wary of resuming operations until they feel more confident that it is safe.
The White House press secretary, Karoline Leavitt, added to the confusion.
Briefing reporters on Wednesday, she said news reports that the strait had been closed were “false.” Then she called for it to be reopened “immediately.” She would not answer repeated questions about who currently controlled the waterway.
After the United States and Israel launched strikes on Iran in late February, Iran began shutting down the strait, laying mines and launching sporadic attacks on ships. The waterway carries a quarter of the world’s seaborne oil and one-fifth of its gas.
On Wednesday, with the cease-fire in place, Kpler’s ship-tracking data appeared to support an Iranian state news report that a Panamanian-flagged oil tanker, the AUROURA, had been turned back. As it was transiting the strait, the data shows, the vessel changed course, making a 180-degree turn. Then it came to a halt.
by Pranav Baskar and Shirin Hakim, NY Times | Read more:
Image: Reuters
[ed. I can say with some experience that "lingering jitters" might also be caused by the cost of shipping insurance from places like Lloyd's of London.]
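[ed. Kpler doesn't say how it flags maneuvers like the AUROURA's, but spotting a 180-degree turn in position data is mechanically simple: compute the bearing between consecutive fixes and look for a near-reversal. A toy Python sketch, with invented function names and a synthetic track, not anything from Kpler:]

```python
import math

def bearing(p1, p2):
    """Initial great-circle bearing in degrees from p1 to p2, each (lat, lon)."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def detect_u_turn(track, turn_threshold=150.0):
    """Return the index of the fix where the course reverses, or None.

    track: list of (lat, lon) fixes in time order. A reversal is a change
    in bearing between consecutive legs of at least turn_threshold degrees.
    """
    for i in range(len(track) - 2):
        b1 = bearing(track[i], track[i + 1])
        b2 = bearing(track[i + 1], track[i + 2])
        delta = abs((b2 - b1 + 180) % 360 - 180)  # smallest angular difference
        if delta >= turn_threshold:
            return i + 1
    return None

# Synthetic track near the strait: heads roughly east, then reverses west.
track = [(26.5, 56.0), (26.5, 56.1), (26.5, 56.2), (26.5, 56.1), (26.5, 56.0)]
print(detect_u_turn(track))  # → 2
```

A real pipeline would also check speed-over-ground on the later fixes to confirm the vessel "came to a halt" rather than merely turning.]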
Labels:
Business,
Economics,
Government,
Military,
Security
Tuesday, April 7, 2026
Anthropic’s Restraint Is a Terrifying Warning Sign
Normally right now I would be writing about the geopolitical implications of the war with Iran, and I am sure I will again soon. But I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.
The artificial intelligence company Anthropic announced Tuesday that it was releasing the newest generation of its large language model, dubbed Claude Mythos Preview, but to only a limited consortium of roughly 40 technology companies, including Google, Broadcom, Nvidia, Cisco, Palo Alto Networks, Apple, JPMorganChase, Amazon and Microsoft. Some of its competitors are among these partners because this new A.I. model represents a “step change” in performance that has some critically important positive and negative implications for cybersecurity and America’s national security.
The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.
The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.
This is not a publicity stunt. In the run-up to this announcement, representatives of leading tech companies have been in private conversation with the Trump administration about the implications for the security of the United States and all the other countries that use these now vulnerable software systems, technologists involved told me.
For good reason. As Anthropic said in its written statement on Tuesday, in just the past month, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of A.I. progress, it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them safely. The fallout — economics, public safety and national security — could be severe.”
Project Glasswing, Anthropic’s name for the consortium, is an undertaking to work with the biggest and most trusted tech companies and critical infrastructure providers, including banks, “to put these capabilities to work for defensive purposes,” the company added, and to give the leading technology firms a head start in finding and patching those vulnerabilities.
“We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring,” Anthropic said.
My translation: Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area. We knew it was getting amazingly good at enabling anyone, no matter how computer literate, to write software code. But even Anthropic reportedly did not anticipate that it would get this good, this fast, at finding and exploiting flaws in existing code.
Anthropic said it found critical exposures in every major operating system and Web browser, many of which run power grids, waterworks, airline reservation systems, retailing networks, military systems and hospitals all over the world.
If this A.I. tool were, indeed, to become widely available, it would mean that the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — would be available to every criminal actor, terrorist organization and country, no matter how small. [...]
That is why Anthropic is giving carefully controlled versions to key software providers so they can find and fix the vulnerabilities before the bad guys do — or your kids.
At moments like this I prefer to do a deep dive with my technology tutor, Craig Mundie, a former director of research and strategy at Microsoft, a member of President Barack Obama’s President’s Council of Advisors on Science and Technology and an author, with Henry Kissinger and Eric Schmidt, of a book on A.I. called “Genesis.”
In our view, no country in the world can solve this problem alone. The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.
Such a powerful tool would threaten them both, leaving them exposed to criminal actors inside their countries and terrorist groups and other adversaries outside. It could easily become a greater threat to each country than the two countries are to each other.
Indeed, this is potentially as fundamental and significant a turning point as was the emergence of mutually assured destruction and the need for nuclear nonproliferation. The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology — a lot more than they need to worry about Russia.
This is so important and urgent that it should be a top subject on the agenda for the summit between Trump and President Xi Jinping in Beijing next month.
“What used to be the province of big countries, big militaries, big companies and big criminal organizations with big budgets — this ability to develop sophisticated cyberhacking operations — could become easily available to small actors,” explained Mundie. “What we are about to see is nothing short of the complete democratization of cyberattack capabilities.”
It means that responsible governments, in concert with the companies that build these A.I. tools and software infrastructure, need to do three things urgently, Mundie argues.
For starters, he says, we need to “carefully control the release of these new superintelligent models and make sure they only go to the most responsible governments and companies.”
Then we need to use the time this buys us to distribute defensive tools to the good actors “so that the software that runs their key infrastructure can have all their flaws found and fixed before hackers inevitably get these tools one way or another.” (By the way, the cost of fixing the vulnerabilities that are sure to be discovered in legacy software systems, like those of telephone companies, will be significant. Then multiply that across our whole industrial base.)
by Thomas Friedman, NY Times | Read more:
Image: Vincent Forstenlechner/Connected Archives
[ed. No shit Sherlock. Basically, everything that runs on software is vulnerable (including all forms of infrastructure). It's only what everyone's been saying for months now, if not years. Maybe this will finally get someone's attention, but who? Congress can't even rouse itself to engage with a war and a mentally unstable President. So all the enablers (politicians, banks, hedge funds, corporations) will finally get to meet their Frankenstein and are appropriately freaking out. See also: Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity ‘Reckoning’ (NYT):]
***
Claude Mythos Preview is already capable of carrying out autonomous security research, including scanning for and exploiting so-called zero-day vulnerabilities in critical software programs, flaws that are unknown even to the software’s developer. These efforts can often be triggered by amateurs with simple prompts. The company claims that the new model has already identified “thousands” of bugs and vulnerabilities in popular software programs, including every major operating system and browser.
One of the vulnerabilities Claude found, the company said, was a 27-year-old bug in OpenBSD, an open-source operating system that was designed to be difficult to hack. Many internet routers and secure firewalls incorporate OpenBSD’s technology. Another was a longstanding issue in a piece of popular video software that automated testing tools had scanned five million times, without finding any problems.
“This model is good at finding vulnerabilities that would be well understood and findable by security researchers,” Mr. Graham said. “At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them.”
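[ed. The detail about testing tools scanning a library five million times without finding a bug has a simple mechanical side to it: coverage-blind random fuzzing almost never gets past even a four-byte magic-number check, so the deep branches where old bugs hide go unexercised. A toy Python illustration; the format, magic bytes, and numbers are all invented, not anything from Anthropic or the article:]

```python
import random

MAGIC = b"GLYF"  # invented four-byte magic for this toy format

def parse_header(buf: bytes) -> bool:
    """Toy parser: returns True only when the input reaches the 'deep' branch.

    Real bugs tend to live past checks like this one; inputs that fail the
    magic check are rejected immediately and exercise nothing interesting.
    """
    return buf[:4] == MAGIC

def blind_fuzz(trials: int = 100_000, seed: int = 0) -> int:
    """Count how many purely random 8-byte inputs reach the deep branch."""
    rng = random.Random(seed)
    return sum(
        parse_header(bytes(rng.randrange(256) for _ in range(8)))
        for _ in range(trials)
    )

# A four-byte magic is a one-in-2**32 event per random trial, so even
# millions of blind probes essentially never reach the code behind it.
print(blind_fuzz())
```

Real fuzzers mitigate this with coverage feedback and seed corpora, which is part of why decades-old bugs behind unusual input shapes can still survive them.]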
[ed. Probably a good idea to take a few screenshots of your bank accounts before they disappear.]
Labels:
Business,
Government,
Politics,
Security,
Technology
Intraventricular CARv3-TEAM-E T Cells in Recurrent Glioblastoma
In this first-in-human, investigator-initiated, open-label study, three participants with recurrent glioblastoma were treated with CARv3-TEAM-E T cells, which are chimeric antigen receptor (CAR) T cells engineered to target the epidermal growth factor receptor (EGFR) variant III tumor-specific antigen, as well as the wild-type EGFR protein, through secretion of a T-cell–engaging antibody molecule (TEAM). Treatment with CARv3-TEAM-E T cells did not result in adverse events greater than grade 3 or dose-limiting toxic effects. Radiographic tumor regression was dramatic and rapid, occurring within days after receipt of a single intraventricular infusion, but the responses were transient in two of the three participants. (Funded by Gateway for Cancer Research and others; INCIPIENT ClinicalTrials.gov number, NCT05660369.)
***
Glioblastoma is the most aggressive primary brain tumor, and the prognosis for recurrent disease is exceedingly poor with no effective treatment options. Chimeric antigen receptor (CAR) T cells represent a promising approach to cancer because of their proven efficacy against refractory lymphoid malignant neoplasms, for which they have become the standard of care. However, the use of CAR T cells in solid tumors such as glioblastomas has been limited to date, largely owing to the challenge in targeting a single antigen in a heterogeneous disease and to immunosuppressive mechanisms associated with the tumor microenvironment.
In a previous clinical trial, we found that peripheral infusion of epidermal growth factor receptor (EGFR) variant III–specific CAR T cells (CART-EGFRvIII) safely mediated on-target effects in patients with glioblastoma. Despite this activity, no radiographic responses were observed, and recurrent tumor cells expressed wild-type EGFR protein and showed heavy intratumoral infiltration with suppressive regulatory T cells. To address these barriers, we developed an engineered T-cell product (CARv3-TEAM-E) that targets EGFRvIII through a second-generation CAR while also secreting T-cell–engaging antibody molecules (TEAMs) against wild-type EGFR, which is not expressed in the normal brain but is nearly always expressed in glioblastoma. We found in preclinical models that TEAMs secreted by CAR T cells act locally at the site where cognate antigen is engaged by the CAR T cells in the treatment of heterogeneous tumors. We also found in vitro that these molecules have the capacity to redirect even regulatory T cells against tumors. On the basis of these data, we initiated a first-in-human, phase 1 clinical study to evaluate the safety of CARv3-TEAM-E T cells in patients with recurrent or newly diagnosed glioblastoma. Here, we report the findings from a prespecified interim analysis involving the first three participants treated with this approach. [...]
Discussion
This study shows that antitumor CAR-mediated responses can be rapidly obtained in patients with glioblastoma, even in those with advanced, intraparenchymal cerebral disease. This finding contrasts with a previous report of a complete response that was observed in a patient with recurrent leptomeningeal disease who received treatment with 16 intracranial infusions of monospecific interleukin-13 receptor alpha 2 CAR T cells. It was hypothesized by the investigators of that study that the involvement of glioblastoma in the leptomeninges may have rendered the disease more responsive to intraventricular therapy. Our experience in the current study suggests that even a single dose of intraventricularly administered living drugs such as CAR T cells also have the capacity to access and mediate activity against infiltrative, parenchymal glioblastoma.
by Bryan D. Choi, M.D., Ph.D., Elizabeth R. Gerstner, M.D., Matthew J. Frigault, M.D., Mark B. Leick, M.D., Christopher W. Mount, M.D., Ph.D., Leonora Balaj, Ph.D., Sarah Nikiforow, M.D., Ph.D., Bob S. Carter, M.D., Ph.D., William T. Curry, M.D., Kathleen Gallagher, Ph.D., and Marcela V. Maus, M.D., Ph.D. NIH, National Center for Biotechnology Information | Read more:
Image: via
[ed. Only three patients (so far) and it appears sustained treatments are needed to prevent recurrence. But still, pretty interesting.]
Man vs Mist vs Mountain
Something big is happening, but nothing big is happening to me.
Throughout my "career" as a "statistician", 13 years and counting now (but how much longer?), I've always been great at stopping myself from doing useful work. At first, I worried that I didn't know enough yet to tackle interesting problems---until I started feeling that I'd forgotten too much to do "real" statistics. With LLMs that barrier is now gone and I've been finding them very useful. I have just enough context and experience to pose good questions and understand the explanations.
(BTW, I am surprised by how little students and my peers seem to use them. I am usually, willingly, cast in the role of the nay-sayer. So what's happening? Are they using them surreptitiously? Or else, why do I get more utility than others?)...
So, obviously I decided to make this situation even worse and dip my toes into THE AGENTS this month (starting with the OpenAI one). In case you haven't encountered them, these are the ~latest craze in the LLM world.
Yes, just like with chatbots, you just describe what you'd like, in words, and it gets coded. But it's not just coding programs. You can do (some parts of) academic research or you can just make small, fun ideas come to life. I recently met a girl who vibe coded a Chinese medicine app that took a photo of your tongue and told you seven things that were fucked up about your bladder.
Ultimately, however, my problem---because obviously I wouldn't bother to write this just to conclude that they're alright, would I---is that these tools are designed for people who like manipulating mental symbols in a certain way, you know, the screen-starers. Obviously this is a ridiculous complaint, not least because I am one of them... but as I get older, the screen-staring part of my brain feels like the one I want to be visiting least often. And I think it was no coincidence that I had the most fun playing with these tools when my mood was lowest.
In fact, they are addictive as hell, like a video game can feel. Everyone keeps reporting this. They dial difficulty down so much that things get a bit muddy. People who talk about these models the most often seem maniacal to me, and I think these agents can stop you from getting actual work done. When I tried using these agents for my work, I ended up solving a lot of problems, but none of these problems seemed very important in retrospect.
Clearly that is a skill issue. I have no doubt that I'll get better at it. And if your work is mediated through screens and you're good at defining what you do and don't like, these agents may be great for you.
But at the same time, it feels like a general manifestation of any sort of "life-improving" technology, which is often just about the channeling of mental disturbances. So, no, I am not banking on it making me a Nietzschean ubermensch next month, nor helping me start a billion-dollar company, nor even on having a better time. Right now it still feels net zero: for every bit of busy work that it may rescue me from, it feels like it has the potential to rob my work of meaning---or maybe even my life of life itself. "Projects" that I really care about in my life are not app-shaped or list-shaped, and in doing things, technology is always an afterthought.
by Witold Więcek, Monthly Witold | Read more:
Image: Strawberry in ASCII by Claude Sonnet 4.5 via:
[ed. More from Witold, about keeping a journal:]
***
I have been keeping a somewhat regular journal for close to 15 years now, a few pages per week usually. Most of it is very mundane, too, not even an attempt at recollection of what happened, more of a microcatalogue of internal states that feel new---I'd be less embarrassed by someone getting their paws on it than sorry for them.
Why do it? I used to call this project "long Witek", extracting what is slow-moving or semi-permanent from the detritus, the more transitory elements. I use these journals to sometimes jump back an arbitrary number of years and try to recognise myself again. In other words, I try to make myself legible to myself.
Sam Altman May Control Our Future—Can He Be Trusted?
[ed. A must read, possibly historic. Unfortunately, the accompanying visual is too weird to include here. For a more concise summary see: A history and a proposal (DWAtV)]
At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”
Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted. [...]
The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”
With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.
Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.) [...]
In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)
Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been “mistreated by a rogue board of directors.”
OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.
Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.
In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?
One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”)
An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.
We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”
by Ronan Farrow and Andrew Marantz, New Yorker | Read more:
Image: via
[ed. See also: “The problem is Sam Altman”: OpenAI Insiders don’t trust CEO (Ars Technica).]
Labels:
Business,
Critical Thought,
Journalism,
Politics,
Psychology,
Science,
Security,
Technology