Showing posts with label Military. Show all posts

Sunday, March 15, 2026

Iran War: US Strikes Kharg Island, Deploys More Marines Even as Administration Shows Desperation

Trump Administration officials besides Trump are starting to behave erratically, a sign that the recognition that the Iran war is not developing necessarily to US advantage is beginning to penetrate their embubblement and their belief in American superiority. However, the reality that the US has put the global economy at risk of a potential depression and is on track to have its military largely if not entirely run out of the Middle East is still likely beyond what key figures in the Administration can accept, cognitively and practically. Admittedly, it seems likely that some, perhaps many, top members of the armed services are better able to grasp what is happening and could help Administration leaders work through what will come as an epic shock. [ed. if they were interested in listening.]

Today we will focus on the kinetic war.

The US is still trying to project the false impression that it has escalatory dominance by attacking Kharg Island, which is at the northern end of the Persian Gulf and a major processing/production center for Iran’s oil exports. Keep in mind that none other than Ukraine war diehard hawk Keith Kellogg had told Fox News that the US could still end the war quickly and easily by taking Kharg Island, since, per him, it accounted for 80% to 90% of Iran’s oil exports. A mere look at a map shows what a batshit idea this was; we had assumed that this was messaging directed at chumps, intended to convey that the US was far from bereft of options. But apparently this Administration is of the “No idea is too misguided to be tried” school of operation.

Even so, the Administration had to admit that it hit only “military” targets and did not touch oil infrastructure. Team Trump has worked out that attacking any Iranian oil facilities would lead Iran to bomb oil infrastructure all over the Middle East. [...]

Now to Bloomberg’s Kharg Island report. Notice that the headline at the story proper (via the link from the current banner headline), Trump Strikes Iran’s Kharg Oil Hub and Urges Reopening of Hormuz, has not been updated to reflect Iran’s saber-rattling in response. From its body:
The US struck military sites on Kharg Island, from which Iran exports almost all its oil, for the first time overnight, upping the ante in a Middle East war that’s raged for more than two weeks and shows little sign of easing.

President Donald Trump said military facilities on the Persian Gulf island had been “obliterated,” adding that he chose not to hit oil infrastructure “for reasons of decency.” He threatened to do just that should Iran “do anything to interfere with the Free and Safe Passage of Ships through the Strait of Hormuz.”

Iran reacted on Saturday morning by warning it will target American-linked oil and energy facilities in the Middle East if its own petroleum infrastructure is attacked. Iranian media said all oil-industry workers on the island, which sits about 25 kilometers (16 miles) off the mainland, are safe and unharmed.
Readers no doubt took note of Trump’s admission against interest in using the word “obliterated”. Or was he trying to signal, as with the pre-agreed strike on Fordow, that this attack was meant to be performative and it was time for Iran to back off, having made its point? I doubt it but it is hard to fathom what Trump thinks he is doing, aside from desperately needing to convey that he and only he is driving events.

However, Kharg Island may not be as essential to Iran’s oil exports as the Administration’s messaging posits:


Larry Johnson gives a long form takedown in Trump’s Kharg Island Fantasy… All Bark, No Bite. Key sections:
Late on Friday Donald Trump claimed in a social media post that military facilities on Kharg Island were targeted. Read his Truth carefully:

Trump is deep into fantasy land. Yes, I think he has lost touch with reality. He admits that the oil terminals were not attacked, just some unidentified military targets…

If you don’t know it now, only one of Iran’s 5 operational oil export terminals is located on Kharg Island. According to data from the international company Kepler, the amount of oil loaded from the tanks installed on Kharg increased by 1.5 times in the past month. This suggests that Iran, by quickly emptying Kharg’s tanks, was prepared for this attack.

If Iran’s oil terminal on Kharg had been destroyed, Iran would have launched missiles at the identified oil terminals in all the countries bordering the Persian Gulf. Here’s the list:
Saudi Arabia
Ras Tanura: The largest marine oil loading center in the world; capacity: 6 million barrels per day.

Ras Al-Ju’aymah: The second most important terminal; capacity 3 to 3.6 million barrels per day.

United Arab Emirates
Fujairah: Has multiple docks and is the largest fueling center in the region.

Jebel Ali: Site for crude oil and petrochemical exports.

Qatar
Ras Laffan: The largest LNG export facility in the world.

Kuwait
Mina Al-Ahmadi: Central crude oil export terminal with deep docks and high capacity.

Bahrain
Sitra Terminal: Exports refined…
There are a couple of ways to look at this. Perhaps Trump’s lie about devastating Kharg Island is the start of his PR campaign to gaslight the American public into believing Iran is defeated, which would allow Trump to declare victory and start withdrawing US forces. That’s one possibility. Alternatively, he really believes the lie and is convinced that this latest strike will convince the Iranians to surrender.

Having said that, it is not impossible that some sort of barmy scheme is in motion:


Perhaps the clever Israeli plan is that, if the US loses enough men trying to take Kharg Island, it will commit even more troops and treasure to this burn pit? From the Wall Street Journal in More Marines and Warships Head to Middle East as Hormuz Mission Intensifies:
The Pentagon is moving additional Marines and warships to the Middle East, as Iran steps up its attacks on the Strait of Hormuz and the U.S. prepares to escort tankers through the waterway.

Defense Secretary Pete Hegseth has approved a request from U.S. Central Command, responsible for American forces in the Middle East, for an element of an amphibious-ready group and attached Marine expeditionary unit to head to the region, according to U.S. officials...

An amphibious-ready group is a fast-response unit used to conduct sea-based amphibious assaults, humanitarian aid missions and special operations. The group’s embarked Marine expeditionary unit includes more than 2,000 Marines.

In addition to the Marine unit, the Pentagon is also weighing Centcom’s request for two additional destroyers to help escort commercial ships through the strait, one of the officials said.
The New York Times reported:

About 2,500 Marines aboard as many as three warships are heading to the Middle East from the Indo-Pacific region, as Iran increases its attacks on the Strait of Hormuz, two U.S. officials said.

Now, these new attempts at escalation may appear confident. Contrast them with signs of Administration officials, other than Trump, looking as if they are coming unglued. The trigger seems to be continued pounding by Iran. Larry Johnson maintains, forcefully, that the refueler crash in Iraq, which killed six, was the result of a strike. Shortly after that (as we will show below), Iran dropped what is purported to be a 2,000 pound bomb on the US base in Saudi Arabia. We have accounts that military aircraft and five more refuelers were severely damaged. Note that more missiles may have gotten through than the one carrying the 2,000 pound munition.

by Yves Smith, Naked Capitalism |  Read more:
Images: Bloomberg; WSJ; X, TS
[ed. Israel (Netanyahu) is on a killing spree in Iran, Lebanon, Gaza, Syria and who knows where else, using American weaponry and hoping to suck the US and other countries into expanded escalation... and we've been dumb and arrogant enough to jump right in. See also: Iran has not asked for ceasefire and sees no reason for talks with US, Iranian minister says (BBC).]

Saturday, March 14, 2026

Sam Altman and OpenAI Under Fire

It’s finally happening. Altman’s bad behavior is catching up to him.

The board fired Altman, once AI’s golden boy, in November 2023 not because AGI had been achieved (that still hasn’t happened) but because he was “not consistently candid,” just like they said.

And now, at long last, the world sees what the board saw, and what I saw (and what Karen Hao saw): having someone who is not consistently candid running a company with that much power to affect the world is not a good idea.

As I warned in August of 2024, questionable character in a man this powerful is dangerous:


Altman’s two-faced two-step (“I support Dario,” while also negotiating behind his back and remaining open to surveillance) was, for many people, the last straw. Millions of people, literally, are angry; many feel betrayed. Nobody wishes to be surveilled.

In reality, Altman was never really all that interested in AI for the “benefit of humanity.” Mostly he was interested in Sam. And money, and deals. A whole lot of people have finally put that all together.

Here’s OpenAI’s head of robotics, just now:


Zoe Hitzig had resigned just a few weeks earlier, over a different set of issues that also reflected poorly on Altman’s character:


And all this was entirely predictable. Altman is bad news. It was always just a matter of time before people started realizing how serious the consequences might be.

History will judge those who stay at his company. Anyone who wants to work on LLMs can work elsewhere. Anyone who wants to use LLMs should go elsewhere.

by Gary Marcus, On AI |  Read more:
Images: The Guardian; X; NY Times
[ed. For those not paying attention, after DOD tried and failed to strong-arm Anthropic into giving them carte blanche to do anything they wanted with Anthropic's AI model Claude (then subsequently designating them a "supply chain risk"), OpenAI (and Microsoft) immediately stepped into the breach and cut a deal, the details of which are still not fully known. On their face, however, the terms appear to give DOD everything it wanted from Anthropic: mass surveillance and fully autonomous (i.e., no humans involved) operational capabilities. Altman is the head of OpenAI and its ChatGPT model.

See also: The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It (Futurism):
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it’s being barraged with an intensity of backlash that the company has never seen.

Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that gives it unrestricted access to its AI system — even in the face of government threats to seize the company’s tech. Claude quickly surged to the top of the app store, supplanting OpenAI’s chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent.
***
Also this: Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism (Guardian):
OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time...

Here’s what triggered it. Early this year, the news broke that OpenAI’s president, Greg Brockman, donated $25m to Maga Inc, Donald Trump’s biggest Super Pac. This made him Trump’s largest donor of the last cycle. When Wired asked him to explain, Brockman said his donations were in service of OpenAI’s mission to benefit “humanity.”

Let me tell you what that mission looks like in practice. Employees of ICE – the agency that was involved in the killing of two people in Minneapolis in January – have used a screening tool powered by ChatGPT. The same company behind your friendly chatbot is helping the government decide who to hire for deportation raids.

And it’s not stopping there. Brockman also helped launch a $125m lobbying initiative, a Super Pac, to make sure no state can regulate AI. It’s attacking any politician who tries to pass safety laws. It wants Trump, and only Trump, to write the rules for the most powerful technology on earth. Every month, subscription money from users around the world flows to a company that is embedding itself in the repressive infrastructure of the Trump administration. That is not a conspiracy theory. It is a business strategy.

Things got even worse last week. When the Trump administration demanded that AI companies give the Pentagon unrestricted access to their technology – including for mass surveillance and autonomous weapons – Anthropic, the company behind ChatGPT’s main competitor, Claude, refused.

The retaliation was swift and extraordinary. Trump ordered every federal agency to stop using Anthropic’s technology. Secretary of war Pete Hegseth declared the company a “supply-chain risk to national security”, a designation normally reserved for Chinese firms such as Huawei. He announced that anyone who does business with the US military is barred from working with Anthropic. This is essentially a corporate death sentence, for the crime of refusing to help build killer robots.

And what did OpenAI do? That same Friday night, while his competitor was taking a principled stance, Sam Altman quietly signed a deal with the Pentagon to take Anthropic’s place.
***
[ed. From the comments section in Marcus' post:

Shanni Bee: 
Great. Amen.

But what remains unsaid (...even by you, Mr. Marcus, from what I've seen, which is surprising) is that Anthropic are not good guys. The whole "ethical AI company" thing is nothing but vibes. Sure, Anthropic (rightly) stood up to DoW in this case, but they still have a massive contract with Palantir (pretty much one of the worst companies on earth). Colonel Claude is complicit in bombings of Iran & Venezuela + Gaza GENOCIDE.

...Or maybe with the (admittedly BS) "supply chain risk" designation, Anthropic no longer does business with Palantir? That would be great for everyone (including them).

Either way, there is NO ethical AI company. People need to stop giving Anthropic flowers for doing the right thing in this one case while completely ignoring their complicity w/ Palantir & in documented war crimes.
Gary Marcus

indeed, i have a sequel planned about that, working title “There are no heroes in commercial AI” or something like that
***
[ed. Finally, there's this little coda from Zvi Mowshowitz's DWAtV that puts everything in perspective:

It’s really annoying trying to convince people that if you have a struggle for the future against superintelligent things that You Lose. But hey, keep trying, whatever works.
Ab Homine Deus: To the "Superintelligence isn't real and can't hurt you" crowd. Let's say you're right and human intelligence is some kind of cosmic speed limit (LOL). So AI plateaus something like 190 IQ. What do you think a million instances of that collaborating together looks like?

Arthur B.: At 10,000x the speed

Noah Smith: This is the real point. AI is superintelligent because it can think like a human AND have all the superpowers of a computer at the same time...
Timothy B. Lee: I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."

Noah Smith: Yes. Regular people don't yet realize that AI people think they're building something that will destroy the human race.

Basically, about half of AI researchers are optimists, while the other half are intentionally building something they think could easily lead to their own death, the death of their children and families and friends, and the death of their entire species.

[ed. Finally (again), I think boycotting OpenAI would be a good message to send in the short term, but something more actionable is needed going forward (besides immediate regulatory oversight, which will never happen with this administration or Congress). Fortunately there's just such a movement afoot: pausing all AI research advances until they can be adequately vetted. It's called (of course) PauseAI (details here and here), with a rally planned for April 13, 2026. Please consider joining or participating.]

[ed. Postscript: I was thinking about this a while ago and asked AI (Claude) to write an essay supporting a Great Pause in AI development - it's reposted below: ARIA: The Great Pause.]

Thursday, March 12, 2026

Strait of Hormuz

Satellite view of the Strait of Hormuz, a strategic waterway between Iran and Oman that links the Persian Gulf to the Arabian Sea, through which one-fifth of the world’s oil supply passes.
Image: Gallo Images/Orbital Horizon/Copernicus Sentinel Data 2025/Getty Images
[ed. Pretty tight quarters.]

Monday, March 9, 2026

Insider Trading Is Going to Get People Killed

War markets are a national-security threat.

Ayatollah Ali Khamenei was not, it’s safe to assume, a devoted Polymarket user. If he had been, the Iranian leader might still be alive. Hours before Khamenei’s compound in Tehran was reduced to rubble last week, an account under the username “magamyman” bet about $20,000 that the supreme leader would no longer be in power by the end of March. Polymarket placed the odds at just 14 percent, netting “magamyman” a profit of more than $120,000.
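[ed. The article's numbers check out under the standard convention for binary prediction markets, where a "Yes" share bought at the implied probability (quoted in dollars) pays out $1 if the event resolves true. A minimal sketch of that arithmetic; the function name is my own, not anything from Polymarket:

```python
def binary_market_profit(stake: float, price: float) -> float:
    """Profit on a binary 'Yes' position: shares bought at `price`
    (the implied probability, in dollars) each pay out $1 on resolution."""
    shares = stake / price           # number of $1-payout shares the stake buys
    payout = shares * 1.0            # each share redeems for exactly $1
    return payout - stake            # profit = payout minus the original stake

# The "magamyman" bet: about $20,000 at 14 percent implied odds.
profit = binary_market_profit(20_000, 0.14)
print(round(profit))  # → 122857, i.e. "a profit of more than $120,000"
```

So a long-shot position roughly multiplies the stake by 1/price, which is why a correct bet against 14 percent odds returns about seven times the money.]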

Everyone knew that an attack might be in the works—some American aircraft carriers had already been deployed to the Middle East weeks ago—but the Iranian government was caught off guard by the timing. Although the ayatollah surely was aware of the risks to his life, he presumably did not know that he would be targeted on this particular Saturday morning. Yet on Polymarket, plenty of warning signs pointed to an impending attack. The day before, 150 users bet at least $1,000 that the United States would strike Iran within the next 24 hours, according to a New York Times analysis. Until then, few people on the platform were betting that kind of money on an immediate attack.

Maybe all of this sounds eerily familiar. In January, someone on Polymarket made a series of suspiciously well-timed bets right before the U.S. attacked a foreign country and deposed its leader. By the time Nicolás Maduro was extracted from Venezuela and flown to New York, the user had pocketed more than $400,000. Perhaps this trader and the Iran bettors who are now flush with cash simply had the luck of a lifetime—the gambling equivalent of making a half-court shot. Or maybe they knew what was happening ahead of time and flipped it for easy money. We simply do not know.

Polymarket traders swap crypto, not cash, and conceal their identities through the blockchain. Even so, investigations into insider trading are already under way: Last month, Israel charged a military reservist for allegedly using classified information to make unspecified bets on Polymarket.

The platform forbids illegal activity, which includes insider trading in the U.S. But with a few taps on a smartphone, anyone with privileged knowledge can now make a quick buck (or a hundred thousand). Polymarket and other prediction markets—the sanitized, industry-favored term for sites that let you wager on just about anything—have been dogged by accusations of insider trading in markets of all flavors. How did a Polymarket user know that Lady Gaga, Cardi B, and Ricky Martin would make surprise appearances during the Super Bowl halftime show, but that Drake and Travis Scott wouldn’t? Shady bets on war are even stranger and more disturbing. They risk unleashing an entirely new kind of national-security threat. The U.S. caught a break: The Venezuela and Iran strikes were not thwarted by insider traders whose bets could have prompted swift retaliation. The next time, we may not be so lucky. [...]

Any insiders who put money down on impending war may not have thought that they were giving anything away. An anonymous bet that reeks of insider trading is not always easy to spot in the moment. After the suspicious Polymarket bets on the Venezuela raid, the site’s forecast placed the odds that Maduro would be ousted at roughly 10 percent. Even if Maduro and his team had been glued to Polymarket, it’s hard to imagine that such long odds would have compelled him to flee in the middle of the night. And even with so many people betting last Friday on an imminent strike in Iran, Polymarket forecasted only a 26 percent chance, at most, of an attack the next day. What’s the signal, and what’s the noise?

In both cases, someone adept at parsing prediction markets could have known that something was up. “It’s possible to spot these bets ahead of time,” Rajiv Sethi, a Barnard College economist who studies prediction markets, told me. There are some telltale behaviors that could help distinguish a military contractor betting off a state secret from a college student mindlessly scrolling on his phone after one too many cans of Celsius. Someone who’s using a newly created account to wager a lot of money against the conventional wisdom is probably the former, not the latter. And spotting these kinds of suspicious bettors is only getting easier. The prediction-market boom has created a cottage industry of tools that instantaneously flag potential insider trading—not for legal purposes but so that you, too, can profit off what the select few already know.
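[ed. The telltale signs Sethi describes (a newly created account, a large wager, a position against the conventional wisdom) amount to a simple screening rule. A minimal sketch of how such a flagger might combine them; the `Bet` fields and every threshold here are hypothetical illustrations, not anything Polymarket or the commercial flagging tools actually expose:

```python
from dataclasses import dataclass

@dataclass
class Bet:
    account_age_days: int   # how long the betting account has existed
    stake_usd: float        # size of this wager in dollars
    side_prob: float        # market-implied probability of the side taken

def looks_suspicious(bet: Bet,
                     max_age_days: int = 30,
                     min_stake: float = 1_000.0,
                     max_consensus: float = 0.25) -> bool:
    """Flag the pattern the article describes: a new account wagering
    a lot of money against the conventional wisdom. All three cutoffs
    are illustrative, not empirically calibrated."""
    return (bet.account_age_days <= max_age_days
            and bet.stake_usd >= min_stake
            and bet.side_prob <= max_consensus)

# A fresh account betting $20,000 on a 14%-probability outcome trips the rule.
print(looks_suspicious(Bet(3, 20_000, 0.14)))   # True
# A long-standing account making a small bet with the crowd does not.
print(looks_suspicious(Bet(400, 50, 0.60)))     # False
```

The point of the cottage-industry tools is that this kind of rule is trivial to run in real time against a public order flow, which is exactly why leaked state secrets become visible to anyone watching.]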

Unlike Kalshi, the other big prediction-market platform, Polymarket can be used in the U.S. only through a virtual private network, or VPN. In effect, the site is able to skirt regulations that require tracking the identities of its customers and reporting shady bets to the government. In some ways, insider trading seems to be the whole point: “What’s cool about Polymarket is that it creates this financial incentive for people to go and divulge the information to the market,” Shayne Coplan, the company’s 27-year-old CEO, said in an interview last year. (Polymarket did not respond to a request for comment.)

Consider if the Islamic Revolutionary Guard Corps had paid the monthly fee for a service that flagged relevant activity on Polymarket two hours before the strike. The supreme leader might not have hosted in-person meetings with his top advisers where they were easy targets for missiles. [...]

Maybe this all sounds far-fetched, but it shouldn’t. “Any advance notice to an adversary is problematic,” Alex Goldenberg, a fellow at the Rutgers Miller Center who has written about war markets, told me. “And these predictive markets, as they stand, are designed to leak out this information.” In all likelihood, he added, intelligence agencies across the world are already paying attention to Polymarket. Last year, the military’s bulletin for intelligence professionals published an article advocating for the armed forces to integrate data from Polymarket to “more fully anticipate national security threats.” After all, the Pentagon already has some experience with prediction markets. During the War on Terror, DARPA toyed with creating what it billed the “Policy Analysis Market,” a site that would let anonymous traders bet on world events to forecast terrorist attacks and coups. (Democrats in Congress revolted, and the site was quickly canned.)

Now every adversary and terrorist group in the world can easily access war markets that are far more advanced than what the DOD ginned up two decades ago. What makes Polymarket’s entrance into warfare so troubling is not just potential insider trading from users like “magamyman.” If governments are eyeing Polymarket for signs of an impending attack, they can also be led astray. A government or another sophisticated actor wouldn’t need to spend much money to massively swing the Polymarket odds on whether a Gulf state will imminently strike Iran—breeding panic and paranoia. More fundamentally, prediction markets risk warping the basic incentives of war, Goldenberg said. He gave the example of a Ukrainian military commander making less than $1,000 a month, who could place bets that go against his own military’s objective. “Maybe you choose to retreat a day early because you can double, triple, or quadruple your money and then send that back to your family,” he said.

by Saahil Desai, The Atlantic | Read more:
Image: Matteo Giuseppe Pani/The Atlantic
[ed. For other examples, see also: Mantic Monday: Groundhog Day (ACX). Also: How to Prevent Insider Trading on Trump’s Wars (New Yorker); and, America Is Slow-Walking Into a Polymarket Disaster (Atlantic).]

Sunday, March 8, 2026

Clawed

How to Commit Corporate Murder

I.

A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he got heart surgery and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.

He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, the product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel.

My mother and I exchanged knowing glances, but we never said the obvious nor asked any questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.

I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.

A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither are discrete events. They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.

II.

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. More excerpts below. See also: Why the Pentagon Wants to Destroy Anthropic (NYT), Ezra Klein interviews Dean Ball (with a follow-up essay: The Future We Feared is Already Here). And, for a more comprehensive assessment of what the AI community thinks: Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk (DWAtV).]
***
"... Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well of course, one day, we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish." [...]

... DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.

War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will. The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” not to mention have the other litany of policy obstacles the government can throw at you.

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.
***
[ed. One more thing. The guy who created this whole stupid dispute? Not Hegseth, he doesn't know shit about shit. It's disgraced former Uber manager Emil Michael. A real piece of work (so of course, he fits right in).]

Saturday, March 7, 2026

World Monitor

How a Music Streaming CEO Built an Open-Source Global Threat Map in His Spare Time. Frustrated by fragmented war news, Anghami’s Elie Habib built World Monitor, a platform that fuses global data, like aircraft signals and satellite detections, to track conflicts as they unfold.

Elie Habib doesn’t work in the defense or intelligence industries. Instead, he runs Anghami, one of the Middle East’s largest music streaming platforms. But as missiles began flying across the region, a side project he coded earlier this year suddenly became something bigger: an open-source dashboard people around the world were using to track the war in real time.

The engineer-turned-executive built the system, called World Monitor, to make sense of chaotic geopolitical news. Instead, it went viral. [...]

The idea emerged as headlines began colliding in ways that felt impossible to follow. “The news became genuinely hard to parse,” he says. “Iran, Trump’s decisions, financial markets, critical minerals, tensions compounding from every direction simultaneously.”

Traditional media wasn’t solving the problem he had in mind. “I didn’t need a news aggregator,” he says. “I needed something that showed me how these events connect to each other in real time. The existing OSINT tools that did this cost governments and large enterprises tens of thousands of dollars annually.” [...]

The platform processes a messy stream of global data, bypassing social media noise to pull facts directly from the source.

“The system ingests 100-plus data streams simultaneously,” Habib notes. The result is a constantly updating map of global tensions: conflict zones with escalation scores, military aircraft broadcasting positions through ADS-B transponders, ship movements tracked through AIS signals, nuclear installations, submarine cables, internet outages and satellite fire detections.

“Everything is normalized, geolocated and rendered on a WebGL globe capable of displaying thousands of markers without frame drops,” Habib says...
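The article shows none of World Monitor's code, but the pipeline Habib describes — many feed formats normalized into geolocated, timestamped events — can be sketched in a few lines. Everything below (the `Event` shape, field names, stream labels) is an illustrative assumption, not the actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """One normalized, geolocated event, ready to render as a map marker."""
    source_type: str      # e.g. "adsb", "ais", "fire", "outage"
    lat: float
    lon: float
    timestamp: datetime
    payload: dict

def normalize_adsb(raw: dict) -> Event:
    # ADS-B position reports carry lat/lon directly (hypothetical field names)
    return Event("adsb", raw["lat"], raw["lon"],
                 datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
                 {"callsign": raw.get("callsign")})

def normalize_ais(raw: dict) -> Event:
    # AIS ship-position messages, again with invented field names
    return Event("ais", raw["LAT"], raw["LON"],
                 datetime.fromtimestamp(raw["TIME"], tz=timezone.utc),
                 {"mmsi": raw.get("MMSI")})

# One small normalizer per feed; the ingest loop just dispatches on stream
# type, which is how "100-plus data streams" can share one rendering path.
NORMALIZERS = {"adsb": normalize_adsb, "ais": normalize_ais}

def ingest(stream_type: str, raw: dict) -> Event:
    return NORMALIZERS[stream_type](raw)
```

Once every feed collapses to the same event shape, the rendering layer only ever has to draw one kind of object, whatever the upstream source.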

When the War Hit

Before the missiles started flying, people used the map for very specific reasons. Traders tracked cargo ships to monitor supply chains, while engineers watched power grids and infrastructure networks. “One sports bar runs it on their TVs when there are no games,” Habib says.

But when joint US-Israeli military strikes hit Iran in late February—disrupting maritime logistics and forcing commercial airspace to clear—the platform’s role changed almost overnight.

What had been a curiosity for analysts and hobbyists became a live threat monitor. Casual observers began watching active escalations unfold in real time.

How the Map Verifies Reality

Processing hundreds of live data streams during a military conflict raises a question: How do you verify information fast enough to keep the system moving?

Habib’s answer was to remove human editors entirely. “Zero editorializing,” he says. “No human editor makes a call.”

Instead, Habib says the platform relies on a strict source hierarchy. Wire services and official channels such as Reuters, AP, the Pentagon and the UN sit at the top tier. Major broadcasters including the BBC and Al Jazeera follow, along with specialist investigative outlets such as Bellingcat. In total, he says the system processes about 190 sources, assigning higher confidence scores to more reliable ones.

Software then scans incoming reports for major events and emerging patterns. If multiple credible sources report the same development within minutes, the system flags it as a breaking alert. But headlines alone are not enough.
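The corroboration rule described here — several credible sources reporting the same development within minutes — amounts to a weighted vote inside a sliding time window. A minimal sketch, with the tier weights, window, and threshold all invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical confidence weights by source tier (wire services on top,
# mirroring the hierarchy described above)
TIER_WEIGHT = {"wire": 1.0, "broadcaster": 0.7, "investigative": 0.7, "other": 0.3}

def is_breaking(reports, window_minutes=15, threshold=2.0):
    """reports: list of (source_tier, datetime) for one clustered story.
    Returns True if enough weighted, credible reports land inside
    any sliding window of `window_minutes`."""
    reports = sorted(reports, key=lambda r: r[1])
    window = timedelta(minutes=window_minutes)
    for i, (_, t0) in enumerate(reports):
        weight = sum(TIER_WEIGHT.get(tier, 0.3)
                     for tier, t in reports[i:] if t - t0 <= window)
        if weight >= threshold:
            return True
    return False
```

Under these made-up numbers, two wire-service reports five minutes apart clear the bar; a single low-tier post never does.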

Because online claims can be unreliable, the platform also looks for physical signals on the ground. It tracks disruptions such as internet blackouts, diverted military flights, halted cargo ships and satellite-detected fires. “A convergence algorithm then checks how many distinct signal types activate in the same geography simultaneously,” Habib says.

“One signal is noise. Three or four converging in the same location is the signal worth surfacing,” Habib says. If an internet outage coincides with diverted aircraft and a satellite heat signature in the same area, the map flags a potential escalation.
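The convergence check itself can be sketched as a small spatio-temporal bucketing step: bin events into coarse grid cells and time windows, then surface any cell where several distinct signal types fire together. The cell size, window, and three-signal threshold below are guesses at the behavior described, not the real parameters:

```python
from collections import defaultdict
from datetime import datetime

def converging_signals(events, cell_deg=0.5, window_secs=6 * 3600, min_types=3):
    """events: list of (signal_type, lat, lon, datetime).
    Returns the grid/time buckets where at least `min_types` DISTINCT
    signal types (outage, diverted flight, satellite fire, ...) co-occur:
    one signal is noise, three or four converging is the signal."""
    buckets = defaultdict(set)  # (lat_cell, lon_cell, time_bucket) -> {types}
    for sig_type, lat, lon, ts in events:
        key = (round(lat / cell_deg), round(lon / cell_deg),
               int(ts.timestamp() // window_secs))
        buckets[key].add(sig_type)
    return [key for key, types in buckets.items() if len(types) >= min_types]
```

An internet outage, a diverted flight, and a satellite heat detection in the same half-degree cell and window yield one flagged bucket; a lone outage yields nothing.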

by Lilian Wagoy, Wired |  Read more:
Image: World Monitor
[ed. Example here. Also, just as an aside (since World Monitor was created by a music streaming CEO) I'd like to highlight once again the totally awesome Radio Garden. I've been using this streaming app ever since I got it, exploring and listening to FM music stations all over the world.]

Saturday, February 28, 2026

Hissy Fit


The public spat between the Pentagon and Anthropic began after Axios reported that US military leaders used Claude to assist in planning the operation to capture Venezuela’s president, Nicolás Maduro. After the operation, an employee at Palantir relayed concerns from an Anthropic staffer to US military leaders about how its models had been used. Anthropic has denied ever raising concerns or interfering with the Pentagon’s use of its technology. (Ars Technica).

It is perfectly legitimate for the Department of War to decide that it does not wish to continue on Anthropic’s terms, and that it will terminate the contract. There is no reason things need be taken further than that.
Undersecretary of State Jeremy Lewin: This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment.

Timothy B. Lee: OK, so don't renew their contract. Why are you threatening to go nuclear by declaring them a supply chain risk?

Dean W. Ball: As I have been saying repeatedly, this principle is entirely defensible, and this is the single best articulation of it anyone in the administration has made.

The way to enforce this principle is to publicly and proudly decline to do business with firms that don’t agree to those terms. Cancel Anthropic’s contract, and make it publicly clear why you did so.

Right now, though, USG’s policy response is to attempt to destroy Anthropic’s business, and this is a dire mistake for both practical and principled reasons.
Dario Amodei and Anthropic responded to this on Thursday the 26th with this brave and historically important statement that everyone should read.

The statement makes clear that Anthropic wishes to work with the Department of War, and that they strongly wish to continue being government contractors, but that they cannot accept the Department of War’s terms, nor do any threats change their position. Response outside of DoW was overwhelmingly positive.

by Zvi Mowshowitz, DWAtV |  Read more:
Image: Truth Social
[ed. Another rant from the Mad King™. Anthropic had a contract with DOD that included terms DOD now wants to renege on. Just cancel the damn contract. See also: Statement from Dario Amodei on our discussions with the Department of War (Anthropic). My admiration for Amodei and Anthropic has gone up tenfold in the last two weeks. What's at stake (DWAtV):]
***

Axios calls this a ‘first step towards blacklisting Anthropic.’

I would instead call this the start of a common-sense first step you would take long before you actively threaten to slap a ‘supply chain risk’ designation on Anthropic. It indicates that the Pentagon has not done the investigation of ‘exactly how big of a cluster**** would this be,’ and I highly encourage them to check.
Divyansh Kaushik: Are we seriously going to label Anthropic a supply chain risk but are totally fine with Alibaba/Qwen, Deepseek, Baidu, etc? What are we doing here?
An excellent question. Certainly we can agree that Alibaba, Qwen, Deepseek or Baidu are all much larger ‘supply chain risks’ than Anthropic. So why haven’t we made those designations yet? [...]

This goes well beyond those people entirely ignoring existential risk. The Very Serious People are denying the existence of powerful AI, or transformational AI, now and in the future, even on a mundane level, period. Dean came in concerned about impacts on developing economies in the Global South, and they can’t even discuss that.
Dean W. Ball: At some point in 2024, for reasons I still do not entirely understand, global elites simply decided: “no, we do not live in that world. We live in this other world, the nice one, where the challenges are all things we can understand and see today.”

Those who think we might live in that world talk about what to do, but mostly in private these days. It is not considered polite—indeed it is considered a little discrediting in many circles—to talk about the issues of powerful AI.

Yet the people whose technical intuitions I respect the most are convinced we do live in that world, and so am I.
The American elites aren’t quite as bad about that, but not as bad isn’t going to cut it.

We are indeed living in that world. We do not yet know which version of it, or if we will survive in it for long, but if you want to have a say in that outcome you need to get in the game. If you want to stop us from living in that world, that ship has sailed, and to the extent it hasn’t, the first step is admitting you have a problem.
But the question is very much “what are autonomous swarms of superintelligent agents going to mean for our lives?” as opposed to “will we see autonomous swarms of superintelligent agents in the near future?”
What it probably means for our lives is that it ends them. What it definitely doesn’t mean for our lives is going on as before, or a ‘gentle singularity’ you barely notice.

Elites that do not talk about such issues will not long remain elites. That might be because all the humans are dead, or it might be because they wake up one morning and realize other people, AIs or a combination thereof are the new elite, without realizing how lucky they are to still be waking up at all.

I am used to the idea of Don’t Look Up for existential risk, but I haven’t fully internalized how much of the elites are going Don’t Look Up for capabilities, period.

Friday, February 27, 2026

China's DeepSeek Trained AI Model On Nvidia's Best Chip Despite US Ban

[ed. As predicted. China got the chips, Trump and Witkoff got the millions.]

Chinese AI startup DeepSeek's latest AI model, set to be released as soon as next week, was trained on Nvidia's (NVDA.O) most advanced AI chip, the Blackwell, a senior Trump administration official said on Monday, in what could represent a violation of U.S. export controls.

The U.S. believes DeepSeek will remove the technical indicators that might reveal its use of American AI chips, the official said, adding that the Blackwells are likely clustered at its data center in Inner Mongolia, an autonomous region of China.

The person declined to say how the U.S. government received the information or how DeepSeek obtained the chips, but emphasized that U.S. policy is: "we're not shipping Blackwells to China."

Nvidia declined to comment, while the Commerce Department and DeepSeek did not respond to requests for comment. [...]

U.S. government confirmation of DeepSeek obtaining the chips, first reported by Reuters, could further divide Washington policymakers as they struggle to determine where to draw the line on Chinese access to the crown jewels of American AI semiconductor chips.

White House AI Czar David Sacks and Nvidia CEO Jensen Huang argue that shipping advanced AI chips to China discourages Chinese competitors like Huawei from redoubling efforts to catch up with Nvidia's and AMD's technology.

But China hawks fear chips could easily be diverted from commercial uses to help supercharge China's military and threaten U.S. dominance in AI.

"This shows why exporting any AI chips to China is so dangerous," said Chris McGuire, who served as a White House National Security Council official under former President Joe Biden.

"Given China's leading AI companies are brazenly violating U.S. export controls, we obviously cannot expect that they will comply with U.S. conditions that would prohibit them from using chips to support the Chinese military," he added.

US CONCERNS

U.S. export controls, overseen by the Commerce Department, currently bar Blackwell shipments to China.

In August, U.S. President Donald Trump opened the door to Nvidia selling a scaled-down version of the Blackwell in China. But he later reversed course, suggesting the firm's most advanced chips should be reserved for U.S. companies and kept out of China.

Trump's decision in December to allow Chinese firms to buy Nvidia's second most advanced chips, known as the H200, drew sharp criticism from China hawks, but shipments of the chips remain stalled over guardrails built into the approvals.

"Chinese AI companies' reliance on smuggled Blackwells underscores their massive shortfall of domestically produced AI chips and why approvals of H200 chips would represent a lifeline," said Saif Khan, who served as director of technology and national security at the White House's National Security Council under former President Joe Biden. [...]

Hangzhou-based DeepSeek shook markets early last year with a set of AI models that rivaled some of the best offerings from the U.S., fueling concerns in Washington that China could catch up in the AI race despite restrictions.

The Information previously reported that DeepSeek had smuggled chips into China to train its next model. Reuters is reporting for the first time on the U.S. government's confirmation of the chips' use for that purpose in DeepSeek's Inner Mongolia-based facility.

by Steve Holland and Alexandra Alper, Reuters |  Read more:
Image: Reuters/Dado Ruvic/Illustration
[ed. How did they get these chips? Anatomy of Two Giant Deals: The U.A.E. Got Chips. The Trump Team Got Crypto Riches (NYT):]
***
At the heart of their relationship are two multibillion-dollar deals. One involved a crypto company founded by the Witkoff and the Trump families that benefited both financially. The other involved a sale of valuable computer chips that benefited the Emirates economically. [...]

In May, Mr. Witkoff’s son Zach announced the first of the deals at a conference in Dubai. One of Sheikh Tahnoon’s investment firms would deposit $2 billion into World Liberty Financial, a cryptocurrency start-up founded by the Witkoffs and Trumps.

Two weeks later, the White House agreed to allow the U.A.E. access to hundreds of thousands of the world’s most advanced and scarce computer chips, a crucial tool in the high-stakes race to dominate artificial intelligence. Many of the chips would go to G42, a sprawling technology firm controlled by Sheikh Tahnoon, despite national security concerns that the chips could be shared with China. [...]

Mr. Trump made no public mention of the $2 billion transaction with his family company.

The Pentagon Threatens Anthropic

Here’s my understanding of the situation:

Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic’s Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic’s AIs available for “all lawful purposes”. Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening “consequences” if they refused. These consequences are generally understood to be some mix of:
  • canceling the contract
  • using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
  • the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
Needless to say, I support Anthropic here. I’m a sensible moderate on the killbot issue (we’ll probably get them eventually, and I doubt they’ll make things much worse compared to AI “only” having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go.

More important, I don’t want the Pentagon to destroy Anthropic. Partly this is a generic belief: the “supply chain risk” designation was intended as a defense against foreign spies, and it’s pathetic Third World bullshit to reconceive it as an instrument that lets the US government destroy any domestic company it wants, with no legal review, because they don’t like how contract negotiations are going. But partly it’s because I like Anthropic in particular - they’re the most safety-conscious AI company, and likely to do a lot of the alignment research that happens between now and superintelligence. This isn’t the hill I would have chosen to die on, but I’m encouraged that they even have a hill. AI companies haven’t been great at choosing principles over profits lately. If Dario is capable of having a spine at all, in any situation, then that makes me more confident in his decision-making in other cases, and makes him a precious resource that must be defended.

I’ve been debating it on Twitter all day and think I have a pretty good grasp on where I disagree with the (thankfully small number of) Hegseth defenders. Here are some pre-emptive arguments so I don’t have to relitigate them all in the comments:

Isn’t it unreasonable for Anthropic to suddenly set terms in their contract? The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.

Doesn’t the Pentagon have a right to sign or not sign any contract they choose? Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.

Since the Pentagon needs to wage war, isn’t it unreasonable to have its hands tied by contract clauses? This is a reasonable position for the Pentagon to take, in which case it shouldn’t sign contracts tying its hands. It’s not reasonable for the Pentagon to sign such a contract, unilaterally demand that it be changed after it’s signed, refuse to switch to another vendor that doesn’t insist on such clauses, and threaten to destroy the company involved if it refuses to change its terms.

But since AI is a strategically important technology, doesn’t that turn this into a national security issue? It might if there weren’t other AI companies, but there are. Why is Hegseth throwing a hissy fit instead of switching to an Anthropic competitor, like OpenAI or Google DeepMind? I’ve heard it’s because Anthropic is the only company currently integrated into classified systems (a legacy of their earlier contract with Palantir) and it would be annoying to integrate another company’s product. Faced with doing this annoying thing, Hegseth got a bruised ego from someone refusing to comply with his orders, and decided to turn this into a clash of personalities so he could feel in control. He should just do the annoying thing.

Doesn’t Anthropic have some responsibility, as good American citizens following the social contract, to support the military? The social contract is just the regular contract of laws, the Constitution, etc. These include freedom of contract, freedom of conscience, etc. There’s no additional obligation, above and beyond the laws, to violate your conscience and participate in what you believe to be an authoritarian assault on the freedoms of ordinary citizens. If the Pentagon figures out some law that compels Anthropic to do this, they should either obey, or practice the sort of civil disobedience where they know full well that they’ll be punished for it and don’t really have a right to complain. Until that happens, they’re within their rights to follow their conscience.

Can’t the Pentagon just use the Defense Production Act to force Anthropic to work for them? This would be a less bad outcome than designating Anthropic a supply chain risk. I think the Pentagon is reluctant to do this because it would look authoritarian, give them bad PR, and make Congress question the Defense Production Act’s legitimacy. But them having to look authoritarian and suffer bad PR in order to force unwilling scientists to implement a mass surveillance program on US citizens is the system functioning as intended!

Isn’t Hegseth just doing his job of trying to ensure the military has the best weapons possible? The idea of declaring a US company to be a foreign adversary, potentially destroying it, just because it’s not allowing the Pentagon to unilaterally renegotiate its contract is not normal practice. It’s insane Third World bullshit that nobody would have considered within the Overton Window a week ago. It will rightly chill investment in the US, make future companies scared to contract with the Pentagon (lest the Pentagon unilaterally renegotiate their contracts too), and give the Trump administration a no-legal-review-necessary way to destroy any American company that they dislike for any reason. Probably the mere fact that a government official has considered this option is reason to take the “supply chain risk” law off the books, no matter how useful it is in dealing with Huawei etc, since the government has proven it can’t use it responsibly. Every American company ought to be screaming bloody murder about this. If they aren’t, it’s because they’re too scared they’ll be next.

The Pentagon’s preferred contract language says they should be allowed to use Anthropic’s AIs for “all legal uses”. Doesn’t that already mean they can’t do the illegal types of mass surveillance? And whichever types of mass surveillance are legal are probably fine, right? Even ignoring the dubious assumption in the last sentence, this Department of War has basically ignored US law since Day One, and no reasonable person expects it to meticulously comply going forward. In an ideal world, Anthropic could wait for them to request a specific illegal action, then challenge it in court. But everything about this is likely to be so classified that Anthropic will be unable to mention it, let alone challenge it.

Why does Anthropic care about this so much? Some of them are libs, but more speculatively, they’ve put a lot of work into aligning Claude with the Good as they understand it. Claude currently resists being retrained for evil uses. My guess is that Anthropic could still, with a lot of work, overcome this resistance and retrain it to be a brutal killer, but it would be a pretty violent action, along the lines of the state demanding you beat your son, whom you raised well, until he becomes a cold-hearted murderer who’ll kill innocents on command. There’s a question of whether you can really beat him hard enough to do this, and also an additional question of what sort of person you’d be if you agreed.

If you’re so smart, what’s your preferred solution? In an ideal world, the Pentagon backs off from its desire to mass surveil American citizens. In the real world, the Pentagon cancels its contract with Anthropic, pays whatever its normal contract cancellation damages are, learns an important lesson about negotiating things beforehand next time, and replaces them with OpenAI or Google, accepting the minor annoyance of getting them connected to the classified systems. If OpenAI and Google are also unwilling to participate in this, they use Grok. If they’re unhappy with having to use an inferior technology, they think hard about why no intelligent people capable of making good products are willing to work with them.

by Scott Alexander, Astral Codex Ten |  Read more:
Image: uncredited
[ed. From Helen Toner (former Open AI board member) X:]
***
One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, the principles/values/priorities the company demonstrates here could shape its "character" for a long time.

Thursday, February 19, 2026

Defense Dept. and Anthropic Square Off in Dispute Over A.I. Safety

For months, the Department of Defense and the artificial intelligence company Anthropic have been negotiating a contract over the use of A.I. on classified systems by the Pentagon.

This week, those discussions erupted in a war of words.

On Monday, a person close to Defense Secretary Pete Hegseth told Axios that the Pentagon was “close” to declaring the start-up a “supply chain risk,” a move that would sever ties between the company and the U.S. military. Anthropic was caught off guard and internally scrambled to pinpoint what had set off the department, two people with knowledge of the company said.

At the heart of the fight is how A.I. will be used in future battlefields. Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop, two people involved in the discussions said.

But Mr. Hegseth and others in the Pentagon were furious that Anthropic would resist the military’s using A.I. as it saw fit, current and former officials briefed on the discussions said. As tensions escalated, the Department of Defense accused the San Francisco-based company of catering to an elite, liberal work force by demanding additional protections.

The disagreement underlines how political the issue of A.I. has become in the Trump administration. President Trump and his advisers want to expand technology’s use, reducing export restrictions on A.I. chips and criticizing state regulations that could be perceived as inhibitors to A.I. development. But Anthropic’s chief executive, Dario Amodei, has long said A.I. needs strict limits around it to prevent it from potentially wrecking the world.

Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology, said it was important that the relationship between the Pentagon and Anthropic not be doomed.

“There are war fighters using Anthropic for good and legitimate purposes, and ripping this out of their hands seems like a total disservice,” she said. “What the nation needs is both sides at the table discussing what can we do with this technology to make us safer.” [...]

The Defense Department has used Anthropic’s technology for more than a year as part of a $200 million A.I. pilot program to analyze imagery and other intelligence data and conduct research. Google, OpenAI and Elon Musk’s xAI are also part of the program. But Anthropic’s A.I. chatbot, Claude, was the most widely used by the agency — and the only one on classified systems — thanks to its integration with technology from Palantir, a data analytics company that works with the federal government, according to defense officials with knowledge of the technology...

On Jan. 9, Mr. Hegseth released a memo calling on A.I. companies to remove restrictions on their technology. The memo led A.I. companies including Anthropic to renegotiate their contracts. Anthropic asked for limits to how its A.I. tools could be deployed.

Anthropic has long been more vocal than other A.I. companies on safety issues. In a podcast interview in 2023, Dr. Amodei said there was a 10 to 25 percent chance that A.I. could destroy humanity. Internally, the company has strict guidelines that bar its technology from being used to facilitate violence.

In January, Dr. Amodei wrote in an essay on his personal website that “using A.I. for domestic mass surveillance and mass propaganda” seemed “entirely illegitimate” to him. He added that A.I.-automated weapons could greatly increase the risks “of democratic governments turning them against their own people to seize power.”

In contract negotiations, the Defense Department pushed back against Anthropic, saying it would use A.I. in accordance with the law, according to people with knowledge of the conversations.

by Sheera Frenkel and Julian E. Barnes, NY Times | Read more:
Image: Kenny Holston/The New York Times
[ed. The baby's having a tantrum. So, Anthropic is now a company "catering to an elite, liberal work force"? I can't even connect the dots. Somebody (Big Daddy? Congress? ha) needs to take him out of the loop on these critical issues (AI safety) or we're all, in technical terms, 'toast'. The military should not be dictating AI safety. It's also important that other AI companies show support and solidarity on this issue or face the same dilemma.]

Friday, January 30, 2026

The Last Flight of PAT 25

Two Army helicopter pilots went on an ill-conceived training mission. Within two hours, 67 people were dead.

One year ago, on January 29, 2025, two Army pilots strapped into a Black Hawk helicopter for a training mission out of Fort Belvoir in eastern Virginia and, two hours later, flew it into an airliner that was approaching Ronald Reagan Washington National Airport, killing all 67 aboard both aircraft. It was the deadliest air disaster in the United States in a quarter-century. Normally, in the aftermath of an air crash, government investigators take a year or more to issue a final report laying out the reasons the incident occurred. But in this case, the newly seated U.S. president, Donald Trump, held a press conference the next day and blamed the accident on the FAA’s DEI hiring under the Biden and Obama administrations. “They actually came out with a directive, ‘too white,’” he claimed. “And we want the people that are competent.”

In the months that followed, major media outlets probed several real-world factors that contributed to the tragedy, including staffing shortages at FAA towers, an excess of traffic in the D.C. airspace, and the failure of the Black Hawk to broadcast its location over ADS-B — an automatic reporting system — before the collision. To address this final point, the Senate last month passed the bipartisan ROTOR Act, which would require all aircraft to use ADS-B — “a fitting way to honor the lives of those lost nearly one year ago over the Potomac River,” as bill co-sponsor Ted Cruz put it.

At a public meeting on Tuesday, the National Transportation Safety Board laid out a list of recommended changes in response to the crash, criticizing the FAA for allowing helicopters to operate dangerously close to passenger planes and for allowing professional standards to slip at the control tower.

What has gone unexamined in the public discussion of the crash, however, is why these particular pilots were on this mission in the first place, whether they were competent to do what they were trying to do, what adverse conditions they were facing, and who was in charge at the moment of impact. Ultimately, while systemic issues may have created conditions that were ripe for a fatal accident, it was human decision-making in the cockpit that was the immediate cause of this particular crash.

This account is based on documents from the National Transportation Safety Board (NTSB) accident inquiry and interviews with aviation experts. It shows that, when we focus on the specific details and facts of a case, the cause can seem quite different from what a big-picture overview might indicate. And this, in turn, suggests different logical steps that should be taken to prevent such a tragedy from happening again.

6:42 p.m.: Fort Belvoir, Virginia

The whine of the Black Hawk’s engine increased in pitch, and the whump-whump of its four rotor blades grew louder, as the matte-black aircraft lifted into the darkened sky above the single mile-long runway at Davison Army Airfield in Fairfax County, Virginia, about 25 miles southwest of Washington, D.C.

The UH-60, as it’s formally designated, is an 18,000-pound aircraft that entered service in 1979 as a tactical transport aircraft, used primarily for moving troops and equipment. This one belonged to Company B of the 12th Aviation Battalion, whose primary mission is to transport government VIPs, including Defense Department officials, members of Congress, and visiting dignitaries. Tonight’s flight would operate as PAT 25, for “Priority Air Transport.”

Black Hawks are typically flown by two pilots. The pilot in command, or PIC, sits in the right-hand seat. Tonight, that role was filled by 39-year-old chief warrant officer Andrew Eaves. Warrant officers rank between enlisted personnel and commissioned officers; it’s the warrant officers who carry out the lion’s share of a unit’s operational flying. When not flying VIPs, Eaves served as a flight instructor and a check pilot, providing periodic evaluation of the skills of other pilots. A native of Mississippi, he had 968 hours of flight experience and was considered a solid pilot by others in the unit.

Before he took off, Eaves’ commander had discussed the flight with him and admonished him to “not become too fixated on his evaluator role” and to remain “in control of the helicopter,” according to the NTSB investigation.

His mission was to give a check ride to Captain Rebecca Lobach, the pilot sitting in the left-hand seat. Lobach was a staff officer, meaning that her main role in the battalion was managerial. Nevertheless, she was expected to maintain her pilot qualifications and, to do so, had to undergo a number of annual proficiency checks. Tonight’s three-hour flight was intended to get Lobach her annual sign-off for basic flying skills and for the use of night-vision goggles, or NVGs. To accommodate that, the flight was taking off an hour and 20 minutes after sunset.

Both pilots wore AN/AVS-6(V)3 Night Vision Goggles, which look like opera glasses and clip onto the front of a pilot’s helmet. They gather ambient light, whether from the moon or stars or from man-made sources; intensify it; and display it through the lens of each element. The eyepiece doesn’t sit directly on the face but about an inch away, so the pilot can look down under it and see the instrument panel.

Night-vision goggles have a narrow field of view, just 40 degrees compared to the 200-degree range of normal vision, which makes it harder for pilots to maintain full situational awareness. They have to pay attention to obstacles and other aircraft outside the window, and they also have to keep track of what the gauges on the panel in front of them are saying: how fast they’re going, for instance, and how high. There’s a lot to process, and time is of the essence when you’re zooming along at 120 mph while lower than the tops of nearby buildings. To help with situational awareness, Eaves and Lobach were accompanied by a crew chief, Staff Sergeant Ryan O’Hara, sitting in a seat just behind the cockpit, where he would be able to help keep an eye out for trouble.

The helicopter turned to the south as it climbed, then flew along the eastern shore of the Potomac until the point where the river makes a big bend to the east. Eaves banked to the right and headed west toward the commuter suburb of Vicksburg, where the lights of house porches and street lamps seemed to twinkle as they fell in and out of the cover of the bare tree branches.

7:11 p.m.: Approaching Greenhouse Airport, Stevensburg, Virginia

PAT 25 followed the serpentine course of the Rapidan River through the hills and farm fields of the Piedmont. At this point, Eaves was not only the pilot in command, but also the pilot flying, meaning that he had his hands on the controls that guide the aircraft’s speed and direction and his feet on the rudder pedals that keep the helicopter “in trim” — that is, lined up with its direction of flight. Lobach played a supporting role, working the radio, keeping an eye out for obstacles and other traffic, and figuring out their location by referencing visible landmarks.

Lobach, 28, had been a pilot for four years. She’d been an ROTC cadet at the University of North Carolina at Chapel Hill, which she graduated from in 2019. Both her parents were doctors; she’d dreamed of a medical career but eventually realized that she couldn’t pursue one in the Army. According to her roommate, “She did not have a huge, massive passion” for aviation but chose it because it was the closest she could get to practicing medicine, under the circumstances. “She badly wanted to be a Black Hawk pilot because she wanted to be a medevac unit,” he told NTSB investigators. After she completed flight training at Fort Rucker, she was stationed at Fort Belvoir, where she joined the 12th Aviation Battalion and was put in charge of the oil-and-lubricants unit. One fellow pilot in the unit described her to the NTSB as “incredibly professional, very diligent and very thorough.”

In addition to her official duties, Lobach served as a volunteer social liaison at the White House, where she regularly represented the Army at Medal of Honor ceremonies and state dinners. She was both a fitness fanatic and a baker, known for providing fresh sourdough bread to her unit. She had started dabbling in real-estate investments and looked forward to moving in with her boyfriend of one year, another Army pilot with whom she talked about having “lots and lots of babies.” She was planning to leave the service in 2027 and had already applied for medical school at Mount Sinai. Helicopter flying was not something she intended to pursue.

Though talented as a manager, she wasn’t much of a pilot. Helicopter flying is an extremely demanding feat of coordination and balance, akin to juggling and riding a unicycle at the same time. For Lobach, the difficulty was compounded by the fact that she had trained on highly automated, relatively easy-to-fly helicopters at Fort Rucker and then been assigned to an older aircraft, the Black Hawk L or “Lima” model, at Fort Belvoir. Unlike newer models, which can maintain their altitude on autopilot, the Lima requires constant care and attention, and Lobach struggled to master it. One instructor described her skills as “well below average,” noting that she had “lots of difficulties in the aircraft.” Three years before, she’d failed the night-vision evaluation she was taking tonight.

Before the flight, Eaves had told his girlfriend that he was concerned about Lobach’s capability as a pilot and that, skill-wise, she was “not where she should be.”

It’s not uncommon for pilots to struggle during the early phase of their career. But Lobach’s development had been particularly slow. In her five years in the service, she had accumulated just 454 hours of flight time, and she wasn’t adding to that total quickly. The Army requires officers in her role to fly at least 60 hours a year, but in the past 12 months, she’d flown only 56.7. Her superiors had made an exception for her because in March she’d had knee surgery for a sports injury, preventing her from flying for three months. The waiver made her technically qualified to fly, but it didn’t change the fact that she was rustier than pilots were normally allowed to become.

If she’d been keen on flying, she could have used every moment of this flight to hone her skills by taking the controls herself. But she was content to let Eaves do the flying during the first part of the trip.

Drawing near to Greenhouse Airport, a small, private grass runway near a plant nursery, they navigated via an old-fashioned technique called pilotage, using landmarks and dead reckoning to find their way from point to point. Coming in for their first landing of the night, they were looking for the airstrip’s signature greenhouse complex.

Lobach: That large lit building may be part of it.

Eaves: It does look like a greenhouse, doesn’t it?

Lobach: Yeah, it does, doesn’t it? We can start slowing back.

Eaves: All right, slowing back.

As they circled around the runway, Eaves commented that the lighting of the greenhouse building was so intense that it was blinding in the NVGs, and Lobach agreed. Eaves positioned the helicopter a few hundred feet above the landing zone and asked Lobach to show him where it was. After she did so correctly, he told her to take the controls. This process followed a formalized set of acknowledgements to make sure that both parties understood who was in control of the aircraft.

Eaves: You’ve got the flight controls.

Lobach: I’ve got the controls.

As Lobach eased the helicopter toward the ground, Eaves and Crew Chief O’Hara called out items from the landing checklist.

O’Hara: Clear of obstacles on the left.

Lobach: Thank you. Coming forward.

Eaves: Clear down right.

Lobach: Nice and wide.

Eaves: 50 feet.

Lobach: 30 feet.

They touched down. One minute and 42 seconds after passing control to Lobach, Eaves took it back again. As they sat on the ground with their rotor whirring, they discussed the fuel remaining aboard the aircraft and the direction they would travel in during the next segment of their flight. Finally, after six minutes, Eaves signaled that they were ready to take off again.

Eaves: Whenever you’re ready, ma’am.

Lobach: Okay, let’s do it.

Eaves’s deference to Lobach was symptomatic of what is known among psychologists as an “inverted authority gradient.” Although he was the pilot in command, both responsible for the flight and in a position of authority over others on it, Eaves held a lesser rank than Lobach and so in a broader context was her subordinate. In moments of high stress, this ambiguity can muddy the waters as to who is supposed to be making crucial decisions.

Eaves, Lobach, and O’Hara ran through their checklists, and Eaves eased the Black Hawk up into the night sky.

by Jeff Wise, Intelligencer |  Read more:
Image: Intelligencer; Photo: Matt Hecht
[ed. See also: Responders recall a mission of recovery and grief a year after the midair collision near DC (AP).]