Thursday, February 19, 2026

Defense Dept. and Anthropic Square Off in Dispute Over A.I. Safety

For months, the Department of Defense and the artificial intelligence company Anthropic have been negotiating a contract over the Pentagon’s use of A.I. on classified systems.

This week, those discussions erupted in a war of words.

On Monday, a person close to Defense Secretary Pete Hegseth told Axios that the Pentagon was “close” to declaring the start-up a “supply chain risk,” a move that would sever ties between the company and the U.S. military. Anthropic was caught off guard and scrambled internally to pinpoint what had set off the department, two people with knowledge of the company said.

At the heart of the fight is how A.I. will be used on future battlefields. Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons with no humans in the loop, two people involved in the discussions said.

But Mr. Hegseth and others in the Pentagon were furious that Anthropic would resist the military’s using A.I. as it saw fit, current and former officials briefed on the discussions said. As tensions escalated, the Department of Defense accused the San Francisco-based company of catering to an elite, liberal work force by demanding additional protections.

The disagreement underlines how political the issue of A.I. has become in the Trump administration. President Trump and his advisers want to expand the technology’s use, reducing export restrictions on A.I. chips and criticizing state regulations that could be seen as inhibiting A.I. development. But Anthropic’s chief executive, Dario Amodei, has long said A.I. needs strict limits to keep it from potentially wrecking the world.

Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology, said it was important that the relationship between the Pentagon and Anthropic not be doomed.

“There are war fighters using Anthropic for good and legitimate purposes, and ripping this out of their hands seems like a total disservice,” she said. “What the nation needs is both sides at the table discussing what can we do with this technology to make us safer.” [...]

The Defense Department has used Anthropic’s technology for more than a year as part of a $200 million A.I. pilot program to analyze imagery and other intelligence data and conduct research. Google, OpenAI and Elon Musk’s xAI are also part of the program. But Anthropic’s A.I. chatbot, Claude, was the most widely used by the agency — and the only one on classified systems — thanks to its integration with technology from Palantir, a data analytics company that works with the federal government, according to defense officials with knowledge of the technology...

On Jan. 9, Mr. Hegseth released a memo calling on A.I. companies to remove restrictions on their technology. The memo led A.I. companies, including Anthropic, to renegotiate their contracts. Anthropic asked for limits on how its A.I. tools could be deployed.

Anthropic has long been more vocal than other A.I. companies on safety issues. In a podcast interview in 2023, Dr. Amodei said there was a 10 to 25 percent chance that A.I. could destroy humanity. Internally, the company has strict guidelines that bar its technology from being used to facilitate violence.

In January, Dr. Amodei wrote in an essay on his personal website that “using A.I. for domestic mass surveillance and mass propaganda” seemed “entirely illegitimate” to him. He added that A.I.-automated weapons could greatly increase the risks “of democratic governments turning them against their own people to seize power.”

In contract negotiations, the Defense Department pushed back against Anthropic, saying it would use A.I. in accordance with the law, according to people with knowledge of the conversations.

by Sheera Frenkel and Julian E. Barnes, NY Times | Read more:
Image: Kenny Holston/The New York Times
[ed. The baby's having a tantrum. So, Anthropic is now a company "catering to an elite, liberal work force"? I can't even connect the dots. Somebody (Big Daddy? Congress? ha) needs to take him out of the loop on these critical issues (AI safety) or we're all, in technical terms, 'toast'. The military should not be dictating AI safety. It's also important that other AI companies show support and solidarity on this issue or face the same dilemma.]