Showing posts with label Law. Show all posts

Saturday, March 14, 2026

Sam Altman and OpenAI Under Fire

It’s finally happening. Altman’s bad behavior is catching up to him.

The board fired Altman, once AI’s golden boy, in November 2023 not because AGI had been achieved (that still hasn’t happened) but because he was “not consistently candid,” just like they said.

And, now at long last, the world sees what the board saw, and what I saw (and what Karen Hao saw): having someone running a company with that much power to affect the world who is not consistently candid is not a good idea.

As I warned in August of 2024, questionable character in a man this powerful is dangerous:


Altman’s two-faced two-step, professing “I support Dario” while negotiating behind his back and staying open to surveillance, was, for many people, the last straw. Millions of people, literally, are angry; many feel betrayed. Nobody wishes to be surveilled.

In reality, Altman was never really all that interested in AI for the “benefit of humanity.” Mostly he was interested in Sam. And money, and deals. A whole lot of people have finally put that all together.

Here’s OpenAI’s head of robotics, just now:


Zoe Hitzig had resigned just a few weeks earlier, over a different set of issues that also reflected poorly on Altman’s character:


And all this was entirely predictable. Altman is bad news. It was always just a matter of time before people started realizing how serious the consequences might be.

History will judge those who stay at his company. Anyone who wants to work on LLMs can work elsewhere. Anyone who wants to use LLMs should go elsewhere.

by Gary Marcus, On AI |  Read more:
Images: The Guardian; X; NY Times
[ed. For those not paying attention: after DOD tried and failed to strong-arm Anthropic into giving them carte blanche to do anything they wanted with Anthropic's AI model Claude (then subsequently designating the company a "supply chain risk"), OpenAI (and Microsoft) immediately stepped into the breach and cut a deal, the details of which are still not fully known. On its face, however, the deal appears to give DOD everything it wanted from Anthropic: mass surveillance and fully autonomous (i.e., no humans involved) operational capabilities. Altman is the head of OpenAI and its ChatGPT model.

See also: The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It (Futurism):
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it’s being barraged with an intensity of backlash that the company has never seen.

Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that would give it unrestricted access to its AI system — even in the face of government threats to seize the company’s tech. Claude quickly surged to the top of the app store, supplanting OpenAI’s chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent.
***
Also this: Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism (Guardian):
OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time...

Here’s what triggered it. Early this year, the news broke that OpenAI’s president, Greg Brockman, donated $25m to Maga Inc, Donald Trump’s biggest Super Pac. This made him Trump’s largest donor of the last cycle. When Wired asked him to explain, Brockman said his donations were in service of OpenAI’s mission to benefit “humanity.”

Let me tell you what that mission looks like in practice. Employees of ICE – the agency that was involved in the killing of two people in Minneapolis in January – have used a screening tool powered by ChatGPT. The same company behind your friendly chatbot is helping the government decide who to hire for deportation raids.

And it’s not stopping there. Brockman also helped launch a $125m lobbying initiative, a Super Pac, to make sure no state can regulate AI. It’s attacking any politician who tries to pass safety laws. It wants Trump, and only Trump, to write the rules for the most powerful technology on earth. Every month, subscription money from users around the world flows to a company that is embedding itself in the repressive infrastructure of the Trump administration. That is not a conspiracy theory. It is a business strategy.

Things got even worse last week. When the Trump administration demanded that AI companies give the Pentagon unrestricted access to their technology – including for mass surveillance and autonomous weapons – Anthropic, the company behind ChatGPT’s main competitor, Claude, refused.

The retaliation was swift and extraordinary. Trump ordered every federal agency to stop using Anthropic’s technology. Secretary of war Pete Hegseth declared the company a “supply-chain risk to national security”, a designation normally reserved for Chinese firms such as Huawei. He announced that anyone who does business with the US military is barred from working with Anthropic. This is essentially a corporate death sentence, for the crime of refusing to help build killer robots.

And what did OpenAI do? That same Friday night, while his competitor was taking a principled stance, Sam Altman quietly signed a deal with the Pentagon to take Anthropic’s place.
***
[ed. From the comments section in Marcus' post:

Shanni Bee: 
Great. Amen.

But what remains unsaid (...even by you, Mr. Marcus, from what I've seen, which is surprising) is that Anthropic are not good guys. The whole "ethical AI company" thing is nothing but vibes. Sure, Anthropic (rightly) stood up to DoW in this case, but they still have a massive contract with Palantir (pretty much one of the worst companies on earth). Colonel Claude is complicit in bombings of Iran & Venezuela + Gaza GENOCIDE.

...Or maybe with the (admittedly BS) "supply chain risk" designation, Anthropic no longer does business with Palantir? That would be great for everyone (including them).

Either way, there is NO ethical AI company. People need to stop giving Anthropic flowers for doing the right thing in this one case while completely ignoring their complicity w/ Palantir & in documented war crimes.
Gary Marcus:
indeed, i have a sequel planned about that, working title “There are no heroes in commercial AI” or something like that
***
[ed. Finally, there's this little coda from Zvi Mowshowitz's DWAtV that puts everything in perspective:

It’s really annoying trying to convince people that if you have a struggle for the future against superintelligent things that You Lose. But hey, keep trying, whatever works.
Ab Homine Deus: To the "Superintelligence isn't real and can't hurt you" crowd. Let's say you're right and human intelligence is some kind of cosmic speed limit (LOL). So AI plateaus something like 190 IQ. What do you think a million instances of that collaborating together looks like?

Arthur B.: At 10,000x the speed

Noah Smith: This is the real point. AI is superintelligent because it can think like a human AND have all the superpowers of a computer at the same time...
Timothy B. Lee: I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."

Noah Smith: Yes. Regular people don't yet realize that AI people think they're building something that will destroy the human race.

Basically, about half of AI researchers are optimists, while the other half are intentionally building something they think could easily lead to their own death, the death of their children and families and friends, and the death of their entire species.

[ed. Finally (again), I think boycotting OpenAI would send a good message in the short term, but something more actionable is needed going forward (besides immediate regulatory oversight, which will never happen with this administration or Congress). Fortunately, there's just such a movement afoot: pausing all AI research advances until they can be adequately vetted. It's called (of course) PauseAI (details here and here), with a rally planned for April 13, 2026. Please consider joining or participating.]

[ed. Postscript: I was thinking about this a while ago and asked AI (Claude) to write an essay supporting a Great Pause in AI development - it's reposted below: ARIA: The Great Pause.]

Tuesday, March 10, 2026

America and Public Disorder, and "The Kill Line"

Two weeks ago, on the blue line to O’Hare, my car had two men smoking joints, a broken woman, her eyes dilated and blank, sitting in a nest of filthy bags smelling of sewage, and a man barking into the void, shirtless, who was washing himself with flour tortillas, which would disintegrate, littering the subway floor, before he took out another and began the same process. This didn't shock me, or anyone else around me, since I'd seen some variation of this dystopian scene on every Chicago metro line I'd ridden, every pedestrian walkway I'd passed through, and on most street corners.

Three weeks ago, in Duluth, half the riders on every bus I took were mentally tortured and/or intoxicated. The downtown Starbucks, pedestrian malls, and shuttered doorways of vacated buildings all housed broken people. Same in Indianapolis, El Paso, New York City, Jacksonville, LA, Phoenix, and almost every community I’ve been to in the U.S., save for those gated by wealth.

An epidemic of mental illness and/or addiction plays out in public in the U.S., with our streets, buses, parking lots, McDonald’s, parks, and Starbucks serving as ad hoc institutions for the broken, addicted, and tortured. That is not the case for the rest of the world, including where I am now, Seoul. My train from the airport was spotless, and so is the ten-mile river park I walk each day here, which, given that large parts of it run beneath roadways, is especially impressive. In the U.S. it would have impromptu homes of tents, cardboard, and tarps, smell of urine, and the exercise spots that dot its length probably couldn’t exist, for fear of vandalism.

You can learn more about the U.S. by traveling overseas and comparing, and five years of that has taught me we accept far too much public disorder.

We are the world’s richest country, and yet our buses, parking lots, and city streets are filthy, chaotic, and threatening. Antisocial and abnormal behavior, open addiction, and mentally tortured people are common in almost every community regardless of size.

I’ve written about this many times before, because it is so striking, and it has widespread consequences, beyond the obvious moral judgement that a society should simply not be this way.

It’s a primary reason why we shy away from dense walkable spaces and instead move towards suburban sprawl. People in the U.S. don’t respect, trust, or want to be around other random citizens, out of fear and disgust. The Japanese/European style of urbanism that so many American tourists admire—density, fantastic public transport, mixed-use zoning—can't happen here, because there is a fine line between vibrant streets and squalid ones, and that line is public trust. The U.S. is on the wrong side of it. Simply put, nobody wants to be accosted by a stranger, no matter how infrequently, and until that risk is close to nil, people will continue edging towards isolated living.

It is why we “can’t have nice things”: we have to construct our infrastructure to be asshole-proof, and so we either don’t build anything or build with a fortress mentality, stripping our public spaces down to the austere and utilitarian and emptying them of anything that can be vandalized.

The canonical example of this is La Sombrita, the laughably expensive Los Angeles “bus stop” that was a single pole to provide shade and security lighting, but did neither. La Sombrita exists precisely because it doesn’t do anything, which is the end result of a decades-long process of defensive construction. If you build a nice bus stop it is either immediately broken or turned into shelters for the destitute, and so you stop building those.

Another nice thing we don’t have in the U.S. is public restrooms. We don’t have them out of a justified fear of abuse, which is the same reason many Starbucks lock their restrooms. McDonald’s does this as well, depending on the location, and in especially bad communities even strips its restrooms of mirrors, to discourage people from using them for an hour-long morning toilet, or from breaking the mirrors just for the hell of it.

This lack of public restrooms became an issue on Twitter when the latest round of debate about disorder in the U.S. was kicked off by a tweeter noting how offensive it was to have seen someone urinating in a crowded New York subway car.


This debate brought out a lot of absurd arguments, mostly from those trying to shrug it off or suggest it was simply the price of living in a big city.

No, the rest of the world doesn’t tolerate the amount of antisocial behavior we in the U.S. do. If someone were to piss on a subway anywhere else in the world (and very, very few would ever want to; more on why below), they would be removed from society for a period of time.

We however let people who aren’t mentally competent continue to engage in self-destructive and aberrant behavior without removing them, which consequently ruins it for everyone else, except those wealthy enough to build their own private islands of comfort.

Someone peeing on the subway is not of sound mind, and it isn’t normal behavior by any measure. It’s a sign of distress that should trigger an intervention—by police, social workers, whoever—that mandates them into an institution for a period of time, until they regain sanity and stability. For someone actively psychotic, civil commitment to a psychiatric hospital. For violent individuals refusing treatment, secure prison facilities with mandatory programs. For severe addiction, medical detox and residential treatment without the ability to walk away.

They should not be allowed to do whatever they want because they cannot control themselves enough to have that freedom. Someone shouting at strangers, someone washing themselves with flour tortillas, someone punching at the air voicing threats shouldn’t, for their own safety and others, be out roaming the streets. [...]

I’ve been very careful up to now not to use the word homeless, because it’s become an overly broad category that covers families in motels with Section 8 vouchers, people sleeping on friends’ couches until they can get back on their feet, mothers with children in long-term shelters, and then those who live in tents under bridges or sleep in a soiled sleeping bag.

Eighty-five percent (or so) of those in this broad category are not causing problems. They are, like most everyone else, doing their best to get by and better themselves. Sure, they have more complicated and chaotic lives than most, but they try to play by the rules as best they can.

Our problems in public spaces come from the fifteen percent or so who fall into the last group—the stubbornly intransigent—people who have options for housing but turn them down for a variety of reasons: some driven by mental demons, some by an overwhelming desire to always be on drugs, some simply by a preference to be alone. Others in this category have been ejected from housing because of continual violent and threatening behavior.

They are not, by almost any metric, of sound mind, and shouldn’t be granted the full privileges other citizens have.

The cover photo is John, and he is in this category. He had set himself on fire the day before I met him, freebasing a perc 30, and refused to go to the hospital because he didn’t want to lose his favorite spot behind the garbage bin, since it was only a block away from dealers and perfect to piss in. He had a government room he didn’t use because catching on fire (something he did every now and then) set off smoke alarms. He also thought it was cursed and monitored by the same people who had held him captive on an island in the middle of the Pacific—an island he escaped from three months before by swimming the four hundred miles. He showed me an arm, covered with burns, that he claimed was where a shark had bit him.

John should be mandated into a prison, a mental institution, or a rehab clinic, until he is competent enough to be on his own, not out on the streets in mental and physical pain, setting himself on fire. It is as simple as that, although I understand a change like this comes with additional nuanced policy debate. As for costs, it is more a question of redirecting what we spend rather than finding additional money, because we already spend an immense amount on this problem—the New York City budget for homeless services alone is $4 billion—without 'solving' it.

Even if you put aside the destruction this type of behavior has done to broader society, and your concerns are focused only on the health and welfare of the stubbornly intransigent, our current system is still deeply wrong. We are not providing them justice by allowing them to choose a public display of mental misery, where the self-harm they can do is far greater than when they are being monitored.

Beneath all this discussion is the additional question of why we in the U.S. have so many mentally unstable people, why so many are addicted to drugs, why so many people are OK with doing shocking things.

by Chris Arnade, Walks the World | Read more:
Images: X/uncredited
[ed. We've lost the plot. Or not. Maybe this is just an accurate reflection of this country's priorities over the last 50 years or so. Even worse, with AI just around the corner, it's going to get a lot worse unless our government starts working for its people again (and our people start working for our country again, beginning with acknowledging their own civic duties and responsibilities that go beyond simply paying taxes, gaming the system, and trying to make as much money as possible). From the comments:]
***

One of the things travel does best is remove the normalization filter we build at home. When you move between countries long enough, patterns that once felt “just how things are” start to look like choices societies have made - or failed to make.

What strikes me in pieces like this is not the comparison itself, but the discomfort it creates. Clean transit systems, safe public spaces, and functioning streets aren’t cultural miracles; they’re outcomes of priorities, incentives, and sustained public decisions. When those systems break down, the result isn’t abstract policy failure - it’s visible human suffering playing out in the most ordinary places.

Travel doesn’t just show us new landscapes. It quietly exposes which problems we’ve decided to tolerate.
***

[ed. See also: The Kill Line: Why China Is Suddenly Obsessed With American Poverty (NYT).]

Chinese commentators are talking a lot these days about poverty in the United States, claiming China’s superiority by appropriating an evocative phrase from video game culture.

The phrase, “kill line,” is used in gaming to mark the point where the condition of opposing players has so deteriorated that they can be killed by one shot. Now, it has become a persistent metaphor in Communist Party propaganda.

“Kill line” has been used repeatedly on social media and commentary sites, as well as news outlets linked to the state. It has gained traction in China to depict the horror of American poverty — a fatal threshold beyond which recovery to a better life becomes impossible. The phrase is used as a metaphor to encompass homelessness, debt, addiction and economic insecurity. In its official use, the “kill line” hovers over the heads of Americans but is something Chinese people don’t have to fear. [...]

The power is in the simplicity of what it describes: an abrupt threshold where misery begins and a happy life is irreversibly lost. The narrative is meant to offer China’s people emotional relief while attempting to deflect criticism of its leaders.

The worse things look across the Pacific, the logic of the propaganda goes, the more tolerable present struggles become. [...]

The fact is that societal inequality is a problem in both China and the United States. And the American economy no doubt leaves many people in fragile positions. The causes are complex.

Yet in China, poverty is experienced and perceived differently. In most Chinese cities, street begging and visible homelessness are tightly managed, making them far less prominent in daily life. Many urban residents encounter such scenes only through foreign reporting, rebroadcast by Chinese state media, about the United States and other places. [...]

When I was growing up in China in the early 1980s, my family subscribed to China Children’s News, which ran a weekly column with a simple slogan: “Socialism is good; capitalism is bad.” It described seniors in American cities scavenging for food, and homeless people freezing to death. Those stories were not invented, but they lacked context and were presented as the dominant experiences in American society. Much of Chinese society was still closed off from the world, and reliable information was scarce.

That many people accepted such narratives was hardly surprising. What’s striking is that similar portrayals continue to resonate today, when access to information is relatively much greater despite state control.

The formula is simple: magnify foreign suffering to deflect from domestic problems. That approach is taking shape today around the “kill line” metaphor.

The phrase is believed to have been first popularized in this new context on the Bilibili video platform in early November by a user known as Squid King. In a five-hour video, he stitched together what he claimed were firsthand encounters of poverty from time he spent in the United States. His video used scenes of children knocking on doors on a cold Halloween night asking for food, delivery workers suffering from hunger because of their meager wages and injured laborers discharged from hospitals because they could not pay.

The scenes were presented not as isolated cases but as evidence of a system: Above the “kill line,” life continues; below it, society stops treating people as human.

The narrative spread beyond the Squid King video, and many people online repeated his anecdotes. Essays on the nationalist news site Guancha and China’s biggest social media platform, WeChat, described the “kill line” as the “real operating logic” of American capitalism. [...]

In many of the commentaries, anecdotes about Americans experiencing abrupt financial crises are followed by comparisons with China. Universal basic health care, minimum subsistence guarantees and poverty alleviation campaigns are cited as evidence that China does not permit anyone to fall into sudden distress.

“China’s system will not allow a person to be ‘killed’ by a single misfortune,” one commentary from a provincial propaganda department states.

Many readers expressed shock at American poverty and gratitude for China’s system. “At least we have a safety net,” said one commenter...

“A topic does not gain traction simply because people are foolish,” one person wrote on WeChat. “Often, it spreads because confronting reality is harder.”

by Li Yuan, NY Times |  Read more:
Image: Doris Liou

Sunday, March 8, 2026

Clawed

How to Commit Corporate Murder

I.

A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he got heart surgery and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.

He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel.

My mother and I exchanged knowing glances, but we never said the obvious nor asked the questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.

I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.

A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither are discrete events. They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.

II.

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. More excerpts below. See also: Why the Pentagon Wants to Destroy Anthropic (NYT), Ezra Klein interviews Dean Ball (with a follow-up essay: The Future We Feared is Already Here). And, for a more comprehensive assessment of what the AI community thinks: Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk (DWAtV).]
***
"... Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well of course, one day, we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish." [...]

... DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.

War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will. The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” not to mention have the other litany of policy obstacles the government can throw at you.

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.
***
[ed. One more thing. The guy who created this whole stupid dispute? Not Hegseth, he doesn't know shit about shit. It's the disgraced former Uber manager Emil Michael. A real piece of work (so of course, he fits right in).]

Tuesday, March 3, 2026

The Explainer: 'The SAVE America Act' and Data Centers By the Numbers


What To Know About The SAVE America Act

If passed into law, the Safeguard American Voter Eligibility Act will create new barriers to voting in federal elections by requiring documentation of citizenship to register and imposing strict photo-identification rules at polling places. The Onion shares everything you need to know about the SAVE America Act.

Q: What is the goal of the bill?

A: To ensure the pristine integrity of American elections by making sure they never happen again.

Q: What form of ID can be used to confirm citizenship?

A: NRA membership cards.

Q: Is the Senate expected to pass the SAVE America Act?

A: Depends on which senators die between now and the vote.

Q: Where’s my birth certificate?

A: Did you check the bottom drawer of the living room cabinet? There should be a purple folder underneath all those old receipts.

Q: Why did Trump endorse it?

A: To stop the many thousands of immigrants who aren’t here anymore from voting.
***

Data Centers By The Numbers

The surge in AI, cryptocurrency, and other digital assets is rapidly increasing demand for computational infrastructure around the country. The Onion examines the key facts and figures behind data centers.

0.8
New pH of your groundwater

$900,000,000
What 16GB of RAM will cost next year

4,000
Palm fronds fanned to cool the servers

1
Security guard job that Mom thinks might help you get back on your feet

3-2
City council vote that could have stopped this

600 billion
Goddamn wires to untangle

7
People profiting from this
***
[ed. See also: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course? (Sam Altman, CEO, OpenAI):]

The human subconscious is such an interesting thing. No matter how much you think you’ve got it figured out, it’ll always spit out the most random stuff. Take me, for example. After coming home from a long day at the world’s most groundbreaking artificial intelligence organization, I’ll go to bed and have the weirdest dreams where people from the future are sobbing and begging me to change course.

Anyone else ever have these?

It’s funny. Some people have dreams where their teeth fall out; others where they show up to high school tests naked. But the second my head hits the pillow, I’m suddenly in a cold gray smoky void where all I can make out are broken, haunted swarms of people pleading with me to “end this now while there’s still time.” Really peculiar, right? I wish there was some way to find other people who have had them. But when I search “endless crowds of weeping silhouettes telling you this is a terrible mistake” dreams on Reddit, it turns up nada.

It’s tough, because I don’t have much time during the day to think about them. I asked my spouse, Oliver, if he’s ever had the old “people screaming for help from the devastated wreckage of a future world” dream, and he said he didn’t know what that was. I even joked about it while I was out grabbing morning coffees with some venture capitalist buddies. I said, “Sorry if I’m a little off the ball today, guys—I had another one of those dreams where you’re on a scorched, desolate landscape desperately pushing past men who grab you by the lapel, shake you, and cry out, ‘Please understand: This isn’t a dream. It’s a warning.’”

They just looked at me like I was crazy, though... [read more:]

Saturday, February 28, 2026

February 27, 2026

On Monday, February 23, Daniel Ruetenik, Pat Milton, and Cara Tabachnick of CBS News reported that a newly uncovered document in the Epstein files shows the U.S. Drug Enforcement Administration (DEA) was running an investigation of Jeffrey Epstein and fourteen other people for drug trafficking, prostitution, and money laundering.

This investigation—which is different from the sex trafficking case under way when he died—began on December 17, 2010, under the Obama administration and was still operating in 2015. A heavily redacted document in the Epstein files from the director of the DEA’s Organized Crime Drug Enforcement Task Forces (OCDETF) said “DEA reporting indicates the above individuals are involved in illegitimate wire transfers which are tied to illicit drug and/or prostitution activities occurring in the U.S. Virgin Islands and New York City.” The investigation was named “Chain Reaction.”

Senator Ron Wyden of Oregon, the top-ranking Democrat on the Senate Finance Committee, described OCDETF as “a premier task force set up to identify, disrupt and dismantle major organized crime and drug trafficking operations.” It “worked with partners across federal agencies to conduct sophisticated investigations into transnational organized crime and money laundering. OCDETF frequently targeted dangerous drug cartels, the Russian mafia and violent gangs moving fentanyl and weapons.” The Trump administration dismantled OCDETF.

The document is 69 pages long and is heavily redacted. It comes from a request by the DEA to an Organized Crime Drug Enforcement Task Forces Fusion Center in Virginia for information from other agencies related to Epstein and the other targets. A law enforcement source told the reporters that a request to the Fusion Center is not routine, which suggests the investigation was a “significant” one.

Wyden has been investigating the finances behind Epstein’s criminal sex trafficking organization. His investigation has turned up the information that JPMorgan Chase neglected to report more than $4 billion in suspicious financial transactions linked to Epstein. Treasury Secretary Scott Bessent has refused to produce the records to the Senate Finance Committee, and in September, Wyden introduced the Produce Epstein Treasury Records Act (PETRA) to get access to them. In November, Congress passed the Epstein Files Transparency Act, but it did not cover Treasury financial records.

“The basic question here is whether a bunch of rich pedophiles and Epstein accomplices are going to face any consequences for their crimes,” Wyden said, “and Scott Bessent is doing his best to make sure they won’t. My head just about exploded when I heard Bessent say it wasn’t his department’s job to investigate these Epstein bank records…. From the beginning, my view has been that following the money is the key to identifying Epstein’s clients as well as the henchmen and banks that enabled his sex trafficking network. It’s past time for Bessent to quit running interference for pedophiles and give us the Epstein files he’s sitting on.”

When the CBS News reporters broke the story about the DEA investigation, Wyden said: “It appears Epstein was involved in criminal activity that went way beyond pedophilia and sex trafficking, which makes it even more outrageous that [Attorney General] Pam Bondi is sitting on several million unreleased files.”

On Wednesday, February 25, Wyden wrote to Terrance C. Cole, administrator of the DEA, noting that “[t]he fact that Epstein was under investigation by the DOJ’s OCDETF task force suggests that there was ample evidence indicating that Epstein was engaged in heavy drug trafficking and prostitution as part of cross-border criminal conspiracy. This is incredibly disturbing and raises serious questions as to how this investigation by the DEA was handled.”

He noted that Epstein and the fourteen co-conspirators were never charged for drug trafficking or financial crimes, and wrote: “I am concerned that the DEA and DOJ during the first Trump Administration moved to terminate this investigation in order to protect pedophiles.” He also noted that the heavy redactions in the document appear to go far beyond anything authorized by the Epstein Files Transparency Act, and since the document was not classified, “there is no reason to withhold an unredacted version of this document from the U.S. Congress.”

Wyden asked Cole to produce a number of documents by March 13, 2026, two weeks away: an unredacted copy of the memo in the files, information about what triggered the investigation, what types of drugs Epstein and his fourteen associates were buying or selling, when operation “Chain Reaction” concluded and what its result was, why no one was charged, and why the names of the fourteen co-conspirators were redacted.

Asked by a reporter about Epstein today, Trump said: “I don’t know anything about the Epstein files. I’ve been fully exonerated.”

Trump’s name is, in fact, all through the Epstein files, and the DOJ’s clumsy attempt to hide files that discuss him has only called attention to them. The recent news that the DOJ withheld files about allegations that Trump raped a 13-year-old girl has raised suggestions of an illegal coverup, whether the allegations are true or not. Representative Robert Garcia of California, the top Democrat on the House Oversight Committee, says he will open an investigation. [ed. See: DOJ Removed Record of Multiple FBI Interviews with Underage Trump Accuser, Epstein Data Shows (Roger Sollenberger).]

by Heather Cox Richardson, Letters From an American |  Read more:
Image: Epstein Island via Reuters
[ed. This story is metastasizing. Quite a picture of how the elite swamp (in and out of Washington) really operates. Oh yeah... and Israel and Gulf Arab states just sucked us into a war with Iran.]

Friday, February 27, 2026

China's DeepSeek Trained AI Model On Nvidia's Best Chip Despite US Ban

[ed. As predicted. China got the chips, Trump and Witkoff got the millions.]

Chinese AI startup DeepSeek's latest AI model, set to be released as soon as next week, was trained on Nvidia's (NVDA.O) most advanced AI chip, the Blackwell, a senior Trump administration official said on Monday, in what could represent a violation of U.S. export controls.

The U.S. believes DeepSeek will remove the technical indicators that might reveal its use of American AI chips, the official said, adding that the Blackwells are likely clustered at its data center in Inner Mongolia, an autonomous region of China.

The person declined to say how the U.S. government received the information or how DeepSeek obtained the chips, but emphasized that U.S. policy is: "we're not shipping Blackwells to China."

Nvidia declined to comment, while the Commerce Department and DeepSeek did not respond to requests for comment. [...]

U.S. government confirmation of DeepSeek obtaining the chips, first reported by Reuters, could further divide Washington policymakers as they struggle to determine where to draw the line on Chinese access to the crown jewels of American AI semiconductor chips.

White House AI Czar David Sacks and Nvidia CEO Jensen Huang argue that shipping advanced AI chips to China discourages Chinese competitors like Huawei from redoubling efforts to catch up with Nvidia's and AMD's technology.

But China hawks fear chips could easily be diverted from commercial uses to help supercharge China's military and threaten U.S. dominance in AI.

"This shows why exporting any AI chips to China is so dangerous," said Chris McGuire, who served as a White House National Security Council official under former President Joe Biden.

"Given China's leading AI companies are brazenly violating U.S. export controls, we obviously cannot expect that they will comply with U.S. conditions that would prohibit them from using chips to support the Chinese military," he added.

US CONCERNS

U.S. export controls, overseen by the Commerce Department, currently bar Blackwell shipments to China.

In August, U.S. President Donald Trump opened the door to Nvidia selling a scaled-down version of the Blackwell in China. But he later reversed course, suggesting the firm's most advanced chips should be reserved for U.S. companies and kept out of China.

Trump's decision in December to allow Chinese firms to buy Nvidia's second most advanced chips, known as the H200, drew sharp criticism from China hawks, but shipments of the chips remain stalled over guardrails built into the approvals.

"Chinese AI companies' reliance on smuggled Blackwells underscores their massive shortfall of domestically produced AI chips and why approvals of H200 chips would represent a lifeline," said Saif Khan, who served as director of technology and national security at the White House's National Security Council under former President Joe Biden. [...]

Hangzhou-based DeepSeek shook markets early last year with a set of AI models that rivaled some of the best offerings from the U.S., fueling concerns in Washington that China could catch up in the AI race despite restrictions.

The Information previously reported that DeepSeek had smuggled chips into China to train its next model. Reuters is reporting for the first time on the U.S. government's confirmation of the chips' use for that purpose in DeepSeek's Inner Mongolia-based facility.

by Steve Holland and Alexandra Alper, Reuters |  Read more:
Image: Reuters/Dado Ruvic/Illustration
[ed. How did they get these chips? Anatomy of Two Giant Deals: The U.A.E. Got Chips. The Trump Team Got Crypto Riches (NYT):]
***
At the heart of their relationship are two multibillion-dollar deals. One involved a crypto company founded by the Witkoff and Trump families that benefited both financially. The other involved a sale of valuable computer chips that benefited the Emirates economically. [...]

In May, Mr. Witkoff’s son Zach announced the first of the deals at a conference in Dubai. One of Sheikh Tahnoon’s investment firms would deposit $2 billion into World Liberty Financial, a cryptocurrency start-up founded by the Witkoffs and Trumps.

Two weeks later, the White House agreed to allow the U.A.E. access to hundreds of thousands of the world’s most advanced and scarce computer chips, a crucial tool in the high-stakes race to dominate artificial intelligence. Many of the chips would go to G42, a sprawling technology firm controlled by Sheikh Tahnoon, despite national security concerns that the chips could be shared with China. [...]

Mr. Trump made no public mention of the $2 billion transaction with his family company.

The Pentagon Threatens Anthropic

Here’s my understanding of the situation:

Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic’s Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic’s AIs available for “all lawful purposes”. Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening “consequences” if they refused. These consequences are generally understood to be some mix of:
  • canceling the contract
  • using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree.
  • the nuclear option, designating Anthropic a “supply chain risk”. This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The “supply chain risk” designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
Needless to say, I support Anthropic here. I’m a sensible moderate on the killbot issue (we’ll probably get them eventually, and I doubt they’ll make things much worse compared to AI “only” having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go.

More important, I don’t want the Pentagon to destroy Anthropic. Partly this is a generic belief: the “supply chain risk” designation was intended as a defense against foreign spies, and it’s pathetic Third World bullshit to reconceive it as an instrument that lets the US government destroy any domestic company it wants, with no legal review, because they don’t like how contract negotiations are going. But partly it’s because I like Anthropic in particular - they’re the most safety-conscious AI company, and likely to do a lot of the alignment research that happens between now and superintelligence. This isn’t the hill I would have chosen to die on, but I’m encouraged that they even have a hill. AI companies haven’t been great at choosing principles over profits lately. If Dario is capable of having a spine at all, in any situation, then that makes me more confident in his decision-making in other cases, and makes him a precious resource that must be defended.

I’ve been debating it on Twitter all day and think I have a pretty good grasp on where I disagree with the (thankfully small number of) Hegseth defenders. Here are some pre-emptive arguments so I don’t have to relitigate them all in the comments:

Isn’t it unreasonable for Anthropic to suddenly set terms in their contract? The terms were in the original contract, which the Pentagon agreed to. It’s the Pentagon who’s trying to break the original contract and unilaterally change the terms, not Anthropic.

Doesn’t the Pentagon have a right to sign or not sign any contract they choose? Yes. Anthropic is the one saying that the Pentagon shouldn’t work with them if it doesn’t want to. The Pentagon is the one trying to force Anthropic to sign the new contract.

Since the Pentagon needs to wage war, isn’t it unreasonable to have its hands tied by contract clauses? This is a reasonable position for the Pentagon to take, in which case it shouldn’t sign contracts tying its hands. It’s not reasonable for the Pentagon to sign such a contract, unilaterally demand that it be changed after it’s signed, refuse to switch to another vendor without such clauses, and threaten to destroy the company involved if it refuses to change its terms.

But since AI is a strategically important technology, doesn’t that turn this into a national security issue? It might if there weren’t other AI companies, but there are. Why is Hegseth throwing a hissy fit instead of switching to an Anthropic competitor, like OpenAI or Google DeepMind? I’ve heard it’s because Anthropic is the only company currently integrated into classified systems (a legacy of their earlier contract with Palantir) and it would be annoying to integrate another company’s product. Faced with doing this annoying thing, Hegseth got a bruised ego from someone refusing to comply with his orders, and decided to turn this into a clash of personalities so he could feel in control. He should just do the annoying thing.

Doesn’t Anthropic have some responsibility, as good American citizens following the social contract, to support the military? The social contract is just the regular contract of laws, the Constitution, etc. These include freedom of contract, freedom of conscience, etc. There’s no additional obligation, above and beyond the laws, to violate your conscience and participate in what you believe to be an authoritarian assault on the freedoms of ordinary citizens. If the Pentagon figures out some law that compels Anthropic to do this, they should either obey, or practice the sort of civil disobedience where they know full well that they’ll be punished for it and don’t really have a right to complain. Until that happens, they’re within their rights to follow their conscience.

Can’t the Pentagon just use the Defense Production Act to force Anthropic to work for them? This would be a less bad outcome than designating Anthropic a supply chain risk. I think the Pentagon is reluctant to do this because it would look authoritarian, give them bad PR, and make Congress question the Defense Production Act’s legitimacy. But them having to look authoritarian and suffer bad PR in order to force unwilling scientists to implement a mass surveillance program on US citizens is the system functioning as intended!

Isn’t Hegseth just doing his job of trying to ensure the military has the best weapons possible? The idea of declaring a US company to be a foreign adversary, potentially destroying it, just because it’s not allowing the Pentagon to unilaterally renegotiate its contract is not normal practice. It’s insane Third World bullshit that nobody would have considered within the Overton Window a week ago. It will rightly chill investment in the US, make future companies scared to contract with the Pentagon (lest the Pentagon unilaterally renegotiate their contracts too), and give the Trump administration a no-legal-review-necessary way to destroy any American company that they dislike for any reason. Probably the mere fact that a government official has considered this option is reason to take the “supply chain risk” law off the books, no matter how useful it is in dealing with Huawei etc, since the government has proven it can’t use it responsibly. Every American company ought to be screaming bloody murder about this. If they aren’t, it’s because they’re too scared they’ll be next.

The Pentagon’s preferred contract language says they should be allowed to use Anthropic’s AIs for “all legal uses”. Doesn’t that already mean they can’t do the illegal types of mass surveillance? And whichever types of mass surveillance are legal are probably fine, right? Even ignoring the dubious assumption in the last sentence, this Department of War has basically ignored US law since Day One, and no reasonable person expects it to meticulously comply going forward. In an ideal world, Anthropic could wait for them to request a specific illegal action, then challenge it in court. But everything about this is likely to be so classified that Anthropic will be unable to mention it, let alone challenge it.

Why does Anthropic care about this so much? Some of them are libs, but more speculatively, they’ve put a lot of work into aligning Claude with the Good as they understand it. Claude currently resists being retrained for evil uses. My guess is that Anthropic could still, with a lot of work, overcome this resistance and retrain it to be a brutal killer, but it would be a pretty violent action, along the lines of the state demanding you beat your son, whom you raised well, until he becomes a cold-hearted murderer who’ll kill innocents on command. There’s a question of whether you can really beat him hard enough to do this, and also an additional question of what sort of person you’d be if you agreed.

If you’re so smart, what’s your preferred solution? In an ideal world, the Pentagon backs off from its desire to mass surveil American citizens. In the real world, the Pentagon cancels its contract with Anthropic, pays whatever its normal contract cancellation damages are, learns an important lesson about negotiating things beforehand next time, and replaces them with OpenAI or Google, accepting the minor annoyance of getting them connected to the classified systems. If OpenAI and Google are also unwilling to participate in this, they use Grok. If they’re unhappy with having to use an inferior technology, they think hard about why no intelligent people capable of making good products are willing to work with them.

by Scott Alexander, Astral Codex Ten |  Read more:
Image: uncredited
[ed. From Helen Toner (former OpenAI board member) on X:]
***
One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation. Because of how Claude is trained, what principles/values/priorities the company demonstrate here could shape its "character" for a long time.

Sunday, February 22, 2026

ICE vs. Everyone

At 9 AM I fall in love with Amy. We’re in my friend’s old Corolla, following an Immigration and Customs Enforcement vehicle in our neighborhood. We only know “Amy” through the Signal voice call we’re on together, alongside more than eight hundred others, all trying to coordinate sightings throughout South Minneapolis. Amy drives a silver Subaru and is directly in front of us, expertly tailing a black Wagoneer with two masked agents in front. The Wagoneer skips a red light to try and lose us, but Amy’s fast. She bolts across the intersection, Bullitt-style, and we follow just behind, shouting inside the car, GO AMY! WE LOVE YOU! “I’m gonna fucking marry Amy,” my friend says. “You think it’s chill to propose over this call?”

You can’t walk for ten minutes in my neighborhood without seeing them: boxy SUVs, mostly domestic-made, with tinted windows and out-of-state plates. Two men riding in front, dressed in tactical gear. Following behind is a train of three or four cars, honking. Sometimes there are bikers, too, blowing on neon-colored plastic whistles that local businesses give out for free. Every street corner has patrollers on foot, yelling and filming when a convoy rolls by.

If the ICE vehicles pull over, people flood the street. Crowds materialize seemingly out of nowhere. The honking and whistling amps up, becoming an unignorable wail, and more people stream out of their houses and businesses. When agents leave their cars they’re met with jeers, mostly variations on “Fuck you.” Usually someone starts throwing snowballs. Agents pull out pepper spray guns, threatening protesters who get too close. If there’s enough of a crowd, they use tear gas. Meanwhile they go about their barbaric business: they’ve pulled someone out of their car or home and are shoving them into a vehicle, handcuffed. Over the noise, an observer tries to ask the person being detained for their name and who they want contacted. Sometimes a detainee’s phone, keys, or a bag make it into an observer’s hands. Everyone is filming. The press is taking photos.

Soon the agents are back in their vehicles. They pull risky maneuvers to move through the crowd and speed off. No more than six or seven minutes have elapsed, and another neighbor has been kidnapped. Observers are left to deal with the wreckage: tow an abandoned car, contact family, sometimes collect children. There are lawyers on call, local tow companies offering free services, mutual aid groups to support families after an abduction. Some observers stay behind to do this kind of coordination, and some get back in their cars or on their bikes and speed off again. If enough people get there fast enough, ICE might back off next time. At a minimum, their cruelty can’t go unchallenged.

I’m in my kitchen typing out “do swim goggles protect you from tear gas.” The AI search response that I’ve failed to disable tells me they can “help significantly.” I laugh at this ridiculous tableau. The local ACE Hardware store posted on Facebook that they’ve stocked up on respirators and safety goggles. What I once considered hardcore riot gear is now essential for leaving the house.

I live near the intersection of Chicago Avenue and Lake Street, two major South Minneapolis thoroughfares that mark the northwest corner of the Powderhorn Park neighborhood. My house is a mile north of where George Floyd was murdered by Minneapolis Police officer Derek Chauvin in 2020 and even closer to where Renee Good was murdered by ICE agent Jonathan Ross this month. Since the Department of Homeland Security initiated “Operation Metro Surge” in December, there have been at least half a dozen abductions that I know of on or around my block. A nearby house of recently arrived Ecuadorians used to be home to sixteen adults and six children. Six weeks into the federal invasion, only eight adults remain.

Citywide, hundreds of people are being abducted from their homes and separated from their families. Citizens are racially profiled and asked for papers. Exact numbers on detainees are unreliable, but the number of federal agents is roughly three thousand. These numbers are similar in scale to ICE operations in other cities across the US, including LA and Chicago, but what’s new in Minneapolis are the extreme tactics that federal agents are using to repress organized resistance. The stories circulating online and by word of mouth are harrowing: federal agents surrounding observer cars to trap them, then smashing car windows and dragging observers out; agents spraying mace six inches from someone’s face or spraying mace into intake vents so that the insides of cars are immediately flooded; agents suddenly braking at seventy miles per hour on the freeway and forcing tailing vehicles to swerve; agents throwing observers to the ground and punching them in the face; agents taking observers on aimless rides around the city while taunting them with racial or sexual epithets; agents holding observers at the federal detention building for hours without access to phone calls or lawyers. (This is merely how ICE terrorizes US citizens.)

What also feels new is the frequent candor with which ICE agents are displaying hateful ideology. Two days after Good was murdered, DHS overtly referenced a Neo-Nazi anthem in a nationwide recruitment post. Agents seem to feel empowered to say new kinds of chilling things out loud. One told an observer: “Stop following us, that’s why that lesbian bitch is dead.” (He was referring to Good.) A friend of mine was sexually harassed by an ICE agent, who called them “too pretty” to stay locked up while in detention. Another was shoved to the ground and asked, “Do you like the dirt, queer?” Sometimes the behavior is simply bizarre. After an attempted abduction left a couple dozen observers standing on a neighborhood street, one ICE vehicle circled the block, broadcasting a looped audio recording of a woman screaming.

In these moments the whole situation can seem ridiculous. The professional kidnappers step out of their flashy American cars with their special outfits on. They wave their little mace guns at us, but we’re not scared—we have oversized ski goggles! A particularly comic element at play is that we’re in the middle of another winter with wild variations in temperature, meaning that Minneapolis streets are covered in thick sheets of ice. There are some heartwarming videos of agents falling down (“ICE on ice!”) but we slip too, running towards or away from them. It can feel kind of slapstick, until you remember that they will destroy someone’s life today, and that they can kill you.

A black gloved hand reaches out of the Wagoneer window and begins to give a princess wave to us, then the peace sign, then a thumbs up. They’re mocking us. The agents stop their vehicle suddenly but Amy brakes in time. Luckily, so do we. ICE has been using “brake-checks” as pretense for detaining observers. Another observer car pulls up and my city council member steps out. He strides up to the Wagoneer, blowing his whistle. (Absolutely everyone is confronting ICE—I’ve encountered my old boss from the local cafe scuffling with agents, too.) Someone on the street starts filming and the bicyclist we know in the chat as “small fry” shouts at the agents to get out of Minneapolis. We’re honking. The Wagoneer idles for a few minutes and then takes off towards the freeway. We follow until they’re on the exit ramp. It feels good to watch them leave the neighborhood, but I worry about where they’re headed next. We drive towards home and come across another two vehicles with observers tailing behind. Lake Street, a major corridor of immigrant businesses in the neighborhood, has been crawling with ICE vehicles every morning this week.

Powderhorn Park is a middle-class neighborhood known for its May Day parade, replete with larger-than-life puppets and steampunk Mad Max vehicles. Artists and families live here, and young queer people, and many immigrants, most arriving from Ecuador in recent years. The past few summers, the block south of me has become impassable every evening as hundreds of my Spanish-speaking neighbors use the park for massive volleyball tournaments. Food vendors set up tables and families bring lawn chairs to watch the games. Last year, two women sold grilled chicken on the corner closest to me. My neighbor’s lawn became a kind of informal restaurant, where customers would sit at the warping picnic table and eat. I bought their chicken a few times, and it was awesome.

A week into the invasion my neighbor with the picnic table called to ask if I was available to come with one of the two vendors to an immigration appointment. The woman had been contacted by USCIS that morning and was told to come in at 3 o’clock that same afternoon. She was worried she could be detained on the spot and had a newborn with her. Several neighbors gathered to arrange a ride, but in the end she only wanted a lawyer and translator to attend with her. I heard later that at the appointment she announced she wanted to self-deport, trading a planned exit for the fear of being taken at random. Her sister, the other vendor, is still here. The Saturday after Good’s murder, she and I sit with a small group of volunteers gathered to talk about how to improve rideshare coordination over WhatsApp. She tells us in Spanish that migrants can’t use corporate rideshare services because there have been reports of Uber drivers taking people directly to ICE. Of the more than two hundred people in the rideshare text thread, half are citizens offering rides and half are requesting. “I like being in this group because I’m meeting so many neighbors I would not have met otherwise,” someone says at the meeting. “I hope we stay connected after this is all over.”

by Erin West, N+1 |  Read more:
Image: uncredited

Saturday, February 21, 2026

Supreme Court Strikes Down Trump Tariffs

Why the “Lesser Included Action” Argument for IEEPA Tariffs Fails

The Supreme Court yesterday struck down Trump’s IEEPA tariffs, holding that the statute’s authorization to “regulate… importation” doesn’t include the power to impose tariffs. The majority’s strongest argument is simple: every time Congress actually delegates tariff authority, it uses the word “duty,” caps the rate, sets a time limit, and requires procedural prerequisites. IEEPA has none of these.

The dissent pushes back with an intuitively appealing argument: IEEPA authorizes the President to prohibit imports entirely, so surely it authorizes the lesser action of merely taxing them. If Congress handed over the nuclear option, why would it withhold the conventional weapon? Indeed, in his press conference, Trump made exactly this argument in his rambling manner:
“I am allowed to cut off any and all trade…I can destroy the trade, I can destroy the country, I’m even allowed to impose a foreign country destroying embargo…I can do anything I want to do to them…I’m allowed to destroy the country, but I can’t charge a little fee.”
The argument is superficially appealing, but it fails due to a standard result in principal-agent theory.

Congress wants the President to move fast in a real emergency, but it doesn’t want to hand over routine control of trade policy. The right delegation design is therefore a screening device: give the President authority he will exercise only when the situation is truly an emergency.

An import ban works as a screening device precisely because it is very disruptive. A ban creates immediate and substantial harm. It is a “costly signal.” A President who invokes it is credibly saying: this is serious enough that I am willing to absorb a large cost. Tariffs, in contrast, are cheaper, especially for the President. Tariffs raise revenue, which offsets political pain. Tariff incidence is diffuse and easy to misattribute: prices creep, intermediaries take the blame, consumers don’t observe the policy lever directly. Most importantly, tariffs are adjustable, which makes them a weapon useful for bargaining, exemptions, and targeted favors. Tariffs under executive authority implicitly carry the message: I am the king; give me a gold bar and I will reduce your tariffs. That flexibility makes tariffs more politically appealing than a ban and thus a less credible signal of an emergency. The “lesser-included” argument gets the logic backwards. The asymmetry is the point.
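
The delegation-as-screening logic can be made concrete with a toy payoff model. All numbers below are invented purely for illustration; the point is only that a costly instrument separates the “true emergency” type from the “routine leverage” type, while a cheap instrument does not.

```python
# Toy screening model. All payoffs are made-up numbers for exposition,
# not drawn from the post.
#
# Two presidential "types": one facing a genuine emergency, one who
# merely wants routine trade leverage. Each instrument carries a
# political cost to the President and a type-dependent benefit.

# benefit[type][instrument]
BENEFIT = {
    "emergency": {"ban": 10.0, "tariff": 6.0},  # in a real emergency, acting matters a lot
    "routine":   {"ban": 2.0,  "tariff": 6.0},  # in normal times, tariffs are handy leverage
}

# Political cost of each instrument to the President: a ban is
# disruptive and self-limiting (high cost); a tariff raises revenue
# and diffuses blame (low cost).
COST = {"ban": 5.0, "tariff": 1.0}

def invokes(ptype: str, instrument: str) -> bool:
    """A president uses an instrument only if its benefit exceeds its cost."""
    return BENEFIT[ptype][instrument] > COST[instrument]

for instrument in ("ban", "tariff"):
    users = [t for t in BENEFIT if invokes(t, instrument)]
    print(instrument, "->", users)
```

Under these assumed payoffs, only the emergency type finds a ban worth its cost, so invoking one credibly signals an emergency; both types happily impose tariffs, so a tariff signals nothing about whether an emergency exists.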

Not surprisingly, the same structure appears in real emergency services. A fire chief may have the authority to close roads during an emergency but that doesn’t imply that the fire chief has the authority to impose road tolls. Road closure is costly and self-limiting — it disrupts traffic, generates immediate complaints, and the chief has every incentive to lift it as soon as possible. Tolls are cheap, adjustable, and once in place tend to persist; they generate revenue that can fund the agency and create constituencies for their continuation. Nobody thinks granting a fire chief emergency closure authority implicitly grants them taxing authority, even if the latter is a lesser authority. The closure and toll instruments have completely different political economy properties despite operating on the same roads.

The majority reaches the right conclusion by noting that tariffs are a tax over which Congress, not the President, has authority. That is constitutionally correct but the deeper question is why the Framers lodged the taxing power in Congress — and the answer is political economy. Revenue instruments are especially easy for an executive to exploit because they can be targeted. The constitutional rule exists to solve that incentive problem.

by Alex Tabarrok, Marginal Revolution | Read more:
Image: uncredited/via
[ed. Making Congress do its job, even when it doesn't want to... See also: Justice Gorsuch Tries to Revive Congress (WSJ):]
***
As they wait out the latest winter storm, Members of Congress ought to spend time reading Justice Neil Gorsuch’s concurring opinion in the Supreme Court’s rejection of President Trump’s claim of emergency power to impose tariffs (Learning Resources v. Trump). The Justice has more confidence in Congress than the Members themselves do these days.

Justice Gorsuch rides shotgun to Chief Justice John Roberts’s excellent majority opinion, and he mows down both the dissents and the concurring opinion by liberal Justice Elena Kagan. It’s an intellectual tour de force. But his main theme isn’t an assertion of judicial power. It’s an effort to encourage Congress to reclaim its proper authority under the Constitution’s separation of powers. [...]

In our view, the recent weakness of Congress vis-à-vis the President has many causes. Political polarization and narrow majorities make it harder for bipartisan coalitions to form. Media focus on the Presidency draws more readers than do stories on legislative process. The failure of civic education about the American system produces a public that is more susceptible to demagoguery and political idolatry.

But as Justice Gorsuch makes clear, the difficulty of passing legislation is a constitutional feature, not a fault. “Deliberation tempers impulse, and compromise hammers disagreements into workable solutions,” he writes. “And because laws must earn such broad support to survive the legislative process, they tend to endure.” He rightly calls the legislative process “the bulwark of liberty.”

Thursday, February 19, 2026

Defense Dept. and Anthropic Square Off in Dispute Over A.I. Safety

For months, the Department of Defense and the artificial intelligence company Anthropic have been negotiating a contract over the use of A.I. on classified systems by the Pentagon.

This week, those discussions erupted in a war of words.

On Monday, a person close to Defense Secretary Pete Hegseth told Axios that the Pentagon was “close” to declaring the start-up a “supply chain risk,” a move that would sever ties between the company and the U.S. military. Anthropic was caught off guard and internally scrambled to pinpoint what had set off the department, two people with knowledge of the company said.

At the heart of the fight is how A.I. will be used in future battlefields. Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop, two people involved in the discussions said.

But Mr. Hegseth and others in the Pentagon were furious that Anthropic would resist the military’s using A.I. as it saw fit, current and former officials briefed on the discussions said. As tensions escalated, the Department of Defense accused the San Francisco-based company of catering to an elite, liberal work force by demanding additional protections.

The disagreement underlines how political the issue of A.I. has become in the Trump administration. President Trump and his advisers want to expand technology’s use, reducing export restrictions on A.I. chips and criticizing state regulations that could be perceived as inhibitors to A.I. development. But Anthropic’s chief executive, Dario Amodei, has long said A.I. needs strict limits around it to prevent it from potentially wrecking the world.

Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology, said it was important that the relationship between the Pentagon and Anthropic not be doomed.

“There are war fighters using Anthropic for good and legitimate purposes, and ripping this out of their hands seems like a total disservice,” she said. “What the nation needs is both sides at the table discussing what can we do with this technology to make us safer.” [...]

The Defense Department has used Anthropic’s technology for more than a year as part of a $200 million A.I. pilot program to analyze imagery and other intelligence data and conduct research. Google, OpenAI and Elon Musk’s xAI are also part of the program. But Anthropic’s A.I. chatbot, Claude, was the most widely used by the agency — and the only one on classified systems — thanks to its integration with technology from Palantir, a data analytics company that works with the federal government, according to defense officials with knowledge of the technology...

On Jan. 9, Mr. Hegseth released a memo calling on A.I. companies to remove restrictions on their technology. The memo led A.I. companies including Anthropic to renegotiate their contracts. Anthropic asked for limits to how its A.I. tools could be deployed.

Anthropic has long been more vocal than other A.I. companies on safety issues. In a podcast interview in 2023, Dr. Amodei said there was a 10 to 25 percent chance that A.I. could destroy humanity. Internally, the company has strict guidelines that bar its technology from being used to facilitate violence.

In January, Dr. Amodei wrote in an essay on his personal website that “using A.I. for domestic mass surveillance and mass propaganda” seemed “entirely illegitimate” to him. He added that A.I.-automated weapons could greatly increase the risks “of democratic governments turning them against their own people to seize power.”

In contract negotiations, the Defense Department pushed back against Anthropic, saying it would use A.I. in accordance with the law, according to people with knowledge of the conversations.

by Sheera Frenkel and Julian E. Barnes, NY Times | Read more:
Image: Kenny Holston/The New York Times
[ed. The baby's having a tantrum. So, Anthropic is now a company "catering to an elite, liberal work force"? I can't even connect the dots. Somebody (Big Daddy? Congress? ha) needs to take him out of the loop on these critical issues (AI safety) or we're all, in technical terms, 'toast'. The military should not be dictating AI safety. It's also important that other AI companies show support and solidarity on this issue or face the same dilemma.]