Saturday, March 14, 2026

Sam Altman and OpenAI Under Fire

It’s finally happening. Altman’s bad behavior is catching up to him.

The board fired Altman, once AI’s golden boy, in November 2023 not because AGI had been achieved (that still hasn’t happened) but because he was “not consistently candid,” just like they said.

And now, at long last, the world sees what the board saw, and what I saw (and what Karen Hao saw): putting someone who is not consistently candid in charge of a company with that much power to affect the world is not a good idea.

As I warned in August of 2024, questionable character in a man this powerful is dangerous:


Altman’s two-faced two-step — declaring “I support Dario” while negotiating behind his back and staying open to surveillance — was, for many people, the last straw. Millions of people, literally, are angry; many feel betrayed. Nobody wishes to be surveilled.

In reality, Altman was never really all that interested in AI for the “benefit of humanity.” Mostly he was interested in Sam. And money, and deals. A whole lot of people have finally put that all together.

Here’s OpenAI’s head of robotics, just now:


Zoe Hitzig had resigned just a few weeks earlier, over a different set of issues that also reflected poorly on Altman’s character:


And all this was entirely predictable. Altman is bad news. It was always just a matter of time before people started realizing how serious the consequences might be.

History will judge those who stay at his company. Anyone who wants to work on LLMs can work elsewhere. Anyone who wants to use LLMs should go elsewhere.

by Gary Marcus, On AI |  Read more:
Images: The Guardian; X; NY Times
[ed. For those not paying attention: after DOD tried and failed to strong-arm Anthropic into giving it carte blanche to do anything it wanted with Anthropic's AI model Claude (then subsequently designated the company a "supply chain risk"), OpenAI (and Microsoft) immediately stepped into the breach and cut a deal, the details of which are still not fully known. On its face, however, the deal appears to give DOD everything it wanted from Anthropic: mass surveillance and fully autonomous (i.e., no humans involved) operational capabilities. Altman is the head of OpenAI and its ChatGPT model.

See also: The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It (Futurism):
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it’s being barraged with an intensity of backlash that the company has never seen.

Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that gives it unrestricted access to its AI system — even in the face of government threats to seize the company’s tech. Claude quickly surged to the top of the app store, supplanting OpenAI’s chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent.
***
Also this: Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism (Guardian):
OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time...

Here’s what triggered it. Early this year, the news broke that OpenAI’s president, Greg Brockman, donated $25m to Maga Inc, Donald Trump’s biggest Super Pac. This made him Trump’s largest donor of the last cycle. When Wired asked him to explain, Brockman said his donations were in service of OpenAI’s mission to benefit “humanity.”

Let me tell you what that mission looks like in practice. Employees of ICE – the agency that was involved in the killing of two people in Minneapolis in January – have used a screening tool powered by ChatGPT. The same company behind your friendly chatbot is helping the government decide who to hire for deportation raids.

And it’s not stopping there. Brockman also helped launch a $125m lobbying initiative, a Super Pac, to make sure no state can regulate AI. It’s attacking any politician who tries to pass safety laws. It wants Trump, and only Trump, to write the rules for the most powerful technology on earth. Every month, subscription money from users around the world flows to a company that is embedding itself in the repressive infrastructure of the Trump administration. That is not a conspiracy theory. It is a business strategy.

Things got even worse last week. When the Trump administration demanded that AI companies give the Pentagon unrestricted access to their technology – including for mass surveillance and autonomous weapons – Anthropic, the company behind ChatGPT’s main competitor, Claude, refused.

The retaliation was swift and extraordinary. Trump ordered every federal agency to stop using Anthropic’s technology. Secretary of war Pete Hegseth declared the company a “supply-chain risk to national security”, a designation normally reserved for Chinese firms such as Huawei. He announced that anyone who does business with the US military is barred from working with Anthropic. This is essentially a corporate death sentence, for the crime of refusing to help build killer robots.

And what did OpenAI do? That same Friday night, while his competitor was taking a principled stance, Sam Altman quietly signed a deal with the Pentagon to take Anthropic’s place.
***
[ed. From the comments section in Marcus' post:

Shanni Bee: 
Great. Amen.

But what remains unsaid (...even by you, Mr. Marcus, from what I've seen, which is surprising) is that Anthropic are not good guys. The whole "ethical AI company" thing is nothing but vibes. Sure, Anthropic (rightly) stood up to DoW in this case, but they still have a massive contract with Palantir (pretty much one of the worst companies on earth). Colonel Claude is complicit in bombings of Iran & Venezuela + Gaza GENOCIDE.

...Or maybe with the (admittedly BS) "supply chain risk" designation, Anthropic no longer does business with Palantir? That would be great for everyone (including them).

Either way, there is NO ethical AI company. People need to stop giving Anthropic flowers for doing the right thing in this one case while completely ignoring their complicity w/ Palantir & in documented war crimes.
Gary Marcus

indeed, i have a sequel planned about that, working title “There are no heroes in commercial AI” or something like that
***
[ed. Finally, there's this little coda from Zvi Mowshowitz's DWAtV that puts everything in perspective:

It’s really annoying trying to convince people that if you have a struggle for the future against superintelligent things that You Lose. But hey, keep trying, whatever works.
Ab Homine Deus: To the "Superintelligence isn't real and can't hurt you" crowd. Let's say you're right and human intelligence is some kind of cosmic speed limit (LOL). So AI plateaus something like 190 IQ. What do you think a million instances of that collaborating together looks like?

Arthur B.: At 10,000x the speed

Noah Smith: This is the real point. AI is superintelligent because it can think like a human AND have all the superpowers of a computer at the same time...
Timothy B. Lee: I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."

Noah Smith: Yes. Regular people don't yet realize that AI people think they're building something that will destroy the human race.

Basically, about half of AI researchers are optimists, while the other half are intentionally building something they think could easily lead to their own death, the death of their children and families and friends, and the death of their entire species.

[ed. Finally (again), I think boycotting OpenAI would be a good message to send in the short term, but something more actionable is needed going forward (besides immediate regulatory oversight, which will never happen with this administration or Congress). Fortunately there's just such a movement afoot: pausing all AI research advances until they can be adequately vetted. It's called (of course) PauseAI (details here and here), with a rally planned for April 13, 2026. Please consider joining or participating.]

[ed. Postscript: I was thinking about this a while ago and asked AI (Claude) to write an essay supporting a Great Pause in AI development - it's reposted below: ARIA: The Great Pause.]