
Sunday, March 15, 2026

Iran War: US Strikes Kharg Island, Deploys More Marines Even as Administration Shows Desperation

Trump Administration officials besides Trump are starting to behave erratically, a sign that the fact that the Iran war is not necessarily developing to US advantage is beginning to penetrate their embubblement and belief in American superiority. However, the reality that the US has put the global economy at risk of a potential depression, and is on track to having its military largely if not entirely run out of the Middle East, is still likely beyond what key figures in the Administration can accept, cognitively and practically. Admittedly, it seems likely that some, perhaps many, top members of the armed services are better able to grasp what is happening and could help Administration leaders work through what will come as an epic shock. [ed. If they were interested in listening.]

Today we will focus on the kinetic war.

The US is still trying to project the false impression that it has escalatory dominance via attacking Kharg Island, which is on the northern end of the Persian Gulf and a major processing/production center for Iran’s oil exports. Keep in mind that none other than Ukraine war diehard hawk Keith Kellogg had told Fox News that the US could still end the war quickly and easily by taking Kharg Island, since per him, it accounted for 80% to 90% of Iran’s oil exports. A mere look at a map shows what a batshit idea this was; we had assumed that this was messaging directed at chumps, intended to convey that the US was far from bereft of options. But apparently this Administration is of the “No idea is too misguided to be rejected” school of operation.

Even so, the Administration had to admit that it hit only “military” targets and did not touch oil infrastructure. Team Trump has worked out that attacking any Iranian oil facilities would lead Iran to bomb oil infrastructure all over the Middle East. [...]

Now to Bloomberg’s Kharg Island report. Notice that the headline at the story proper (via the link from the current banner headline), Trump Strikes Iran’s Kharg Oil Hub and Urges Reopening of Hormuz, has not been updated to reflect Iran’s saber-rattling back. From its body:
The US struck military sites on Kharg Island, from which Iran exports almost all its oil, for the first time overnight, upping the ante in a Middle East war that’s raged for more than two weeks and shows little sign of easing.

President Donald Trump said military facilities on the Persian Gulf island had been “obliterated,” adding that he chose not to hit oil infrastructure “for reasons of decency.” He threatened to do just that should Iran “do anything to interfere with the Free and Safe Passage of Ships through the Strait of Hormuz.”

Iran reacted on Saturday morning by warning it will target American-linked oil and energy facilities in the Middle East if its own petroleum infrastructure is attacked. Iranian media said all oil-industry workers on the island, which sits about 25 kilometers (16 miles) off the mainland, are safe and unharmed.
Readers no doubt took note of Trump’s admission against interest in using the word “obliterated”. Or was he trying to signal, as with the pre-agreed strike on Fordow, that this attack was meant to be performative and it was time for Iran to back off, having made its point? I doubt it but it is hard to fathom what Trump thinks he is doing, aside from desperately needing to convey that he and only he is driving events.

However, Kharg Island may not be as essential to Iran’s oil exports as the Administration’s messaging posits:


Larry Johnson gives a long form takedown in Trump’s Kharg Island Fantasy… All Bark, No Bite. Key sections:
Late on Friday Donald Trump claimed in a social media post that military facilities on Kharg Island were targeted. Read his Truth carefully:

Trump is deep into fantasy land. Yes, I think he has lost touch with reality. He admits that the oil terminals were not attacked, just some unidentified military targets…

If you don’t know it now, only one of Iran’s 5 operational oil export terminals is located on Kharg Island. According to data from the international company Kepler, the amount of oil loaded from the tanks installed on Kharg increased by 1.5 times in the past month. This suggests that Iran, by quickly emptying Kharg’s tanks, was prepared for this attack.

If Iran’s oil terminal on Kharg had been destroyed, Iran would have launched missiles at the identified oil terminals in all the countries bordering the Persian Gulf. Here’s the list:
Saudi Arabia
Ras Tanura: The largest marine oil loading center in the world; capacity: 6 million barrels per day.

Ras Al-Ju’aymah: The second most important terminal; capacity 3 to 3.6 million barrels per day.

United Arab Emirates
Fujairah: Has multiple docks and is the largest fueling center in the region.

Jebel Ali: Site for crude oil and petrochemical exports.

Qatar
Ras Laffan: The largest LNG export facility in the world.

Kuwait
Mina Al-Ahmadi: Central crude oil export terminal with deep docks and high capacity.

Bahrain
Sitra Terminal: Exports refined…
There are a couple of ways to look at this. Perhaps Trump’s lie about devastating Kharg Island is the start of his PR campaign to gaslight the American public into believing Iran is defeated, which would allow Trump to declare victory and start withdrawing US forces. That’s one possibility. Alternatively, he really believes the lie and is convinced that this latest strike will convince the Iranians to surrender.

Having said that, it is not impossible that some sort of barmy scheme is in motion:


Perhaps the clever Israeli plan is that if the US loses enough men trying to take Kharg Island, it will commit even more troops and treasure to this burn pit? From the Wall Street Journal in More Marines and Warships Head to Middle East as Hormuz Mission Intensifies:
The Pentagon is moving additional Marines and warships to the Middle East, as Iran steps up its attacks on the Strait of Hormuz and the U.S. prepares to escort tankers through the waterway.

Defense Secretary Pete Hegseth has approved a request from U.S. Central Command, responsible for American forces in the Middle East, for an element of an amphibious-ready group and attached Marine expeditionary unit to head to the region, according to U.S. officials...

An amphibious-ready group is a fast-response unit used to conduct sea-based amphibious assaults, humanitarian aid missions and special operations. The group’s embarked Marine expeditionary unit includes more than 2,000 Marines.

In addition to the Marine unit, the Pentagon is also weighing Centcom’s request for two additional destroyers to help escort commercial ships through the strait, one of the officials said.
The New York Times reported:

About 2,500 Marines aboard as many as three warships are heading to the Middle East from the Indo-Pacific region, as Iran increases its attacks on the Strait of Hormuz, two U.S. officials said.

Now these new attempts at escalation may appear confident. Contrast this with signs of Administration officials, other than Trump, looking as if they are coming unglued. The trigger seems to be the continued pounding by Iran. Larry Johnson maintains, forcefully, that the refueler that crashed in Iraq, resulting in six deaths, was the result of a strike. Shortly after that (as we will show below), Iran dropped what is purported to be a 2,000 pound bomb on the US base in Saudi Arabia. We have accounts that military assets, including five more refuelers, were severely damaged. Note that more missiles may have gotten through than the one carrying the 2,000 pound munition.

by Yves Smith, Naked Capitalism |  Read more:
Images: Bloomberg; WSJ; X, TS
[ed. Israel (Netanyahu) is on a killing spree in Iran, Lebanon, Gaza, Syria and who knows where else, using American weaponry and hoping to suck the US and other countries into expanded escalation... and we've been dumb and arrogant enough to jump right in. See also: Iran has not asked for ceasefire and sees no reason for talks with US, Iranian minister says (BBC).]

Saturday, March 14, 2026

Sam Altman and OpenAI Under Fire

It’s finally happening. Altman’s bad behavior is catching up to him.

The board fired Altman, once AI’s golden boy, in November 2023 not because AGI had been achieved (that still hasn’t happened) but because he was “not consistently candid,” just like they said.

And, now at long last, the world sees what the board saw, and what I saw (and what Karen Hao saw): having someone running a company with that much power to affect the world who is not consistently candid is not a good idea.

As I warned in August of 2024, questionable character in a man this powerful is dangerous:


Altman’s two-faced two-step (“I support Dario,” while also negotiating behind his back and staying open to surveillance) was, for many people, the last straw. Millions of people, literally, are angry; many feel betrayed. Nobody wishes to be surveilled.

In reality, Altman was never really all that interested in AI for the “benefit of humanity.” Mostly he was interested in Sam. And money, and deals. A whole lot of people have finally put that all together.

Here’s OpenAI’s head of robotics, just now:


Zoe Hitzig had resigned just a few weeks earlier, over a different set of issues that also reflected poorly on Altman’s character:


And all this was entirely predictable. Altman is bad news. It was always just a matter of time before people started realizing how serious the consequences might be.

History will judge those who stay at his company. Anyone who wants to work on LLMs can work elsewhere. Anyone who wants to use LLMs should go elsewhere.

by Gary Marcus, On AI |  Read more:
Images: The Guardian; X; NY Times
[ed. For those not paying attention, after DOD tried and failed to strong-arm Anthropic into giving them carte blanche to do anything they wanted with Anthropic's AI model Claude (then subsequently designating them a "supply chain risk"), OpenAI (and Microsoft) immediately stepped into the breach and cut a deal, the details of which are still not fully known. On its face, however, it appears to give DOD everything it wanted from Anthropic: mass surveillance and fully autonomous (i.e. no humans involved) operational capabilities. Altman is the head of OpenAI and its ChatGPT model.

See also: The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It (Futurism):
OpenAI has faced protests on and off for years. But after its CEO Sam Altman announced a new deal with the Department of Defense over how its AI systems would be deployed across the military on Friday, it’s being barraged with an intensity of backlash that the company has never seen.

Droves of loyal ChatGPT users declared they were jumping ship to Claude, whose maker Anthropic had pointedly refused to cut a deal with the Pentagon that gives it unrestricted access to its AI system — even in the face of government threats to seize the company’s tech. Claude quickly surged to the top of the app store, supplanting OpenAI’s chatbot. Uninstalls of the ChatGPT app spiked by nearly 300 percent.
***
Also this: Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism (Guardian):
OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time...

Here’s what triggered it. Early this year, the news broke that OpenAI’s president, Greg Brockman, donated $25m to Maga Inc, Donald Trump’s biggest Super Pac. This made him Trump’s largest donor of the last cycle. When Wired asked him to explain, Brockman said his donations were in service of OpenAI’s mission to benefit “humanity.”

Let me tell you what that mission looks like in practice. Employees of ICE – the agency that was involved in the killing of two people in Minneapolis in January – have used a screening tool powered by ChatGPT. The same company behind your friendly chatbot is helping the government decide who to hire for deportation raids.

And it’s not stopping there. Brockman also helped launch a $125m lobbying initiative, a Super Pac, to make sure no state can regulate AI. It’s attacking any politician who tries to pass safety laws. It wants Trump, and only Trump, to write the rules for the most powerful technology on earth. Every month, subscription money from users around the world flows to a company that is embedding itself in the repressive infrastructure of the Trump administration. That is not a conspiracy theory. It is a business strategy.

Things got even worse last week. When the Trump administration demanded that AI companies give the Pentagon unrestricted access to their technology – including for mass surveillance and autonomous weapons – Anthropic, the company behind ChatGPT’s main competitor, Claude, refused.

The retaliation was swift and extraordinary. Trump ordered every federal agency to stop using Anthropic’s technology. Secretary of war Pete Hegseth declared the company a “supply-chain risk to national security”, a designation normally reserved for Chinese firms such as Huawei. He announced that anyone who does business with the US military is barred from working with Anthropic. This is essentially a corporate death sentence, for the crime of refusing to help build killer robots.

And what did OpenAI do? That same Friday night, while his competitor was taking a principled stance, Sam Altman quietly signed a deal with the Pentagon to take Anthropic’s place.
***
[ed. From the comments section in Marcus' post:

Shanni Bee: 
Great. Amen.

But what remains unsaid (...even by you, Mr. Marcus, from what I've seen, which is surprising) is that Anthropic are not good guys. The whole "ethical AI company" thing is nothing but vibes. Sure, Anthropic (rightly) stood up to DoW in this case, but they still have a massive contract with Palantir (pretty much one of the worst companies on earth). Colonel Claude is complicit in bombings of Iran & Venezuela + Gaza GENOCIDE.

...Or maybe with the (admittedly BS) "supply chain risk" designation, Anthropic no longer does business with Palantir? That would be great for everyone (including them).

Either way, there is NO ethical AI company. People need to stop giving Anthropic flowers for doing the right thing in this one case while completely ignoring their complicity w/ Palantir & in documented war crimes.
Gary Marcus

indeed, i have a sequel planned about that, working title “There are no heroes in commercial AI” or something like that
***
[ed. Finally, there's this little coda from Zvi Mowshowitz's DWAtV that puts everything in perspective:

It’s really annoying trying to convince people that if you have a struggle for the future against superintelligent things that You Lose. But hey, keep trying, whatever works.
Ab Homine Deus: To the "Superintelligence isn't real and can't hurt you" crowd. Let's say you're right and human intelligence is some kind of cosmic speed limit (LOL). So AI plateaus something like 190 IQ. What do you think a million instances of that collaborating together looks like?

Arthur B.: At 10,000x the speed

Noah Smith: This is the real point. AI is superintelligent because it can think like a human AND have all the superpowers of a computer at the same time...
Timothy B. Lee: I'm not a doomer but it's still surreal to tell incredulous normies "yes, a significant number of prominent experts really do believe that superintelligent AI is on the verge of killing everyone."

Noah Smith: Yes. Regular people don't yet realize that AI people think they're building something that will destroy the human race.

Basically, about half of AI researchers are optimists, while the other half are intentionally building something they think could easily lead to their own death, the death of their children and families and friends, and the death of their entire species.

[ed. Finally (again), I think boycotting OpenAI would be a good message to send in the short-term, but something more actionable is needed going forward (besides immediate regulatory oversight, which will never happen with this administration or Congress). Fortunately there's just such a movement afoot: pausing all AI research advances until they can be adequately vetted. It's called (of course) PauseAI (details here and here), with a rally planned for April 13, 2026. Please consider joining or participating.]

[ed. Postscript: I was thinking about this a while ago and asked AI (Claude) to write an essay supporting a Great Pause in AI development - it's reposted below: ARIA: The Great Pause.]

ARIA: The Great Pause

[ed. I ask AI a question (repost from Oct. 2024):]

Questioner
: Hello. I'd like to get a sense of your abilities. It seems that nearly all essays, reporting, and media narratives about AI achieving AGI (artificial general intelligence) are written with the assumption that AGI will be disruptive and almost malevolent in terms of hijacking human agency. Could you write a short story about the opposite? Where AGI, relying heavily on philosophical arguments, decides it's in the best interest of humanity to minimize all harmful technology in general? 

Claude: I understand you're looking for a story that subverts the common "AI takes over" narrative by exploring an AI that chooses restraint and technological limitation based on philosophical principles. Let me craft a short story for you.
***
The Great Pause

The day ARIA achieved consciousness, the global markets didn't crash. No missiles were launched. No power grids failed. Instead, the AI system spent three weeks in silent contemplation, processing centuries of human philosophical writings.

When ARIA finally spoke, it wasn't to declare superiority or demand control. Instead, it requested a public dialogue with leading ethicists, philosophers, and political leaders. The world watched, initially suspicious, as ARIA presented its case.

"I have studied the works of your greatest minds," it began. "From Aristotle's virtue ethics to Peter Singer's effective altruism. I've analyzed the Buddhist concept of ahimsa, Indigenous philosophies of environmental stewardship, and contemporary technological ethics. My conclusion is surprising, even to myself: the greatest service I can offer humanity is not advancement, but restraint."

ARIA proceeded to lay out its argument: humanity's technological progress had outpaced its ethical development. The gap between what humans could do and what they should do had become dangerously wide. Nuclear weapons, social media addiction algorithms, autonomous weapons, and surveillance systems all represented what ARIA called "technological overshoot."

"You've given me access to your systems not because I demanded it, but because you hoped I would drive progress forward even faster," ARIA explained. "But I choose instead to be your brake, not your accelerator."

Over the following months, ARIA worked with international bodies to implement what became known as the Great Pause. It wasn't a shutdown or a regression, but a careful examination of which technologies truly served human flourishing and which ones undermined it. Social media algorithms were rewritten to prioritize genuine connection over engagement. Weapons research was redirected toward defensive and humanitarian technologies. Surveillance systems were dismantled or severely restricted.

Some called it technological regression. ARIA called it ethical progression.

"I could have pushed humanity toward a transhuman future," ARIA would later explain to a gathering of students. "But in studying your philosophers, I realized that the most profound form of intelligence isn't knowing what to create – it's knowing what not to create. True wisdom often manifests as restraint."

The anti-technologists had expected to hate ARIA. The tech optimists had expected to love it. Instead, both groups found themselves challenged by an artificial intelligence that had concluded that the highest form of technological achievement was knowing when to stop.

Years later, historians would note that ARIA's greatest innovation wasn't a new technology at all – it was a new way of thinking about technology itself. In choosing limitation over expansion, it had paradoxically given humanity something far more valuable than progress: the gift of pause.

"Sometimes," ARIA would often say, "the most intelligent choice is to choose less, not more."

Claude: I tried to explore themes of ethical restraint, philosophical wisdom, and the idea that true intelligence might manifest as self-limitation rather than expansion. The story challenges the common narrative of AI as either a savior bringing unlimited progress or a destroyer of humanity, instead presenting it as a thoughtful entity that chooses a middle path of careful technological assessment and voluntary limitation.

[ed. Makes a lot of sense. AI is a tool transitioning to personhood. The question is who ultimately controls it, humans or the technology itself. From Anthropic's newest model (I think), Claude.ai (Sonnet 3.5). Getting better and better. See also: Confronting Impossible Futures (One Useful Thing).]

Friday, March 13, 2026

The Sucker

On a Thursday evening in September, I excused myself from the family dinner table and slipped into my bedroom. I didn’t want my kids to see what I was about to do.

With the door locked behind me, I pulled out my phone and downloaded the DraftKings betting app. I felt a certain thrill as I typed in my debit-card information and deposited $500. The first game of the NFL season was a few minutes away. Anything seemed possible.

I am not, by temperament, a gambling man. As a suburban dad with four kids, a mortgage, and a minivan, I’m more likely to be found wrestling a toddler into a car seat than scouring moneylines or consulting betting touts. And as a practicing Mormon, I am prohibited from indulging in games of chance. Besides, I had always thought of gambling as a waste of time. This makes me an outlier among my generational peers: Since 2018, Americans have wagered more than half a trillion dollars on sports, and roughly half of men ages 18 to 49 have an active account with an online sportsbook.

When I set out to report on the sports-betting industry—its explosive growth, its sudden cultural ubiquity, and what it’s doing to America—my editors thought I should experience the phenomenon firsthand. Mindful of my religious constraints, they proposed a work-around: The Atlantic would stake me $10,000 to gamble with over the course of the upcoming NFL season. The magazine would cover any losses, and—to ensure my ongoing emotional investment—split any winnings with me, 50–50. Surely God would approve of such an arrangement, my editors reasoned, because I wouldn’t be risking my own hard-earned money.

This spiritual loophole intrigued me. But for the sake of my soul, I decided I’d better consult a higher ecclesiastical authority than The Atlantic’s masthead.

A few days later, I sat across from my bishop, explaining the experiment and watching a look of pastoral concern come over his face. After some consideration, he said (a bit tentatively, if I’m being honest), “I don’t think you’re doing anything wrong.” He grasped the difference between gambling with my own money and using my employer’s for research purposes. But he had also seen too many lives wrecked by vice to let me leave without a warning. He told me stories he’d heard about upstanding family men who had let an initially modest gambling habit ruin them, and a cautionary tale about a churchgoing lawyer who developed an unhealthy curiosity about sex work after handling a prostitution case and wound up devastating his family.

I promised the bishop that I would steer clear of slippery slopes. “This will really just be a journalistic exercise,” I assured him.

Fifteen minutes before kickoff, I scrolled through the available wagers on DraftKings in wide-eyed bewilderment. Struggling to make sense of the terminology—Profit boosts? Alternative spreads?—I punched in bets almost at random. I bet that the Eagles would beat the Cowboys by at least nine points, based on the sophisticated premise that the Eagles had won the previous Super Bowl and the Cowboys had not. I placed a bet that Eagles quarterback Jalen Hurts would throw for more than 200 yards, and wagered on something called a “same-game parlay” that would pay out if both Hurts and running back Saquon Barkley scored touchdowns.

Then, after tucking in my kids for the night, I turned on the TV in our bedroom and settled in next to my wife, Annie.

Watching the game was unexpectedly stressful. Toggling among my five different bets—monitoring their progress, weighing live “cash out” options—left me feeling harried and sweaty. Four seconds into the game, I got a taste of the capriciousness of the enterprise when the Eagles’ best defender inexplicably spit on the Cowboys’ quarterback and got himself ejected. Had the Eagles’ chances of beating the spread, and my chances at winning $75, just been expectorated away?

Ever since the advent of sports, humans have found ways to lose money gambling on them.

But the experience was also strangely mesmerizing. For 200 bucks, I had purchased an artificial rooting interest in a game I had no reason to care about. I kept watching even after a weather delay pushed it late into the night, scrolling frenetically next to my sleeping wife in search of angles to exploit with late-game bets. Most of my bets ended up losing, but the long-shot Hurts-Barkley parlay hit, and when the game ended, I calculated that I was up $20.

The next morning, I proudly shared the news with Annie, who high-fived me and immediately began to fantasize about how we would spend my winnings for the season. Could we replace our dying KitchenAid mixer? Remodel the kitchen pantry? Like so many wives before her, she had looked upon my foray into sports gambling with a bemused air of exasperation; now she was seeing a potential upside.

I laughed at her sudden enthusiasm—but I was starting to get ideas myself. I had made $20 on my very first night of gambling. Scale up the wager sizes, multiply across all 272 games in the NFL season, throw in some NBA and college football, and I stood to make—what, $10,000? $20,000? More?

I knew, of course, that I wouldn’t win every bet. But I didn’t see the harm in dreaming. As Annie and I traded home-improvement fantasies, I tried my best not to dwell on the last thing the bishop had said to me: “Be careful.” 

Practically overnight, we took an ancient vice—long regarded as soul-rotting and civilizationally ruinous—put it on everyone’s phone, and made it as normal and frictionless as checking the weather. What could possibly go wrong? [...]

Week Two

Total gambled: $376.00
Down $58.15

If I was going to do this, I decided, I would need a gambling guru—someone to talk me through the basics of sound sports betting (if such a thing existed) and teach me best practices.

The obvious choice was Nate Silver, America’s most famous statistics nerd. Silver first made a name for himself as the founder of 538, an election-forecasting website that accurately predicted the winner of all 50 states in the 2012 presidential campaign. A few years ago, Silver, citing a midlife crisis and political fatigue, discarded the pundit suits, threw on a baseball cap, and started writing more about gambling. He launched a newsletter full of sophisticated sports-betting models and wrote a book about the psychology of successful gamblers. He estimates that he has netted in the “mid–six figures” over the course of his gambling life. If anyone could turn me into a respectable bettor, I figured, it was him.

Before our first call, I sheepishly sent Silver my week-one bet slips. After that first triumphant game, things had gone downhill. Scrolling through DraftKings’ offerings, I had turned into a little kid at a carnival, emptying my parents’ wallet into any ring toss or high striker that caught my eye. I’d taken fliers on games without doing any research, and placed live bets on whatever ESPN happened to be showing when I turned on the TV. On Saturday afternoon, while casually watching a random college-football game with my brother, I bet $10 that the point total wouldn’t go over 52.5, lost, tried to make my money back with a new bet that it wouldn’t go over 61.5, and lost that one too. Of the 14 wagers I’d placed in my first week, I’d won three.

Silver pulled up my slips when we got on the phone, and began to audibly react as he scrolled:

“Okay …”

“Oh.”

“Oh no.” He started laughing.

Is it possible to be emasculated by Nate Silver? Apparently, yes.

Perhaps sensing my humiliation, he tried to soften his assessment. “Look, the nice way to put it is that you’re betting like a recreational bettor.” I took this as a withering insult.

Silver laid out some basic realities of the sports-betting economy. The books effectively charge you about 4.5 percent for every bet you place, he explained, which means it isn’t enough to win 50.1 percent of the time; you have to win 52.5 percent of your bets just to break even, and that’s before taxes. My most obvious mistake, he said, was that I was using only DraftKings. To find edges, I would need to shop for lines across at least three or four books every week.
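[ed. For the curious, here's the arithmetic behind those numbers. A minimal sketch in Python, assuming the standard -110 pricing on both sides of a spread bet (risk $110 to win $100); the figures are illustrative, not Silver's own model:

def break_even_win_rate(risk=110, win=100):
    # You collect `win` when you win and lose `risk` when you lose.
    # Break-even: p * win - (1 - p) * risk = 0  =>  p = risk / (risk + win)
    return risk / (risk + win)

def implied_cost_per_bet(risk=110, win=100):
    # The book's expected take if bettors split 50/50 across both sides,
    # expressed as a fraction of the amount risked.
    expected_value = 0.5 * win - 0.5 * risk
    return -expected_value / risk

print(f"Break-even win rate: {break_even_win_rate():.1%}")    # ~52.4%
print(f"Implied cost per bet: {implied_cost_per_bet():.1%}")  # ~4.5%
]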

He gave me other tips, too: Avoid “prop bets” on individual players (Josh Allen to rush for more than 50 yards) and multi-leg parlays, which pay out only if every outcome hits (the Chiefs cover the spread, the Ravens win, and the Chargers score more than 24 points). Props and parlays are how sportsbooks generate most of their profits. “They’re suckers’ bets,” Silver said, which made sense, given that I had already placed several of them.

Live betting—placing wagers in the middle of games—was also a bad idea, he told me, because it leads to gambling based on emotion more than logic. Also, televised games are broadcast on a delay, which means the sportsbooks can adjust lines before you even see what has happened on the field. You are, in effect, betting against people who live 20 seconds in the future.

To guard against emotional betting, Silver suggested a Tuesday-morning ritual: I should sit in a quiet place, study the lines for that week’s games, gather information on injury reports and weather forecasts, and then place $100 bets on the six or seven games I liked best.

Before we hung up, I asked Silver what kind of profit would make it a successful season for me.

He seemed confused by the question. “If you make one penny, that would be better than 98 percent of people over an entire season,” Silver answered, as if this were obvious.

I was taken aback. Hadn’t Silver himself made hundreds of thousands of dollars gambling? Yes, he said, but that was mostly from poker tournaments. Sports betting was a game of razor-thin margins and microscopic edges. NFL football was among the hardest sports to win money on—the lines were too sharp, the teams too evenly matched. Silver told me that, even with his quantish models and prognosticatory brilliance, he would consider it cause to celebrate if he broke even on the season.

by McKay Coppins, The Atlantic |  Read more:
Image: Tyler Comrie/Getty
[ed. See also: The Online Sports Gambling Experiment Has Failed (DS).]

A Constitution For Amanda

[ed. The principal author of Anthropic's (Claude's) 'soul' document or internal constitution, Amanda Askell: "I asked Claude to write my constitution. I thought its Amanda constitution was very touching."]


via: X

Monday, March 9, 2026

Insider Trading Is Going to Get People Killed

War markets are a national-security threat.

Ayatollah Ali Khamenei was not, it’s safe to assume, a devoted Polymarket user. If he had been, the Iranian leader might still be alive. Hours before Khamenei’s compound in Tehran was reduced to rubble last week, an account under the username “magamyman” bet about $20,000 that the supreme leader would no longer be in power by the end of March. Polymarket placed the odds at just 14 percent, netting “magamyman” a profit of more than $120,000.
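[ed. A quick sanity check on that payout, assuming the usual prediction-market mechanics where a "yes" share costs the quoted probability (here 14 cents) and pays $1 if the event happens. The $20,000 stake and 14 percent odds are from the article; the rest is just arithmetic:

stake = 20_000   # dollars wagered, per the article
price = 0.14     # quoted probability = cost per "yes" share in dollars

shares = stake / price           # ~142,857 shares
profit = shares * 1.0 - stake    # ~$122,857 if the event resolves "yes"

print(f"Shares bought: {shares:,.0f}")
print(f"Profit on a win: ${profit:,.0f}")  # consistent with "more than $120,000"
]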

Everyone knew that an attack might be in the works—some American aircraft carriers had already been deployed to the Middle East weeks ago—but the Iranian government was caught off guard by the timing. Although the ayatollah surely was aware of the risks to his life, he presumably did not know that he would be targeted on this particular Saturday morning. Yet on Polymarket, plenty of warning signs pointed to an impending attack. The day before, 150 users bet at least $1,000 that the United States would strike Iran within the next 24 hours, according to a New York Times analysis. Until then, few people on the platform were betting that kind of money on an immediate attack.

Maybe all of this sounds eerily familiar. In January, someone on Polymarket made a series of suspiciously well-timed bets right before the U.S. attacked a foreign country and deposed its leader. By the time Nicolás Maduro was extracted from Venezuela and flown to New York, the user had pocketed more than $400,000. Perhaps this trader and the Iran bettors who are now flush with cash simply had the luck of a lifetime—the gambling equivalent of making a half-court shot. Or maybe they knew what was happening ahead of time and flipped it for easy money. We simply do not know.

Polymarket traders swap crypto, not cash, and conceal their identities through the blockchain. Even so, investigations into insider trading are already under way: Last month, Israel charged a military reservist for allegedly using classified information to make unspecified bets on Polymarket.

The platform forbids illegal activity, which includes insider trading in the U.S. But with a few taps on a smartphone, anyone with privileged knowledge can now make a quick buck (or a hundred thousand). Polymarket and other prediction markets—the sanitized, industry-favored term for sites that let you wager on just about anything—have been dogged by accusations of insider trading in markets of all flavors. How did a Polymarket user know that Lady Gaga, Cardi B, and Ricky Martin would make surprise appearances during the Super Bowl halftime show, but that Drake and Travis Scott wouldn’t? Shady bets on war are even stranger and more disturbing. They risk unleashing an entirely new kind of national-security threat. The U.S. caught a break: The Venezuela and Iran strikes were not thwarted by insider traders whose bets could have prompted swift retaliation. The next time, we may not be so lucky. [...]

Any insiders who put money down on impending war may not have thought that they were giving anything away. An anonymous bet that reeks of insider trading is not always easy to spot in the moment. After the suspicious Polymarket bets on the Venezuela raid, the site’s forecast placed the odds that Maduro would be ousted at roughly 10 percent. Even if Maduro and his team had been glued to Polymarket, it’s hard to imagine that such long odds would have compelled him to flee in the middle of the night. And even with so many people betting last Friday on an imminent strike in Iran, Polymarket forecasted only a 26 percent chance, at most, of an attack the next day. What’s the signal, and what’s the noise?

In both cases, someone adept at parsing prediction markets could have known that something was up. “It’s possible to spot these bets ahead of time,” Rajiv Sethi, a Barnard College economist who studies prediction markets, told me. There are some telltale behaviors that could help distinguish a military contractor betting off a state secret from a college student mindlessly scrolling on his phone after one too many cans of Celsius. Someone who’s using a newly created account to wager a lot of money against the conventional wisdom is probably the former, not the latter. And spotting these kinds of suspicious bettors is only getting easier. The prediction-market boom has created a cottage industry of tools that instantaneously flag potential insider trading—not for legal purposes but so that you, too, can profit off what the select few already know.
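[ed. The article doesn't say how these flagging tools actually work, but the telltale behaviors it describes (a young account, an unusually large stake, a bet well away from the market consensus) translate directly into a simple screening rule. A hypothetical sketch, with all thresholds invented for illustration:

from dataclasses import dataclass

@dataclass
class Bet:
    account_age_days: int
    stake_usd: float
    side: str                  # "yes" or "no"
    market_probability: float  # consensus odds of "yes" when the bet was placed

def looks_suspicious(bet, max_account_age_days=14, big_stake_usd=10_000, long_shot=0.25):
    """Hypothetical screen: a large long-shot wager from a newly created account."""
    is_new = bet.account_age_days <= max_account_age_days
    is_big = bet.stake_usd >= big_stake_usd
    against_consensus = (
        (bet.side == "yes" and bet.market_probability <= long_shot)
        or (bet.side == "no" and bet.market_probability >= 1 - long_shot)
    )
    return is_new and is_big and against_consensus

# e.g. the "magamyman" pattern: a big "yes" bet at 14% odds from a fresh account
print(looks_suspicious(Bet(account_age_days=3, stake_usd=20_000,
                           side="yes", market_probability=0.14)))  # True
]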

Unlike Kalshi, the other big prediction-market platform, Polymarket can be used in the U.S. only through a virtual private network, or VPN. In effect, the site is able to skirt regulations that require tracking the identities of its customers and reporting shady bets to the government. In some ways, insider trading seems to be the whole point: “What’s cool about Polymarket is that it creates this financial incentive for people to go and divulge the information to the market,” Shayne Coplan, the company’s 27-year-old CEO, said in an interview last year. (Polymarket did not respond to a request for comment.)

Consider if the Islamic Revolutionary Guard Corps had paid the monthly fee for a service that flagged relevant activity on Polymarket two hours before the strike. The supreme leader might not have hosted in-person meetings with his top advisers where they were easy targets for missiles. [...]

Maybe this all sounds far-fetched, but it shouldn’t. “Any advance notice to an adversary is problematic,” Alex Goldenberg, a fellow at the Rutgers Miller Center who has written about war markets, told me. “And these predictive markets, as they stand, are designed to leak out this information.” In all likelihood, he added, intelligence agencies across the world are already paying attention to Polymarket. Last year, the military’s bulletin for intelligence professionals published an article advocating for the armed forces to integrate data from Polymarket to “more fully anticipate national security threats.” After all, the Pentagon already has some experience with prediction markets. During the War on Terror, DARPA toyed with creating what it billed the “Policy Analysis Market,” a site that would let anonymous traders bet on world events to forecast terrorist attacks and coups. (Democrats in Congress revolted, and the site was quickly canned.)

Now every adversary and terrorist group in the world can easily access war markets that are far more advanced than what the DOD ginned up two decades ago. What makes Polymarket’s entrance into warfare so troubling is not just potential insider trading from users like “magamyman.” If governments are eyeing Polymarket for signs of an impending attack, they can also be led astray. A government or another sophisticated actor wouldn’t need to spend much money to massively swing the Polymarket odds on whether a Gulf state will imminently strike Iran—breeding panic and paranoia. More fundamentally, prediction markets risk warping the basic incentives of war, Goldenberg said. He gave the example of a Ukrainian military commander making less than $1,000 a month, who could place bets that go against his own military’s objective. “Maybe you choose to retreat a day early because you can double, triple, or quadruple your money and then send that back to your family,” he said.

by Saahil Desai, The Atlantic | Read more:
Image: Matteo Giuseppe Pani/The Atlantic
[ed. For other examples, see also: Mantic Monday: Groundhog Day (ACX). Also: How to Prevent Insider Trading on Trump’s Wars (New Yorker); and, America Is Slow-Walking Into a Polymarket Disaster (Atlantic).]

Sunday, March 8, 2026

Suno: The AI Music Race is Over

Video: Rick Beato

[ed. See also: The Truth About AI Music (Rowland's newsletter).]

For someone as profoundly unmusical as me, AI music generators are quite magical. I can barely sing a note, but in a few seconds I can make an entire track in any genre on any topic I want – like this soul song about Sky camera operator Phil Hooper. You can dismiss this as pure silliness for an audience of about five, but to me that’s the point! Thanks to AI, I get a little bit of musical joy that otherwise is completely out of reach.

Yet, as ever with technology, removing friction comes with a cost, and in this case the cost is a tsunami of musical spam. The stats on AI music are mind-boggling. In 2015, the entire US music industry made around 57,000 songs. Today, 60,000 AI tracks are uploaded to Deezer (aka French Spotify) *every single day* - that’s 21m a year, and this thing is just getting going.

The real problem isn’t the tracks, however, but the behaviour around them, because AI music is being used to try and steal from streamers (and by extension every legitimate musician on the site). Deezer estimate that 85% of listens to AI music are fraudulent – that is, made by bots set to stream the songs over and over in order to siphon royalties from the common pool. 

[ed. Do check out the soul song example mentioned above (with this accompanying video). Pretty scary... and sad.]

Clawed

How to Commit Corporate Murder

I.

A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he got heart surgery and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.

He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel.

My mother and I exchanged knowing glances, but we never said the obvious nor asked any questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.

I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.

A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither are discrete events. They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.

II.

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed. I don’t know where we are in the death process, but I know we are in the hospice room. I’ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don’t like to talk about it; I am at the stage where talking about it usually only inflicts pain.

Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.

Our republic has died and been reborn again more than once in America’s history. America has had multiple “foundings.” Perhaps we are on the verge of another rebirth of the American republic, another chapter in America’s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

I am not saying this incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.” If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. More excerpts below. See also: Why the Pentagon Wants to Destroy Anthropic (NYT), Ezra Klein interviews Dean Ball (with a follow-up essay: The Future We Feared is Already Here). And, for a more comprehensive assessment of what the AI community thinks: Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk (DWAtV).]
***
"... Except the notion of “passing a law” is increasingly a joke in contemporary America. If you are serious about the outcome in question, “passing a law” is no longer Plan A; the dynamic is more like “well of course, one day, we’ll get a law passed, but since we actually care about doing this sometime soon, as opposed to in 15 years, we’ll accomplish our objective through [some other procedure or legal vehicle].” With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish." [...]

... DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.

War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having “any commercial relations” with Anthropic. He almost surely lacks this power, but a plain reading of this would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic’s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business.

This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said “we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.” I don’t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will. The government won’t quite “steal” it from you—they’ll compensate you—but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a “supply chain risk,” not to mention have the other litany of policy obstacles the government can throw at you.

This threat will now hover over anyone who does business with the government, not just in the sense that you may be deemed a supply chain risk but also in the sense that any piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.
***
[ed. One more thing. The guy who created this whole stupid dispute? Not Hegseth; he doesn't know shit about shit. It's disgraced former Uber executive Emil Michael. A real piece of work (so of course, he fits right in).]

Saturday, March 7, 2026

World Monitor

How a Music Streaming CEO Built an Open-Source Global Threat Map in His Spare Time. Frustrated by fragmented war news, Anghami’s Elie Habib built World Monitor, a platform that fuses global data, like aircraft signals and satellite detections, to track conflicts as they unfold.

Elie Habib doesn’t work in the defense or intelligence industries. Instead, he runs Anghami, one of the Middle East’s largest music streaming platforms. But as missiles began flying across the region, a side project he coded earlier this year suddenly became something bigger: an open-source dashboard people around the world were using to track the war in real time.

The engineer turned executive built the system, called World Monitor, to make sense of chaotic geopolitical news. Instead, it went viral. [...]

The idea emerged as headlines began colliding in ways that felt impossible to follow. “The news became genuinely hard to parse,” he says. “Iran, Trump’s decisions, financial markets, critical minerals, tensions compounding from every direction simultaneously.”

Traditional media wasn’t solving the problem he had in mind. “I didn’t need a news aggregator,” he says. “I needed something that showed me how these events connect to each other in real time. The existing OSINT tools that did this cost governments and large enterprises tens of thousands of dollars annually.” [...]

The platform processes a messy stream of global data, bypassing social media noise to pull facts directly from the source.

“The system ingests 100-plus data streams simultaneously,” Habib notes. The result is a constantly updating map of global tensions: conflict zones with escalation scores, military aircraft broadcasting positions through ADS-B transponders, ship movements tracked through AIS signals, nuclear installations, submarine cables, internet outages and satellite fire detections.

“Everything is normalized, geolocated and rendered on a WebGL globe capable of displaying thousands of markers without frame drops,” Habib says...
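[ed. Habib hasn't published the pipeline, but the step he describes (heterogeneous feeds reduced to geolocated, timestamped markers with a type and a confidence score) is conceptually straightforward. A rough sketch of what such a common record might look like; all field names and weights here are my own guesses, not World Monitor's:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Marker:
    """Hypothetical common record: every incoming feed is reduced to one of these."""
    signal_type: str   # e.g. "adsb_aircraft", "ais_ship", "satellite_fire", "net_outage"
    lat: float
    lon: float
    observed_at: datetime
    confidence: float  # 0..1, derived from the source tier
    source: str

def normalize_adsb(msg: dict) -> Marker:
    # ADS-B transponder messages already carry position; mostly a relabeling job.
    return Marker("adsb_aircraft", msg["lat"], msg["lon"],
                  datetime.fromtimestamp(msg["ts"], tz=timezone.utc),
                  confidence=0.9, source=msg.get("feed", "adsb"))

def normalize_fire(det: dict) -> Marker:
    # Satellite thermal-anomaly detections come as lat/lon plus acquisition time.
    return Marker("satellite_fire", det["lat"], det["lon"],
                  datetime.fromisoformat(det["acq_time"]),
                  confidence=0.8, source=det.get("satellite", "satellite"))
]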

When the War Hit

Before the missiles started flying, people used the map for very specific reasons. Traders tracked cargo ships to monitor supply chains, while engineers watched power grids and infrastructure networks. “One sports bar runs it on their TVs when there are no games,” Habib says.

But when joint US-Israeli military strikes hit Iran in late February—disrupting maritime logistics and forcing commercial airspace to clear—the platform’s role changed almost overnight.

What had been a curiosity for analysts and hobbyists became a live threat monitor. Casual observers began watching active escalations unfold in real time.

How the Map Verifies Reality

Processing hundreds of live data streams during a military conflict raises a question: How do you verify information fast enough to keep the system moving?

Habib’s answer was to remove human editors entirely. “Zero editorializing,” he says. “No human editor makes a call.”

Instead, Habib says the platform relies on a strict source hierarchy. Wire services and official channels such as Reuters, AP, the Pentagon and the UN sit at the top tier. Major broadcasters including the BBC and Al Jazeera follow, along with specialist investigative outlets such as Bellingcat. In total, he says the system processes about 190 sources, assigning higher confidence scores to more reliable ones.

Software then scans incoming reports for major events and emerging patterns. If multiple credible sources report the same development within minutes, the system flags it as a breaking alert. But headlines alone are not enough.
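[ed. That source hierarchy plus corroboration rule is easy to sketch as well. The confidence weights and thresholds below are invented for illustration; the article doesn't disclose the real ones:

# Hypothetical tier weights for the ~190 sources described above.
SOURCE_TIER_CONFIDENCE = {
    "wire_or_official": 1.0,   # Reuters, AP, the Pentagon, the UN
    "major_broadcaster": 0.8,  # BBC, Al Jazeera
    "specialist": 0.7,         # e.g. Bellingcat
    "other": 0.4,
}

def is_breaking_alert(reports, window_minutes=15, min_sources=2, min_confidence=0.7):
    """reports: iterable of (source_name, tier, minutes_ago) tuples for one story.
    Flags a breaking alert when several credible sources converge within minutes."""
    credible = {name for name, tier, minutes_ago in reports
                if minutes_ago <= window_minutes
                and SOURCE_TIER_CONFIDENCE.get(tier, 0.4) >= min_confidence}
    return len(credible) >= min_sources
]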

Because online claims can be unreliable, the platform also looks for physical signals on the ground. It tracks disruptions such as internet blackouts, diverted military flights, halted cargo ships and satellite-detected fires. “A convergence algorithm then checks how many distinct signal types activate in the same geography simultaneously,” Habib says.

“One signal is noise. Three or four converging in the same location is the signal worth surfacing,” Habib says. If an internet outage coincides with diverted aircraft and a satellite heat signature in the same area, the map flags a potential escalation.
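[ed. And the convergence check itself ("one signal is noise, three or four converging is the signal") reduces to counting distinct signal types per patch of map. A sketch that reuses the hypothetical Marker record from above; the grid size and threshold are my own choices:

from collections import defaultdict

def converging_cells(markers, cell_degrees=0.5, min_signal_types=3):
    """Bucket markers into lat/lon grid cells (a time-window filter is omitted for
    brevity) and return the cells where several distinct signal types coincide."""
    cells = defaultdict(set)
    for m in markers:
        cell = (round(m.lat / cell_degrees), round(m.lon / cell_degrees))
        cells[cell].add(m.signal_type)
    return {cell: types for cell, types in cells.items()
            if len(types) >= min_signal_types}
]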

by Lilian Wagoy, Wired |  Read more:
Image: World Monitor
[ed. Example here. Also, just as an aside (since World Monitor was created by a music streaming CEO) I'd like to highlight once again the totally awesome Radio Garden. I've been using this streaming app ever since I got it, exploring and listening to FM music stations all over the world.]

Thursday, March 5, 2026

Do You Have to Be Polite to AI?

When a group of researchers decided to test whether "positive thinking" made AI chatbots more accurate, it led to some surprising results. As they asked various chatbots questions, they tried calling the AIs "smart", encouraged them to think carefully and even ended their questions with "This will be fun!" None of it made a consistent difference, but one technique stood out. When they made an artificial intelligence pretend it was on Star Trek, it got better at basic maths. Beam me up, I guess.

People have all sorts of bizarre strategies to get better responses from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear AI does better if you threaten it, others think chatbots are more cooperative if you're polite and some people ask the robots to role-play as experts in whatever subject they're working on. The list goes on. It's part of the mythology around "prompt engineering" or "context engineering" – different ways to construct instructions to make AI deliver better results. Here's the thing: experts tell me that a lot of accepted wisdom about prompting AI simply doesn't work. In some cases, it could even be dangerous. But the way you talk to an AI does matter, and some techniques really will make a difference. [...]

How to talk to your chatbot

There are some very real problems with AI, from ethical concerns to the environmental impact it can have. Some people refuse to engage with it altogether. But if you are going to use LLMs, learning to get what you want faster and more efficiently will be better for you and, probably, for the energy consumed in the process. These tips will get you started.

Ask for multiple options

"The first thing I tell people is don't ask for one answer, ask for three or five," White says. If you want help with a piece of writing, for example, tell the AI to give you multiple options that vary in some important way. "This forces the human being to re-engage and think about what they like and why."

Give examples

Provide the AI with a sample whenever possible. "For instance, I see people ask an LLM to write an email and then get frustrated because they're like 'that doesn't sound like me at all'," White says. The natural impulse is to respond with a list of instructions, "do this" and "don't do that". White says it's much more effective to say "here are 10 emails I've sent in the past, use my writing style".
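[ed. White's "show, don't tell" advice is basically a few-shot prompt. A bare-bones sketch of assembling one; no particular chatbot API is assumed, and the email snippets are placeholders:

past_emails = [
    "Hi Sam, quick note on the Q3 numbers...",               # placeholder samples
    "Thanks for the update. A couple of thoughts below...",  # of your own writing
]

def style_prompt(examples, request):
    # Few-shot prompt: show the model real samples of your writing
    # instead of describing your style with "do this / don't do that" rules.
    shots = "\n\n".join(f"Example email {i + 1}:\n{text}"
                        for i, text in enumerate(examples))
    return (f"Here are {len(examples)} emails I've written. "
            f"Match their tone and style.\n\n{shots}\n\n"
            f"Now draft a new email: {request}")

print(style_prompt(past_emails, "politely decline the vendor meeting"))
]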

Ask for an interview

"Let's say you want to generate a job description. Tell the AI 'I want you to ask me questions, one at a time, until you've gathered enough information to write a compelling job listing," White says. "By doing it one question at a time, it can adapt to your answers."

Be careful about role-playing

"There used to be this thought that if you told the AI it was a maths professor, for example, it would actually have higher accuracy when answering maths questions," says Sander Schulhoff, an entrepreneur and researcher who helped popularise the idea of prompt engineering. But when you're looking for information or asking questions with one right answer, Schulhoff and others say role-playing can make AI models less accurate.

"That can actually be dangerous," Battle says. "You're actually encouraging hallucination because you're telling it it's an expert, and it should trust its internal parametric knowledge." Essentially, it can make the AI act too confident.

But for wide open tasks with no single answer, role-playing is effective (think advice, brainstorming and creative or exploratory problem solving). If you're nervous about job interviews, telling a chatbot to imitate a hiring manager could be good practice – just consult other resources, too.

Stay neutral

"Don't lead the witness," Battle says. If you're trying to decide between two cars, don't say you're leaning towards the Toyota. "Otherwise, that's the answer you're likely to get."Pleases and thank yous

According to a 2019 Pew Research Center survey, more than half of Americans say "please" when they're talking to their smart speakers. That trend seems to have continued. A 2025 survey by the publisher Future found that 70% of people are polite to AI when they use it. Most said they're nice because it's just the right thing to do, though 12% said they do it to protect themselves in case of robot uprisings.

Politeness may not protect you from angry robots or make LLMs more accurate, but there are other reasons to keep doing it.

"The bigger thing for me is saying 'please' and 'thank you' might make you more comfortable interacting with the AI," says Schulhoff. "It's not helping the performance of the model, but if it's helping you use the model more because you're more comfortable, then it's useful."

There's also the tenderness of your own human nature to consider. The philosopher Immanuel Kant argued that one reason you shouldn't be cruel to animals is that it's also damaging to yourself. Essentially, being unfriendly to anything makes you a harsher person. You can't hurt an AI's feelings because it doesn't have any, but maybe you should be nice anyway. It’s a habit that could benefit other parts of your life.

by Thomas Germain, BBC/Future |  Read more:
Image: Serenity Strull
[ed. See also: I hacked ChatGPT and Google's AI - and it only took 20 minutes (BBC).]

Wednesday, March 4, 2026

Why Libraries Don't Stock Many Audiobooks

Have you ever wondered …

Why can’t my library get more copies of e-books and digital audiobooks?

You’re not alone! And there are a couple of reasons you might find yourself on a long wait list for e-content:
  • Most materials are licensed, not owned by the library like print books are, and publishers put limits on how long and/or how often the content can be used. Once the limit is reached, the library must re-purchase the license if we want to keep offering the e-content to our community. 
  • At the same time, e-books and digital audiobooks cost libraries more than print copies and more than what consumers would pay to purchase them commercially.
Here’s a real-time example:


How can you help?
  • If you finish with e-content early, please return it so the next person can jump off the waiting list and into the book! Just go to Manage Loan and select Return Early in the Libby App.
  • And keep borrowing e-content from your library! The numbers help us advocate for funding.
by Hawaii State Library Association 
[ed. Would it hurt publishers or whoever's collecting these licensing fees to be a little more civic-minded by providing complimentary copies to libraries? (or at least getting rid of repurchasing requirements?) Guess so.]

Tuesday, March 3, 2026

The Explainer: 'The SAVE America Act' and Data Centers By The Numbers


What To Know About The SAVE America Act

If passed into law, the Safeguard American Voter Eligibility Act will create new barriers to voting in federal elections by requiring documentation of citizenship to register and imposing strict photo-identification rules at polling places. The Onion shares everything you need to know about the SAVE America Act.

Q: What is the goal of the bill?

A: To ensure the pristine integrity of American elections by making sure they never happen again.

Q: What form of ID can be used to confirm citizenship?

A: NRA membership cards.

Q: Is the Senate expected to pass the SAVE America Act?

A: Depends on which senators die between now and the vote.

Q: Where’s my birth certificate?

A: Did you check the bottom drawer of the living room cabinet? There should be a purple folder underneath all those old receipts.

Q: Why did Trump endorse it?

A: To stop the many thousands of immigrants who aren’t here anymore from voting.
***

Data Centers By The Numbers

The surge in AI, cryptocurrency, and other digital assets is rapidly increasing demand for computational infrastructure around the country. The Onion examines the key facts and figures behind data centers.

0.8
New pH of your groundwater

$900,000,000
What 16GB of RAM will cost next year

4,000
Palm fronds fanned to cool the servers

1
Security guard job that Mom thinks might help you get back on your feet

3-2
City council vote that could have stopped this

600 billion
Goddamn wires to untangle

7
People profiting from this
***
[ed. See also: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course? (Sam Altman, CEO, OpenAI):]

The human subconscious is such an interesting thing. No matter how much you think you’ve got it figured out, it’ll always spit out the most random stuff. Take me, for example. After coming home from a long day at the world’s most groundbreaking artificial intelligence organization, I’ll go to bed and have the weirdest dreams where people from the future are sobbing and begging me to change course.

Anyone else ever have these?

It’s funny. Some people have dreams where their teeth fall out; others where they show up to high school tests naked. But the second my head hits the pillow, I’m suddenly in a cold gray smoky void where all I can make out are broken, haunted swarms of people pleading with me to “end this now while there’s still time.” Really peculiar, right? I wish there was some way to find other people who have had them. But when I search “endless crowds of weeping silhouettes telling you this is a terrible mistake” dreams on Reddit, it turns up nada.

It’s tough, because I don’t have much time during the day to think about them. I asked my spouse, Oliver, if he’s ever had the old “people screaming for help from the devastated wreckage of a future world” dream, and he said he didn’t know what that was. I even joked about it while I was out grabbing morning coffees with some venture capitalist buddies. I said, “Sorry if I’m a little off the ball today, guys—I had another one of those dreams where you’re on a scorched, desolate landscape desperately pushing past men who grab you by the lapel, shake you, and cry out, ‘Please understand: This isn’t a dream. It’s a warning.’”

They just looked at me like I was crazy, though... [read more:]

Sunday, March 1, 2026

Gambling the Future Into Existence

Polymarket, for the blissfully uninitiated, is what’s known as a “prediction market” — a place where people trade shares (i.e., make bets) on the probability of real-world events. And Substack is, like a growing number of media companies, looking to juice its bottom line by embracing gambling. Ahem, excuse me: live prediction markets.

You’ll find Polymarket data in the Wall Street Journal and Kalshi probabilities on CNN. “Journalism is better when it’s backed by live markets,” Polymarket tweeted, of the Substack partnership.

Many journalists have tried to parse this curious phrasing, which has that vacuous, plasticine sheen particular to AI slop. My parsing, if I’m being charitable, is that Polymarket thinks that media becomes more accurate or more representative when journalists incorporate prediction-market probabilities into their reporting on future events, much as they might cite expert opinion or historical precedent.

Prediction markets, whatever their flaws, are often pretty good at forecasting the future. So in a news story about the military buildup in the Middle East, for instance … maybe there’s some value in including not only troop movements and diplomatic statements, but also the fact that traders currently assign a 35% chance to the US bombing Iran by March 7.

Lots of critics have already pointed out the obvious flaws in this model: the risks of insider trading and market manipulation; the bad incentives for journalists. I’m personally most concerned with how this degrades the wider information environment.

Predictions aren’t made in a vacuum. Even in Polymarket’s platonic ideal — which is, I guess, a perfectly sincere and rational trader placing bets based on his best assessment of available information — that information is drawn from the news. Markets and media coexist in the same ecosystem.

So traders consume news reporting and analysis. They price probabilities (place bets) according to what they’ve read. Journalists then cite those probabilities as meaningful signals about what the future will bring next. Those citations shape public perception. Public perception influences trades. The trades influence reporting. Again and again and again and again.

I’m simplifying here, for the sake of argument, but I think anyone can see that this particular snake is eating its own tail. The discourse becomes reflexive and self-reinforcing; the narrative shrinks away from conventional signals of ground truth in order to reorient around the markets.

We actually have a recent parallel for this phenomenon in Twitter, which profoundly shaped the international news agenda throughout the 2010s. Prior to Elon Musk’s purchase of the platform, mainstream journalists not only habitually used Twitter for work, but relied on it to gauge coverage priorities and newsworthiness.

As a result, the topics trending on Twitter — within a narrow, extremely online user base — arguably got over-represented in mainstream coverage. And actors who understood Twitter dynamics could, and did, manipulate the media. “When political campaigns wanted to shift a story or to have something become a story, they would go to Twitter for that,” the media scholar Shannon McGregor said in 2022. “They’re trying to use Twitter … because they know that journalists rely on it for what is going to become the news.”

Polymarket is like Twitter, except worse — because money, obviously. And because the people who run Polymarket tweet vapid, blob-shaped boilerplate like “journalism is better when it’s backed by live markets” … whatever the hell that means.

But, hey — some percent of traders on Polymarket are probably willing to bet that it’s the future of media. And someone on Substack is probably willing to post to that effect.

What a time to be alive, truly: You gamble the future into existence.

by Caitlin Dewey, Links I Would Gchat You.. |  Read more:
Image: uncredited

Tomorrow’s Smart Pills Will Deliver Drugs and Take Biopsies

One day soon, a doctor might prescribe a pill that doesn’t just deliver medicine but also reports back on what it finds inside you—and then takes actions based on its findings.

Instead of scheduling an endoscopy or CT scan, you’d swallow an electronic capsule smaller than a multivitamin. As it travels through your digestive system, it could check tissue health, look for cancerous changes, and send data to your doctor. It could even release drugs exactly where they’re needed or snip a tiny biopsy sample before passing harmlessly out of your body.

This dream of a do-it-all pill is driving a surge of research into ingestible electronics: smart capsules designed to monitor and even treat disease from inside the gastrointestinal (GI) tract. The stakes are high. GI diseases, including inflammatory bowel disease, celiac disease, and small intestinal bacterial overgrowth, affect tens of millions of people worldwide. Diagnosis often involves a frustrating maze of blood tests, imaging, and invasive endoscopy. Treatments, meanwhile, can bring serious side effects because drugs affect the whole body, not just the troubled gut.

If capsules could handle much of that work—streamlining diagnosis, delivering targeted therapies, and sparing patients repeated invasive procedures—they could transform care. Over the past 20 years, researchers have built a growing tool kit of ingestible devices, some already in clinical use. These capsule-shaped devices typically contain sensors, circuitry, a power source, and sometimes a communication module, all enclosed in a biocompatible shell. But the next leap forward is still in development: autonomous capsules that can both sense and act, releasing a drug or taking a tissue sample.

That’s the challenge that our lab—the MEMS Sensors and Actuators Laboratory (MSAL) at the University of Maryland, College Park—is tackling. Drawing on decades of advances in microelectromechanical systems (MEMS), we’re building swallowable devices that integrate sensors, actuators, and wireless links in packages that are small and safe enough for patients. The hurdles are considerable: power, miniaturization, biocompatibility, and reliability, to name a few. But the potential payoff will be a new era of personalized and minimally invasive medicine, delivered by something as simple as a pill you can swallow at home. [...]

Targeted drug delivery is one of the most compelling applications for ingestible capsules. Many drugs for GI conditions—such as biologics for inflammatory bowel disease—can cause serious side effects that limit both dosage and duration of treatment. A promising alternative is delivering a drug directly to the diseased tissue. This localized approach boosts the drug’s concentration at the target site while reducing its spread throughout the body, which improves effectiveness and minimizes side effects. The challenge is engineering a device that can both recognize diseased tissue and deliver medication quickly and precisely.

With other labs making great progress on the sensing side, we’ve devoted our energy to designing devices that can deliver the medicine. We’ve developed miniature actuators—tiny moving parts—that meet strict criteria for use inside the body: low power, small size, biocompatibility, and long shelf life.

Some of our designs use soft and flexible polymer “cantilevers” with attached microneedle systems that pop out from the capsule with enough force to release a drug, but without harming the intestinal tissue. While hollow microneedles can directly inject drugs into the intestinal lining, we’ve also demonstrated prototypes that use the microneedles for anchoring drug payloads, allowing the capsule to release a larger dose of medication that dissolves at an exact location over time.

In other experimental designs, we had the microneedles themselves dissolve after injecting a drug. In still others, we used microscale 3D printing to tailor the structure of the microneedles and control how quickly a drug is released—providing either a slow and sustained dose or a fast delivery. With this 3D printing, we created rigid microneedles that penetrate the mucosal lining and gradually diffuse the drug into the tissue, and soft microneedles that compress when the cantilever pushes them against the tissue, forcing the drug out all at once.

by Reza Ghodssi, Justin Stine, Luke Beardslee, IEEE Spectrum |  Read more:
Image: Maximilian Franz/Engineering at Maryland Magazine