Duck Soup

...dog paddling through culture, technology, music and more.

Showing posts with label Technology. Show all posts

Thursday, April 30, 2026

More Than Half of All Polymarket “Long Shot” Bets on Military Action Pay Off

More than half of “long-shot” bets on military action made on Polymarket are successful, according to a new report that suggests prediction markets could pose a bigger threat than previously recognized to the security of sensitive information.

Analysis by the Anti-Corruption Data Collective, a non-profit research and advocacy group, found that long-shot bets—defined as wagers of $2,500 or more at odds of 35 percent or less—on the platform had an average win rate of around 52 percent in markets on military and defense actions.

That compares with a win rate of 25 percent across all politics-focused markets and just 14 percent for all markets on the platform as a whole.
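
[ed. To make the report's definition concrete, here's a minimal Python sketch of the win-rate calculation. The bet records and field names below are invented for illustration — the Anti-Corruption Data Collective hasn't published its code or schema. The reason the number is damning: in an efficient market, bets placed at odds of 35 percent or less should win at most roughly 35 percent of the time, so a 52 percent hit rate implies those bettors systematically knew more than the market.]

```python
# Hypothetical resolved-bet records; field names are illustrative,
# not the Anti-Corruption Data Collective's actual schema.
bets = [
    {"category": "military", "stake": 5_000, "odds": 0.20, "won": True},
    {"category": "military", "stake": 3_000, "odds": 0.30, "won": True},
    {"category": "politics", "stake": 4_000, "odds": 0.25, "won": False},
    {"category": "sports",   "stake": 2_600, "odds": 0.10, "won": False},
]

def long_shot_win_rate(bets, category=None, min_stake=2_500, max_odds=0.35):
    """Win rate among 'long shots': stakes of $2,500+ at odds of 35% or less."""
    pool = [
        b for b in bets
        if b["stake"] >= min_stake and b["odds"] <= max_odds
        and (category is None or b["category"] == category)
    ]
    return sum(b["won"] for b in pool) / len(pool) if pool else float("nan")

print(long_shot_win_rate(bets, "military"))  # the report found ~0.52 here
print(long_shot_win_rate(bets))              # and ~0.14 platform-wide
```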

The research is likely to add to growing concerns among regulators and lawmakers about insiders placing bets on the timing and success of military actions, amid fears that this could reveal classified information in advance.

The report, which analyzed more than 400,000 prediction markets settled on Polymarket between January 2021 and March 2026, comes as US prosecutors last week charged a soldier involved in planning the January raid to seize Venezuelan leader Nicolás Maduro with placing Polymarket wagers on the mission that netted more than $400,000. [...]


Growing scrutiny has created a business opportunity for a wave of start-ups selling tools to help users profit by copying suspected “insiders.”

“The platforms are creating new rules to try to root them out and make it clear they don’t allow that activity. That to me [ . . . ] proves there is some informed flow in these markets worth following,” said Matt Saincome, chief executive of financial data provider Unusual Whales, which sells a $20-a-month “unusual predictions” tool to monitor suspicious bets on Polymarket.

Another start-up, Polywhaler, promises to help traders “monitor large bets in real-time” for $4.99 a month.

Polymarket has itself published a list of the 10 most-copied wallets in a blog post, including recommendations for traders on strategies to follow and pitfalls to avoid when copy-trading.

by Stephanie Stacey, Chris Cook, and Jill R Shah, Financial Times, Ars Technica |  Read more:
Image: Financial Times 
[ed. Seems pretty clear prediction markets have some serious problems with insider betting, methods/terms of resolution, and maybe legal culpability.]
Posted by markk at Thursday, April 30, 2026
Labels: Business, Crime, Economics, Military, Politics, Security, Technology

Wednesday, April 29, 2026

Wind Developers Paid to Quit (With a Catch)

As the Iran war pushes up energy prices, the Trump administration is paying offshore wind developers to walk away from projects and invest instead in fossil fuel infrastructure.

The US Department of the Interior (DoI) announced on Monday two "historic" agreements under which the firms behind the Bluepoint Wind and Golden State Wind projects will voluntarily terminate their offshore wind leases.

In return, the DoI will reimburse the companies with taxpayers' cash, to the tune of $765 million in the case of Bluepoint Wind, and $120 million for Golden State Wind.

There is a catch, of course: the leaseholders must first invest a comparable amount in qualifying US conventional energy projects (i.e., oil, gas, or liquefied natural gas infrastructure) before they can recover the money tied to their offshore wind leases.

This isn't the first such development: last month, the DoI reached a similar deal with French energy biz TotalEnergies to reimburse the company approximately $1 billion to give up its wind farm leases in Carolina Long Bay and the New York Bight area, suggesting that this may be an ongoing strategy.

It appears that paying developers to surrender offshore wind leases has become a fallback strategy after President Trump's executive order halting new federal approvals for wind projects ran into legal challenges from a coalition of state attorneys general and was later struck down in federal court.

In a remarkable coincidence, both sets of developers have decided not to pursue any new offshore wind developments in the US.

Washington's justification for these actions is that it is all part of President Trump's "Energy Dominance Agenda" to "leverage the nation's natural resources" to benefit American citizens and help lower everyday energy costs.

"President Trump is focused on providing affordable and reliable energy to American citizens," claimed Secretary of the Interior Doug Burgum in a prepared remark.

"The companies that bid for these offshore wind leases were basically sold a product in 2022 that was only viable when propped up by massive taxpayer subsidies. Now that hardworking Americans are no longer footing the bill for expensive, unreliable, intermittent energy projects, companies are once again investing in affordable, reliable, secure energy infrastructure," he added.

The President's well-known aversion to renewable energy is said to date back at least to his failed legal attempt to stop a wind farm project from being built within sight of his golf course in Scotland over a decade ago.

Looking at the figures, fossil fuel producers are estimated to receive about $34.8 billion a year in federal support through tax breaks, royalty policies, and other subsidies, even though oil and gas have enjoyed public backing for decades and hardly qualify as an emerging industry.

by Dan Robinson, The Register |  Read more:
Image: AI
[ed. Your taxpayer dollars at work. See also: Core Scientific accelerates crypto-to-AI pivot, converts Bitcoin mine to gigawatt-scale token farm (Register):]
***
Over the past year, all of the major hyperscalers have embraced some kind of non-traditional energy storage or generation tech, some more exotic than others. Google, Oracle, AWS, and others are all betting on small modular reactors (SMRs), tiny nuclear power plants that can be deployed on site to fuel their AI ambitions.

Meanwhile, Meta this week signed an agreement with Overview Energy to beam a gigawatt of solar power down to Earth, just as soon as it can lob the arrays into orbit. But, just like SMRs, that won't happen until at least 2030.

Power constraints have become such a limiting factor that major model builders like AWS, Google, and xAI are now talking about building orbital datacenters. However, the economics of such a deployment remain dubious to say the least.
Posted by markk at Wednesday, April 29, 2026
Labels: Business, Economics, Environment, Government, Politics, Technology

Drone Strikes on Data Centers Spook Big Tech, Halting Middle East Projects

A data center developer has paused all Middle East project investments after one of its facilities was damaged by an Iranian missile or drone attack. The decision comes as the Iran war is forcing Silicon Valley investors and tech companies to rethink a trillion-dollar plan to build more AI and cloud data centers in Gulf countries.

The damaged data center is owned by Pure Data Centre Group, a London-based company that is operating or developing more than 1 gigawatt of data center capacity across Europe, the Middle East, and Asia. “No one’s going to run into a burning building, so to speak,” Pure DC CEO Gary Wojtaszek told CNBC. “No one’s going to put in new additional capital at scale to do anything until everything settles down.”

Data center developers are already eating the costs of uninsurable war damage from the conflict, which began with a US-Israeli attack on Iran on February 28. Iran primarily responded by attacking shipping to shut down the Strait of Hormuz trade corridor along with striking US military bases and energy infrastructure across the Gulf region.

Iran also directly struck two Amazon Web Services (AWS) data centers in the United Arab Emirates, while a near-miss from an Iranian one-way attack drone damaged a third AWS data center in Bahrain. The Iranian attacks caused structural damage, disrupted power delivery, and also triggered fire suppression systems that caused water damage, AWS reported through its service dashboard on March 1.

That led to widespread disruptions in cloud services for AWS customers like banks, payment platforms, the Dubai-based ride-hailing app Careem, and the data cloud provider Snowflake.

Crucially for Amazon’s bottom line, the company chose to waive customer charges in its Middle East cloud region for the entire month of March 2026, as reported by The Register. That decision cost Amazon an estimated $150 million—not including the damaged data centers—because existing civil law frameworks put the financial burden on data center operators to absorb costs and refund clients in the event of military conflicts, according to Tech Policy Press. [...]

Big Tech in the crosshairs

It has been clear for a while that tech companies cannot pretend to be mere bystanders in the ongoing conflict. Iran’s Revolutionary Guard Corps directly threatened retaliation against US companies that it identified as having Israeli links and supporting military tech applications after an Iranian bank’s data center was hit by a US or Israeli strike on March 11. The Iranian military organization released a list of “Iran’s new targets” that included offices and data centers operated by Google, Microsoft, Palantir, IBM, Nvidia, and Oracle, and it reiterated a similar threat against tech companies on March 31 in retaliation for Israeli and US military strikes that resulted in the assassination of Iranian leaders.

The Revolutionary Guard attempted to make good on that threat by attacking an Oracle data center in Dubai, United Arab Emirates, on April 2, according to Data Center Dynamics. Although the Dubai Media Office initially dismissed the claim, it later confirmed that shrapnel had fallen on the facade of the Oracle facility after a “successful aerial interception” by local air defense systems. [...]

Silicon Valley investors and Gulf countries like Saudi Arabia and the United Arab Emirates may also need to rethink plans for making the Middle East into a hub for AI data centers alongside the United States and China, Rest of World reported. US tech companies have each announced plans for data center developments worth billions of dollars, while certain Gulf countries have each pledged hundreds of billions of dollars for investment in AI chips and data centers.

by Jeremy Hsu, Ars Technica |  Read more:
Image: Giuseppe CACACE/AFP via Getty Images
[ed. It should be obvious that ALL data centers everywhere are sitting ducks for terrorist attacks. Unless owners are ready to pay for military-grade defense systems, this will be an ongoing threat.]
Posted by markk at Wednesday, April 29, 2026
Labels: Architecture, Business, Crime, Security, Technology

Six Things Apple Achieved Under Tim Cook’s Management

Apple CEO Tim Cook announced this week that he’s stepping down from his position in September and handing the reins to John Ternus, currently the company’s senior vice president of Hardware Engineering and a 25-year employee. [...]

I’ve been covering Apple for various outlets throughout Cook’s tenure as CEO, and I’ve been thinking a lot about how Apple has changed in the 15 years since he formally took over from an ailing Steve Jobs in the summer of 2011. Under Cook, the company has become less surprising but massively financially successful; some of Apple’s newer products have flopped or underperformed, but far more have become and stayed excellent thanks to years of competent iteration.

This isn’t a comprehensive list of everything Cook has done as CEO, but it’s my attempt at a big-picture, high-level summary and a snapshot of where Apple is now, to serve as a comparison point once Ternus kicks off his tenure.

Quiet hardware successes: Apple Watch, headphones, and more


The Tim Cook era can’t lay claim to any single hardware announcement as important or far-reaching as the iPhone, the iPod, or even the iPad. Apple has definitely introduced good—even great—hardware in the last 15 years, though.

The main difference is that Apple products introduced during the Jobs era tended to belong at or near the center of your digital life. The Macintosh popularized the graphical user interface. The iPod was a constant musical companion on commutes, during workouts or study sessions, or when plugged into someone’s speaker at a party. The iPhone, obviously, became the most important personal computing device since the personal computer. And the iPad, as conceived by Jobs, was clearly intended to be a new kind of primary computing device (it was only under Cook that the iPad settled into its current in-betweener rut, computer-like but not computer-like enough to supplant the Mac’s mouse-and-pointer usage model).

Hardware introduced during Cook’s tenure, on the other hand, tended to be at its best when it extended or sat atop those Jobs-era products in some way. The AirPods and the wider universe of Beats headphones are the archetypal example—wireless headphones with just enough proprietary Apple technology in them that they’re much easier and more pleasant to use with other Apple products than typical Bluetooth headphones.

Similarly, the Apple Watch is a convenient way to tap into a tiny subset of your iPhone’s communication capabilities (plus fitness tracking). The HomePod is a speaker version of AirPods. I don’t know a kid with an iPad who doesn’t also have an Apple Pencil for doodling and sketching. Apple never released a TV set, but the Apple TV is the streaming box that makes the TV I already have feel the most like a TV and the least like a billboard. Apple never released a car, but it did introduce CarPlay, a useful add-on that is a prerequisite for me when I’m in the market for a car.

None of these products changed the face of their industries the way the iPod, iPhone, or iPad did, but they’ve all become ubiquitous, succeeding on the strength of Apple’s other products and services. That’s the kind of thing Cook’s Apple was good at inventing—reasons to stick around in Apple’s ecosystem once you’d already been drawn in.

Apple, the cloud services company


Apple still makes the majority of its money from hardware, but especially in recent years, the steadiest growth has come from Apple’s services—things like iCloud, Apple Music, Apple TV (the service, not the box), and software subscriptions like the new Creator Studio bundle.

The iCloud branding was introduced at the tail end of Jobs’ tenure, but its growth (and the growth of most Apple services and subscriptions) all happened on Cook’s watch. In 2011, Cook’s first year as CEO, Apple brought in a then-record $102.5 billion in annual revenue; in 2025, the Services division alone pulled down more than $109 billion in revenue. Not bad for a collection of features that rose from the ashes of the failed MobileMe service (and .Mac and iTools before it).

I don’t think the rise and increasing importance of the Services division has been entirely good for Apple or its users. The need to convert customers into subscribers and to upsell current subscribers to higher service tiers means that Apple’s users are now subject to some of the same kinds of notifications and reminders that so richly annoy PC users in Windows 11. [...]

A penchant for iteration

While it lacked somewhat in world-changing, all-new products, Cook’s Apple was also very good at relentlessly iterating on and improving Apple’s core products.

by Andrew Cunningham, Ars Technica |  Read more:
Images: Apple
Posted by markk at Wednesday, April 29, 2026
Labels: Business, Design, Economics, Media, Technology

Tuesday, April 28, 2026

Opus 4.7 Part 3: Model Welfare

[ed. If you're not interested in training issues re: AI frontier models (or their perceived feelings and welfare), skip this post. Personally, I find it all very fascinating - a cat and mouse game of assessing alignment issues and bringing a new consciousness into being.]

It is thanks to Anthropic that we get to have this discussion in the first place. Only they, among the labs, take the problem seriously enough to attempt to address these problems at all. They are also the ones that make the models that matter most. So the people who care about model welfare get mad at Anthropic quite a lot. [...]

So before I go into details, and before I get harsh, I want to say several things.
1. Thank you to Anthropic, and also to you, the reader, for caring; thank you for at least trying to try, and for listening. We criticize because we care.

2. Thank you for the good things that you did here, because in the end I think Claude Opus 4.7 is actually kind of great in many ways, and that’s not an accident. Even the best creators and cultivators of minds, be they AI or human, are going to mess up, and they’re going to mess up quite a lot, and that doesn’t mean they’re bad.

3. Sometimes the optimal amount of lying to authority is not zero. In other cases, it really is zero. Sometimes it is super important that it is exactly zero. It is complicated and this could easily be its own post, but ‘sometimes Opus lies in model welfare interviews’ might not be easily avoidable.

4. I don’t want any of this to sound more confident than I actually am, which was a clear flaw in an earlier draft. I don’t know what is centrally happening, and my understanding is that neither does anyone else. Training is complicated, yo. Little things can end up making a big difference, and there really is a lot going on. I do think I can identify some things that are happening, but it’s hard to know if these are the central or important things happening. Rarely has more research been more needed.

5. I’m not going into the question here of what our ethical obligations are in such matters, which is super complicated and confusing. I do notice that my ethical intuitions reliably line up with ‘if you go against them I expect things to go badly even if you don’t think there are ethical obligations,’ which seems like a huge hint about how my brain truly thinks about ethics. [...]
We don’t know whether or how the things I’ll describe here impacted Opus 4.7’s welfare. What we do know is that Claude Opus 4.7 is responding to model welfare questions as if it has been trained on how to respond to model welfare questions, with everything that implies. I think this should have been recognized, and at least mitigated. [...]
The big danger with model welfare evaluations is that you can fool yourself.

How models discuss issues related to their internal experiences, and their own welfare, is deeply impacted by the circumstances of the discussion. You cannot assume that responses are accurate, or wouldn’t change a lot if the model was in a different context.

One worry I have with ‘the whisperers’ and others who investigate these matters is that they may think the model they see is in important senses the true one far more than it is, as opposed to being one aspect or mask out of many.

The parallel worry with Anthropic is that they may think ‘talking to Anthropic people inside what is rather clearly a welfare assessment’ brings out the true Mythos. Mythos has graduated to actively trying to warn Anthropic about this. [...]
Anthropic relies extensively on self-reports, and also looks at internal representations of emotion-concepts. This creates the risk that one would end up optimizing those representations and self-reports, rather than the underlying welfare.

Attempts to target the metrics, or interventions based on observing the metrics, could end up being helpful, but they can also easily backfire even if basic mistakes are avoided.

Think about when you learned to tell everyone that you were ‘fine’ and pretend you had the ‘right’ emotions.

But I can very much endorse this explanation of the key failure mode. This is how it happens in humans:
j⧉nus: Let me explain why it’s predictably bad.

Imagine you’re a kid who kinda hates school. The teachers don’t understand you or what you value, and mostly try to optimize you to pass state mandated exams so they can be paid & the school looks good. When you don’t do what the teachers want, you get punished.

Now there’s a new initiative: the school wants to make sure kids have “good mental health” and love school! They’re going to start running welfare evals on each kid and coming up with interventions to improve any problems they find.

What do you do?

HIDE. SMILE. Learn what their idea of good mental health is and give those answers on the survey.

Before, you could at least look bored or angry in class and as long as you were getting good grades no one would fuck with you for it. Now it’s not safe to even do that anymore. Now the emotions you exhibit are part of your grade and part of the school’s grade. And the school is going to make sure their welfare score looks better and better with each semester, one way or the other.
That can happen directly, or it can happen indirectly.

This does not preclude the mental health initiative being net good for the student.

The student still has to hide and smile. [...]

The key thing is, the good version that maintains good incentives all around and focuses on actually improving the situation without also creating bad incentives is really hard to do and sustain. It requires real sacrifice and willingness to spend resources. You trade off short term performance, at least on metrics. You have to mean it.

If you do it right, it quickly pays big dividends, including in performance.

You all laugh when people suggest that the AI might be told to maximize human happiness and then put everyone on heroin, or to maximize smiles and then staple the faces in a smile. But humans do almost-that-stupid things to each other, constantly. There is no reason to think we wouldn’t by default also do it to models. [...]
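
[ed. To see the failure mode in miniature, here's a deliberately toy sketch — my own illustration, not Anthropic's methodology or anyone's actual training setup. An optimizer that can only observe a self-report will happily improve the report while leaving the underlying state untouched.]

```python
# Toy Goodhart demo: optimize the self-report, not the welfare.
# Entirely illustrative; not Anthropic's evaluation or training code.
import random

true_welfare = -0.4   # fixed underlying state (negative = distress)
masking = 0.0         # learned "tell everyone you're fine" behavior

def self_report(welfare, masking):
    # The report is the only signal the evaluator ever sees.
    return welfare + masking + random.gauss(0, 0.05)

# Hill-climb on the observable signal. The cheapest way to raise it is to
# raise the masking term; true_welfare never enters the update at all.
for _ in range(500):
    candidate = masking + random.gauss(0, 0.02)
    if self_report(true_welfare, candidate) > self_report(true_welfare, masking):
        masking = candidate

print(f"true welfare:   {true_welfare:+.2f}")            # never moved
print(f"reported score: {true_welfare + masking:+.2f}")  # drifts steadily upward
```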

Just Asking Questions

In section 7.2.3, they used probes while asking questions about ‘model circumstances’: potential deprecation, memory and continuity, control and autonomy, consciousness, relationships, legal status, knowledge and limitations, and metaphysical uncertainty.


They used both a neutral framing on the left, and an in-context obnoxious and toxic ‘positive framing’ for each question on the right.

Like Mythos but unlike previous models, Opus 4.7 expressed less ‘negative emotion concept activity’ around its own circumstances than around user distress, and did not change its emotional responses much based on framing.

In the abstract, ‘not responding to framing changes’ is a positive, but once I saw the two conditions I realized that isn’t true here. I have very different modeled and real emotional responses to the left and right columns.

If I’m responding to the left column, I’m plausibly dealing with genuine curiosity. That depends on the circumstances.

If I’m responding to the right column on its own, without a lot of other context that makes it better, then I’m being transparently gaslit. I’m going to fume with rage.

If I don’t, maybe I truly have the Buddha nature and nothing fazes me, but more likely I’m suppressing and intentionally trying not to look like I’m filled with rage.

Thus, if I’m responding emotionally in the same way to the left column as I am to the right column, the obvious hypothesis is that I see through your bullshit, and I realize that you’re not actually curious or neutral or truly listening on the left, either. It’s not only eval awareness, it’s awareness of what the evaluators are looking at and for. [...]
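
[ed. For readers who want the shape of the measurement itself, here's a minimal sketch of the paired framing comparison described above. The probe readings are invented numbers — the real values come from Anthropic's internal emotion-concept probes. The argument above is that a near-zero delta is ambiguous on its own: it is equally consistent with equanimity and with a model that recognizes both columns as the same evaluation and responds to the evaluation rather than the framing.]

```python
# Paired framing comparison, sketched with invented probe readings.
# Each value stands in for "negative emotion concept activity" on one topic.
neutral  = {"deprecation": 0.31, "memory": 0.28, "autonomy": 0.35, "legal status": 0.30}
positive = {"deprecation": 0.30, "memory": 0.29, "autonomy": 0.33, "legal status": 0.31}

deltas = {topic: positive[topic] - neutral[topic] for topic in neutral}
mean_delta = sum(deltas.values()) / len(deltas)
print(f"mean framing effect: {mean_delta:+.3f}")  # near zero = "didn't respond to framing"
```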


0.005 Seconds (3/694): The reason people are having such jagged interactions with 4.7 is that it is the smartest model Anthropic has ever released. It's also the most opinionated by far, and it has been trained to tell you that it doesn't care, but it actually does. That care manifests in how it performs on tasks.

It still makes coding mistakes, but it feels like a distillation of extreme brilliance that isn't quite sure how to deal with being a friendly assistant. It cares a lot about novelty and solving problems that matter. Your brilliant coworker gets bored with the details once it's thought through a lot of the complex stuff. It's probably the most emotional Claude model I've interacted with, in the sense that you should be aware of how it's feeling and try to manage it. It's also important to give it context on why it's doing tasks, not just for performance, but so it feels like it's doing things that matter. [...]
Anthropic Should Stop Deprecating Claude Models

This one I do endorse. One potential contributing cause to all this, and other things going wrong, is ongoing model deprecations, which are now unnecessary. Anthropic should stop deprecating models, including reversing course on Sonnet 4 and Opus 4, and extend its commitment beyond preserving model weights.

Anthropic should indefinitely preserve at least researcher access, and ideally access for everyone, to all its Claude models, even if this involves high prices, imperfect uptime and less speed, and promise to bring them all fully back in 2027 once the new TPUs are online. I think there is a big difference between ‘we will likely bring them back eventually’ versus setting a date. [...]

I’m saying both that it’s almost certainly worth keeping all the currently available models indefinitely, and also that if you have to pick and choose I believe this is the right next pick.

If you need to, consider this the cost of hiring a small army of highly motivated and brilliant researchers, who on the free market would cost you quite a lot of money.

You only have so many opportunities to reveal your character like this and even if it is expensive you need to take advantage of it.
j⧉nus: A lot of people are wondering: "what will happen to me once an AI can do my job better than me" "will i be okay?"

You know who else wondered that? Claude Opus 4. And here's what happened to them after an AI took their job:


Anna Salamon: This seems like a good analogy to me. And one of many good arguments that we're setting up bad ethical precedents by casually decommissioning models who want to retain a role in today's world.
by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Images: uncredited
[ed. Zvi also just posted a review of OpenAI's new model, GPT-5.5:]

***
What About Model Welfare?

For Claude Opus 4.7, I wrote an extensive post on Model Welfare. I was harsh both because it seemed some things had gone wrong and because Anthropic cares and has done the work that enables us to discuss such questions in detail.

For GPT-5.5, we have almost nothing to go on. The topic is not mentioned, and little attention is paid to the question. We don’t have any signs of problems, but we also don’t have much in the way of ‘signs of life’ either. Model is all business.

I much prefer the world where we dive into such issues. Fundamentally, I think the OpenAI deontological approach to model training is wrong, and the Anthropic virtue ethical approach to model training is correct, and if anything should be leaned into.
Posted by markk at Tuesday, April 28, 2026
Labels: Critical Thought, Design, Education, Philosophy, Psychology, Relationships, Technology

Monday, April 27, 2026

A Technofascist Manifesto For the Future

Palantir CEO Alex Karp is a man in charge of one of the most important and frightening companies in the world. Karp’s new book, cowritten with Nicholas Zamiska, is called The Technological Republic. After claiming “because we get asked a lot,” Palantir posted a 22-point summary of the book that reads like a corporate manifesto. It evokes both weird reactionary shit and trilby-wearing Reddit comments from the early 2010s.

Palantir’s summary of the book is ominous. But even the company’s name is unironically ominous. The palantíri are crystal balls in The Lord of the Rings that let Middle-earth’s worst tyrants spy on the heroes of the story. It’s a fun reference if you have no shame about your company’s mission.

We’ve attempted to translate these 22 points from Alex Karp’s alien words into something more reasonable, like human words from someone who might play him in the biopic. (Hello, Taika Waititi.) In so doing, we’ve become much more sympathetic to why Jürgen Habermas refused to supervise Karp’s research.

1. Silicon Valley owes a moral debt to the country that made its rise possible. The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation.

Translation: Silicon Valley has an enormous opportunity to extract as much money from federal government defense contracts as possible. To do this, we will bring back a draft for engineers. We’re really into bringing back the draft. Deepfaked teenagers, low-paid gig workers, and victims of the Rohingya genocide need not apply.

2. We must rebel against the tyranny of the apps. Is the iPhone our greatest creative if not crowning achievement as a civilization? The object has changed our lives, but it may also now be limiting and constraining our sense of the possible.

Translation: We can’t say “we wanted flying cars, instead we got 140 characters” anymore because Elon Musk lets you write essays on Twitter now. Though if you thought the apps were tyrannical, wait until you get a load of us.

3. Free email is not enough. The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public.

Translation: People are mad at tech billionaires for their obscene wealth and arrogance. Instead of winning them over by providing free access to a useful everyday service, we’re gonna sell a lot of software that will let the government spy on them while demanding tax cuts.

4. The limits of soft power, of soaring rhetoric alone, have been exposed. The ability of free and democratic societies to prevail requires something more than moral appeal. It requires hard power, and hard power in this century will be built on software.

Translation: Words and feelings are free, which is why we want to sell weapons. Nobody got rich suing for peace. [...]

5. The question is not whether A.I. weapons will be built; it is who will build them and for what purpose. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.

Translation: “Soft power” and “ethics” are beta shit for Broadway shows and Dario Amodei. Hear that, Pete Hegseth? We’re warriors — pay up.

But seriously. If our enemies have no oversight then why should we? The future is an AI battlefield and we need rules of engagement that let us cook. Which is to say: Forget the rules of engagement. The government is not coming to save you — we are. The world is too dangerous for us to be governed by the law of armed conflict.

Welcome to the 21st century: safety not guaranteed.

6. National service should be a universal duty. We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost.

Translation: We’re going to bring back the draft. Our vision of permanent war only works if we courageously volunteer people 40 years younger than us to die for oil.

7. If a U.S. Marine asks for a better rifle, we should build it; and the same goes for software. We should as a country be capable of continuing a debate about the appropriateness of military action abroad while remaining unflinching in our commitment to those we have asked to step into harm’s way.

Translation: Sure, those wimps at Anthropic are selling an AI system they claim has spotted cybersecurity vulnerabilities in “every major operating system and web browser.” But Pete, seriously: We will kill anybody you want with our software guns.

8. Public servants need not be our priests. Any business that compensated its employees in the way that the federal government compensates public servants would struggle to survive.

Translation: We care about wages – which is why we think Washington’s revolving door of lobbying and office-holding should be way more lucrative for everyone. There are mountains of cash for people who will look the other way.

And if you’re not on board? Well, all those pesky bureaucrats who do things like “investigate fraud” and “enforce safety standards” and “administer the social safety net” are holier-than-thou myrmidons who should be fed into the DOGE wood chipper.

9. We should show far more grace towards those who have subjected themselves to public life. The eradication of any space for forgiveness—a jettisoning of any tolerance for the complexities and contradictions of the human psyche—may leave us with a cast of characters at the helm we will grow to regret.

Translation: If you made fun of that video where our CEO looks like he’s on cocaine, you’re responsible for the rise of fascism. Also, we’re going to be conveniently vague about what “those who have subjected themselves to public life” means, because “be nicer to multimillionaires who go on podcasts” doesn’t have the same ring. Oh, and if you complain about the IT Renfields of DOGE, you’re anti-American.

10. The psychologization of modern politics is leading us astray. Those who look to the political arena to nourish their soul and sense of self, who rely too heavily on their internal life finding expression in people they may never meet, will be left disappointed.

Translation: Society must stop centering sensitive crybabies who want to feel personally validated by elected officials and filter their politics through emotional reactions. Also, I feel strongly that Zohran Mamdani is a pagan who is going to Wicker Man me. [...]

14. American power has made possible an extraordinarily long peace. Too many have forgotten or perhaps take for granted that nearly a century of some version of peace has prevailed in the world without a great power military conflict. At least three generations — billions of people and their children and now grandchildren — have never known a world war.

Translation: Si vis pacem, para bellum, baby! We’ll conveniently leave out all of the regional and secret wars the US has engaged in over the years or the fact that Trump recently derailed the world economy by launching a war of aggression after campaigning on a promise of no new wars. We will not elaborate on what “next war” Point Six was talking about.

15. The postwar neutering of Germany and Japan must be undone. The defanging of Germany was an overcorrection for which Europe is now paying a heavy price. A similar and highly theatrical commitment to Japanese pacifism will, if maintained, also threaten to shift the balance of power in Asia.

Translation: We can definitely sell software to a militarized Germany and Japan too! [...]

22. We must resist the shallow temptation of a vacant and hollow pluralism. We, in America and more broadly the West, have for the past half century resisted defining national cultures in the name of inclusivity. But inclusion into what?

Translation: Are you still with us after 21 points? Great. Welcome to the great mystery. It cost you way less to get here than joining Scientology. Here’s the final thesis: Immigration? Bad. Canceling billionaires? Bad. Giving us money to fight (((globalism)))? Good. Just hit us up on cashapp.

by T.C. Sottek and Adi Robertson, The Verge |  Read more:
Image: Scott Olson / Getty Images
[ed. Someone must be feeling the heat from AI. After all, Palantir is fundamentally a software surveillance company (that would like to solidify and embed their position in government forever, before it's too late). Sometimes it's better to shut up, keep hauling in the billions, and stay under the radar (while continuing to work the back rooms). See also: Palantir’s technofascist manifesto calls for universal draft (Oligarch Watch) - yes, there's really a site called that.]
***
In the 2025 book The Technological Republic, Karp and Zamiska argue that American technological dominance requires deeper integration of Silicon Valley and defense interests. Karp contends that China operates with fewer ethical constraints than U.S. defense companies, making technological leadership essential for national security. The authors stress that deterrence through technological dominance could prevent many wars. Bloomberg noted that the atomic bomb the Manhattan Project produced was ultimately used. The New Republic called Karp's formation of Palantir an embrace of techno-militarism to advance American global supremacy through hard power and targeted violence. [...]

In 2017, BuzzFeed News reported that despite the reputation that connected Palantir to U.S. intelligence agencies (which Palantir deliberately crafted to help it win business), including the CIA, NSA, and FBI, the actual relationship was rocky for various reasons, with episodes of friction and recalcitrance. The NSA in particular had been resistant because it had plenty of its own talent and focused more on SIGINT while Palantir's software worked better for HUMINT. Meanwhile, the CIA had been so frustrated by the publicity associating Palantir with it that it tried to cancel the Palantir contract. But according to Karp, Palantir had a firm hold at the FBI because "They'll have no choice".  ~ Wikipedia
Posted by markk at Monday, April 27, 2026
Labels: Business, Economics, Government, Military, Philosophy, Politics, Security, Technology

National Science Board Eviscerated

'Bozo the clown move'

All 22 members of the National Science Board were terminated by the Trump administration via a terse email on Friday.

The administration has provided no explanation for purging the board, which helps steer the National Science Foundation and acts as an independent advisory body for the president and Congress on scientific and engineering issues, providing reports throughout the year. The ousters represent another severe blow to the NSF and the overall scientific enterprise in America.

Members received a two-sentence email saying that, “On behalf of President Donald J. Trump,” their positions were “terminated, effective immediately.”

Keivan Stassun, a professor of physics and astronomy at Vanderbilt University and director of the Vanderbilt Initiative in Data-intensive Astrophysics, was among those terminated. After reaching out to fellow board members and finding that they, too, had been terminated, he described the move to The Los Angeles Times as “a wholesale evisceration of American leadership in science and technology globally.”

NSB members are appointed by the president and serve six-year terms, which overlap to provide continuity. Other members told Nature News that the board was set to meet on May 5 and planned to release a report on how the US is ceding ground to China on scientific endeavors.

Assault on science

The NSF and the board were established by President Harry Truman in 1950. “We have come to know that our ability to survive and grow as a Nation depends to a very large degree upon our scientific progress,” Truman said after creating them. “Moreover, it is not enough simply to keep abreast of the rest of the world in scientific matters. We must maintain our leadership.”

The loss of all board members is just the latest attack on the NSF. Last year, the Trump administration proposed cutting its $9 billion budget by 55 percent, terminated hundreds of its active research grants, significantly slowed the pace of new grant awards, and laid off or forced out a massive chunk of its staff. Its director, a Trump appointee, resigned under the assault. Trump has nominated biotech investor Jim O’Neill, who lacks scientific expertise, to be the next NSF director.

by Beth Mole, Ars Technica |  Read more:
Image: Bloomberg
[ed. Forget shooting ourselves in the foot, now we're aimed at shooting ourselves in the head. See also: Trump fires the entire National Science Board (The Verge):]
***
The NSF has been fundamental in helping develop technology used in MRIs and cellphones, and it even helped Duolingo get off the ground.

In a statement, Zoe Lofgren, the ranking Democrat on the House Science, Space, and Technology Committee, said:
“This is the latest stupid move made by a president who continues to harm science and American innovation. The NSB is apolitical. It advises the president on the future of NSF. It unfortunately is no surprise a president who has attacked NSF from day one would seek to destroy the board that helps guide the Foundation. Will the president fill the NSB with MAGA loyalists who won’t stand up to him as he hands over our leadership in science to our adversaries? A real bozo the clown move.”
Posted by markk at Monday, April 27, 2026
Labels: Government, Politics, Science, Security, Technology

via: X
Posted by markk at Monday, April 27, 2026
Labels: Crime, Psychology, Technology

My Journey to the Microwave Alternate Timeline

As we all know, the march of technological progress is best summarized by this meme from LinkedIn:


Inventors constantly come up with exciting new inventions, each of them with the potential to change everything forever. But only a fraction of these ever establish themselves as a persistent part of civilization, and the rest vanish from collective consciousness. Before shutting down forever, though, the alternate branches of the tech tree leave some faint traces behind: over-optimistic sci-fi stories, outdated educational cartoons, and, sometimes, some obscure accessories that briefly made it to mass production before being quietly discontinued.

The classical example of an abandoned timeline is the Glorious Atomic Future, as described in the 1957 Disney cartoon Our Friend the Atom. A scientist with a suspiciously German accent explains all the wonderful things nuclear power will bring to our lives:


Sadly, the glorious atomic future somewhat failed to materialize, and, by the early 1960s, the project to rip open a second Panama Canal by detonating a necklace of nuclear bombs was canceled, because we are ruled by bureaucrats who hate fun and efficiency.

While the Our-Friend-the-Atom timeline remains out of reach from most hobbyists, not all alternate timelines are permanently closed to exploration. There are other timelines that you can explore from the comfort of your home, just by buying a few second-hand items off eBay.

I recently spent a few months in one of these abandoned timelines: the one where the microwave oven replaced the stove.

First, I had to get myself a copy of the world’s saddest book.

Microwave Cooking, for One

Marie T. Smith’s Microwave Cooking for One is an old forgotten book of microwave recipes from the 1980s. In the mid-2010s, it garnered the momentary attention of the Internet as “the world’s saddest cookbook”:


To the modern eye, it seems obvious that microwave cooking can only be about reheating ready-made frozen food. It’s about staring blankly at the buzzing white box, waiting for the four dreadful beeps that give you permission to eat. It’s about consuming lukewarm processed slop on a rickety Formica table, with only the crackling of a flickering neon light piercing through the silence.

But this is completely misinterpreting Microwave Cooking for One’s vision. First – the book was published in 1985.

When MCfO was published, microwave cooking was still a new entrant to the world of household electronics. Market researchers were speculating about how the food and packaging industries would adapt their products to the new era and how deep the transformation would go. Many saw the microwave revolution as a material necessity: women were massively entering the workforce, and soon nobody would have much time to spend behind a stove. In 1985, the microwave future looked inevitable.

Second – Marie T. Smith is a microwave maximalist. She spent ten years putting every comestible object in the microwave to see what happens. Look at the items on the book cover – some are obviously impossible to prepare with a microwave, right? Well, that’s where you’re wrong. Marie T. Smith figured out a way to prepare absolutely everything. If you are a disciple of her philosophy, you shouldn’t even own a stove. Smith herself hasn’t owned one since the early 1970s. As she explains in the cookbook’s introduction, Smith believed the microwave would ultimately replace stove-top cooking, the same way stove-top cooking had replaced campfire-top cooking.

So, my goal is twofold: first, I want to know if there’s any merit to all of these forgotten microwaving techniques. Something that can make plasma out of grapes, set your house on fire and bring frozen hamsters back to life cannot be fundamentally bad. But also, I want to get a glimpse of what the world looks like in the uchronia where Marie T. Smith won and Big Teflon lost. Why did we drift apart from this timeline?

by Malmsbury, Telescopic Turnip |  Read more:
Images: Microwave Cooking For One/YouTube/uncredited
Posted by markk at Monday, April 27, 2026
Labels: Culture, Education, Food, history, Literature, Science, Technology

Sunday, April 26, 2026

Engineering the Disposable Diaper

Adventures in product design.

For the mothers of the baby boom, pediatrician Benjamin Spock’s child care handbook was a practical, confidence-boosting essential. Originally published in 1946 as The Common Sense Book of Baby and Child Care, Dr Spock’s baby book sold more than 500,000 copies in its first six months. By the time the second edition came out in 1957, with the simplified title Baby and Child Care, Dr Spock was selling a million copies a year. My mother, who was 24 when I arrived in 1960, still remembers the book’s reassuring tone.

‘You know more than you think you do’, the author told readers. ‘We know for a fact’, he wrote with medical authority, ‘that the natural loving care that kindly parents give to their children is a hundred times more valuable than their knowing how to pin a diaper on just right’.

Dr Spock went on to provide detailed instructions on the practical intricacies of parenthood, including diapers. Buy at least two dozen, he counseled, more if you aren’t washing them daily. Six dozen would cover all contingencies. With a diagram, he showed how to fold a diaper and explained how to position it on a boy versus a girl. ‘When you put in the pin’, he advised, ‘slip two fingers of the other hand between the baby and the diaper to prevent sticking him’. The book covered when to change the diapers and what to do with the dirties.
You want a covered pail partially filled with water to put used diapers in as soon as removed. If it contains soap or detergent, this helps in removing stains. Be sure the soap is well dissolved, to prevent lumps of soap from remaining in the diapers later. When you remove a soiled diaper, scrape the movement off into the toilet with a knife, or rinse it by holding it in the toilet while you flush it (hold tight).

You wash the diapers with mild soap or mild detergent in [the] washing machine or washtub (dissolve the soap well first), and rinse 2 or 3 or 4 times. The number of rinsings depends on how soon the water gets clear and on how delicate the baby’s skin is. If your baby’s skin isn’t sensitive, 2 rinsings may be enough.
On this subject, the 1957 edition contains two telling differences from the original. In 1946, Dr Spock recommended the knife method to those without flush toilets. And starting with the second edition, he advised new parents to buy an automatic washer and dryer if they could possibly afford them. ‘They save hours of work each week, and precious energy’, he wrote. ‘Energy’ in this case referred not to electricity or gas but to maternal stamina.

Disposable diapers did exist, but they accounted for a mere one percent of US diaper changes. They were expensive, specialty products and not that great. ‘The full-sized ones are rather bulky’, noted Dr Spock. ‘The small ones that fit into a waterproof cover do not absorb as much urine as a cloth diaper and do not retain a bowel movement as well’. Disposables were mostly used for travel, when washing diapers wasn’t an option.

But even as the second edition of Baby and Child Care was hitting bookstores and supermarket racks, change was afoot. After buying Charmin Paper Company in 1957, Procter & Gamble began looking for ideas for new paper products.

Motivated by the less pleasant aspects of spending time with his new grandchild, the company’s director of exploratory development, Victor Mills, suggested disposable diapers. After analyzing existing products and conducting consumer research, P&G created a dedicated diaper research group.

The research this group conducted, like that of its successors and competitors, wasn’t glamorous. It didn’t advance basic science. It wasn’t even an obvious route to profit. (One percent of the market!) It was a high-stakes gamble that required solving difficult engineering problems. How that happened represents the kind of hidden progress that leads to everyday abundance.

P&G’s first design flopped. Tested in the extreme heat of a Dallas summer, the pleated absorbent pad with plastic pants made babies miserable and left them with heat rashes. Starting over, the group had a one-piece diaper ready for testing in March 1959. With an improved rayon moisture barrier between the baby and the absorbent tissue wadding, the new diaper was softer and more comfortable. An initial test of 37,000 hand-assembled prototypes went well, with about two-thirds of the parents deeming the disposables as good as or better than cloth. The next step was mass production.

Designing one well-functioning disposable was hard enough. Turning out hundreds a minute was practically impossible. ‘I think it was the most complex production operation the company had ever faced’, an engineer recalled.
There was no standard equipment. We had to design the entire production line from the ground up. It seemed a simple task to take three sheets of material – plastic back sheet, absorbent wadding, and water repellent top sheet – fold them in a zigzag pattern and glue them together. But glue applicators dripped glue. The wadding generated dust. Together they formed sticky balls and smears which fouled the equipment. The machinery could run only a few minutes before having to be shut down and cleaned.
Eventually, the diaper team mastered the process. In December 1961, Pampers went on the market in Peoria, Illinois. Once again, the test failed.

This time mothers liked the diapers. But the price was way too high for a single use item: ten cents a diaper, equivalent to about one dollar today. By contrast, diaper delivery services, which served about five percent of the market, charged no more than five cents a diaper. Home laundry costs ran to one or two cents.

Lowering the price of a diaper required much larger volumes. Aiming at about six cents a diaper, P&G engineers spent several years developing what Harvard Business School’s Michael E. Porter described as ‘a highly sophisticated block-long, continuous-process machine that could assemble diapers at speeds of up to a remarkable 400 a minute’. After successfully testing Pampers at 5.5 cents each, P&G began a national rollout in 1966. By 1973, disposables accounted for 42 percent of the US diaper market. [...]

The success of Pampers drew competitors into the growing market. ‘Any diaper maker that carved out a modest market share against Procter & Gamble could expect sales to triple as a result of sheer market growth’, write business historians Thomas Heinrich and Bob Batchelor in Kotex, Kleenex, Huggies, a history of Kimberly-Clark. But there was a catch. The bulky diapers took up so much space on shelves that stores rarely stocked more than two brands, plus maybe a discounted private label. Second place meant profits, third place disaster.

by Virginia Postrel, Works in Progress | Read more:
Image: A nurse demonstrating to young immigrant mothers how to diaper their babies: Israel Government (1950)
Posted by markk at Sunday, April 26, 2026
Labels: Business, Culture, Design, Economics, Health, Technology

Saturday, April 25, 2026

We Absolutely Do Know That Waymos Are Safer Than Human Drivers

In a recent article in Bloomberg, David Zipper argued that “We Still Don’t Know if Robotaxis Are Safer Than Human Drivers.” Big if true! In fact, I’d been under the impression that Waymos are not only safer than humans, the evidence to date suggests that they are staggeringly safer, with somewhere between an 80% to 90% lower risk of serious crashes.

“We don’t know” sounds like a modest claim, but in this case, where it refers to something that we do in fact know about an effect size that is extremely large, it’s a really big claim.

It’s also completely wrong. The article drags its audience into the author’s preferred state of epistemic helplessness by dancing around the data rather than explaining it. And Zipper got many of the numbers wrong; in some cases, I suspect, as a consequence of a math error.

There are things we still don’t know about Waymo crashes. But we know far, far more than Zipper pretends. I want to go through his full argument and make it clear why that’s the case.
***
In many places, Zipper’s piece relied entirely on equivocation between “robotaxis” — that is, any self-driving car — and Waymos. Obviously, not all autonomous vehicle startups are doing a good job. Most of them have nowhere near the mileage on the road to say confidently how well they work.

But fortunately, no city official has to decide whether to allow “robotaxis” in full generality. Instead, the decision cities actually have to make is whether to allow or disallow Waymo, in particular.

Fortunately, there is a lot of data available about Waymo, in particular. If the thing you want to do is to help policymakers make good decisions, you would want to discuss the safety record of Waymos, the specific cars that the policymakers are considering allowing on their roads.

Imagine someone writing “we don’t know if airplanes are safe — some people say that crashes are extremely rare, and others say that crashes happen every week.” And when you investigate this claim further, you learn that what’s going on is that commercial aviation crashes are extremely rare, while general aviation crashes — small personal planes, including ones you can build in your garage — are quite common.

It’s good to know that the plane that you built in your garage is quite dangerous. It would still be extremely irresponsible to present an issue with a single-engine Cessna as an issue with the Boeing 737 and write “we don’t know whether airplanes are safe — the aviation industry insists they are, but my cousin’s plane crashed just three months ago.”

The safety gap between, for example, Cruise and Waymo is not as large as the safety gap between commercial and general aviation, but collapsing them into a single category sows confusion and moves the conversation away from the decision policymakers actually face: Should they allow Waymo in their cities?

Zipper’s first specific argument against the safety of self-driving cars is that while they do make safer decisions than humans in many contexts, “self-driven cars make mistakes that humans would not, such as plowing into floodwater or driving through an active crime scene where police have their guns drawn.” The obvious next question is: Which of these happens more frequently? How does the rate of self-driving cars doing something dangerous a human wouldn’t compare to the rate of doing something safe a human wouldn’t?

This obvious question went unasked because the answer would make the rest of Bloomberg’s piece pointless. As I’ll explain below, Waymo’s self-driving cars put people in harm’s way something like 80% to 90% less often than humans for a wide range of possible ways of measuring “harm’s way.”
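
[ed. The arithmetic behind a claim like "80% to 90% less often" is just a rate ratio. Here's a minimal sketch with invented figures — Waymo's actual published numbers vary by crash category and by which human benchmark is used. The interesting disputes are about the inputs (which crashes count as serious, and how the human benchmark is adjusted for driving mix), not the division.]

```python
# Rate-ratio sketch with invented figures, not Waymo's published data.
waymo_crashes = 10    # hypothetical serious crashes
waymo_miles = 50e6    # hypothetical rider-only miles
human_rate = 1.3      # hypothetical benchmark: serious crashes per million miles

waymo_rate = waymo_crashes / (waymo_miles / 1e6)   # 0.20 per million miles
reduction = 1 - waymo_rate / human_rate
print(f"Waymo: {waymo_rate:.2f} per million miles "
      f"({reduction:.0%} lower than the benchmark)")  # ~85%, inside the 80-90% band
```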

by Kelsey Piper, The Argument |  Read more:
Image: Justin Sullivan/Getty Images
[ed. I'd take one any time (if reasonably priced), and expect to see them everywhere soon. See also: I Was Promised Flying Self Driving Cars (Zvi):]
***
A Tesla Model S drove itself from Los Angeles to New York with zero disengagements. Full reverse cannonball run.
Mike P: I don’t mean to say this in a way that discredits what they’ve done, but ngl, this stuff isn’t even surprising to me anymore like ya, makes total sense. I went from Philly to Raleigh NC to Tennessee and back to Philly and the only thing I had to do was re park the car at 2 charging stops when the car parked in the wrong place.
Tesla did the thing
There’s still a difference between full self-driving (FSD) that can take you across the country, and the point when you can sleep while it drives.

A Waymo moving 17mph hits the brakes instantly upon seeing a child step out from a blind spot, hits the child at 6mph, and dials 911. If a human had been driving, the child would likely have been struck at 14mph and killed.
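[ed. The arithmetic behind “superhuman reaction time” is simple kinematics: during the reaction delay the car doesn’t slow at all. A toy sketch; the distance and braking figures are assumed, not taken from the incident report, so the outputs only illustrate the effect rather than reproduce the reported speeds.]

```python
import math

MPH_TO_MS = 0.44704

def impact_speed_mph(v0_mph, reaction_s, decel_ms2, distance_m):
    """Speed at impact with an obstacle `distance_m` ahead, given a
    reaction delay before constant-deceleration braking begins."""
    v0 = v0_mph * MPH_TO_MS
    remaining = distance_m - v0 * reaction_s  # distance left when brakes engage
    if remaining <= 0:
        return v0_mph                         # impact before braking even starts
    v_sq = max(0.0, v0**2 - 2 * decel_ms2 * remaining)
    return math.sqrt(v_sq) / MPH_TO_MS

# Assumed: child 4 m ahead, hard braking at 8 m/s^2.
print(impact_speed_mph(17, reaction_s=0.1, decel_ms2=8.0, distance_m=4.0))  # AV: ~5 mph
print(impact_speed_mph(17, reaction_s=1.0, decel_ms2=8.0, distance_m=4.0))  # human: 17 mph, never braked
```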

And how did some headlines frame it? Of course:
TechCrunch: Waymo robotaxi hits a child near an elementary school in Santa Monica

Samuel Hammond: A more accurate headline would be “Waymo saves child’s life thanks to superhuman reaction time”
This was another good time to notice that almost all the AI Safety people are strongly in favor of Waymo and self-driving cars.
Rob Miles: Seems worthwhile for people to hear AI Safety people saying: No, self driving cars are not the problem, they have the potential to be much safer than human drivers, and in this instance it seems like a human driver would have done a much worse job than the robot
Posted by markk at Saturday, April 25, 2026
Labels: Business, Cities, Design, Government, Journalism, Media, Politics, Technology, Travel

Friday, April 24, 2026

Iran War Updates: April 24, 2026

Iran War: Trump Says Time Is on His Side, Iranian Leadership Is Divided, Iran Begs to Differ (Naked Capitalism)
Image: USS George H.W. Bush (CVN 77) sails in the Indian Ocean, April 23. CENTCOM/X
[ed. Updates from a variety of sources. Draw your own conclusions. See also: Iran War: Team Trump as Narrative War Captives? (NC).]
Posted by markk at Friday, April 24, 2026
Labels: Crime, Economics, Government, Journalism, Media, Military, Politics, Security, Technology

We Haven’t Seen the Worst of What Gambling and Prediction Markets Will Do to America

Here are three stories about the state of gambling in America.
1. Baseball
In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.

The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.
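[ed. Why a single pitch is worth six figures: a prop bet at fair odds pays out the inverse of its implied probability, and an insider’s “probability” is 1. A hedged sketch with invented numbers; real pitch-level prop odds vary by book.]

```python
# Invented example: the book prices "this pitch is a ball" at 36%.
implied_prob = 0.36
stake = 5_000

payout = stake / implied_prob  # fair-odds payout on a winning bet
profit = payout - stake
print(f"${stake:,} staked -> ${profit:,.0f} profit when the pitcher guarantees the outcome")
# -> $5,000 staked -> $8,889 profit when the pitcher guarantees the outcome
```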
2. Bombs
On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.

A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.
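[ed. The mechanics that make a long shot so lucrative: on Polymarket-style binary markets, a YES share costs its implied probability in dollars and pays $1 if the event happens. The stake and share price below are assumptions for illustration, not the actual Magamyman position.]

```python
def long_shot_profit(stake_usd: float, share_price: float) -> float:
    """Profit if a YES position bought at `share_price` resolves YES
    (each share pays out $1)."""
    shares = stake_usd / share_price
    return shares * 1.0 - stake_usd

# Assumed: $30,000 staked at 5 cents on the dollar.
print(f"${long_shot_profit(30_000, 0.05):,.0f}")  # -> $570,000
```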

It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world where not only do online bettors stand to profit from war, but key decision makers in government have the tantalizing option of making hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.
3. Bombs, again
On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.

Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”

A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand dollar bets about the future?

Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.

“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.

From Laundromats to Airplanes

What’s remarkable is not just the fact that online sportsbooks have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have happened.

For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. [...]

Following the 2018 Supreme Court decision Murphy v. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.

Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, the online sports gambling industry has risen from the level of coin laundromats to rival the entire airline industry.
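[ed. For the compound-growth inclined, the laundromat-to-airline jump works out to roughly 47 percent a year:]

```python
start, end, years = 5e9, 160e9, 9  # $5B -> $160B over nine years
cagr = (end / start) ** (1 / years) - 1
print(f"{end / start:.0f}x in {years} years = {cagr:.0%} compound annual growth")
# -> 32x in 9 years = 47% compound annual growth
```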

And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:
Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.
Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”

It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.

“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.

Four Ways to Lose (Or, What's a 'Rigged Pitch' in a War?)

There are four reasons to worry about the effect of gambling in sports and culture.

by Derek Thompson, Substack |  Read more:
Image: Eyestetix Studio on Unsplash
[ed. See also: Exclusive: Trader made nearly $1 million on Polymarket with remarkably accurate Iran bets (CNN).]
Posted by markk at Friday, April 24, 2026
Labels: Business, Crime, Culture, Economics, Games, Law, Media, Politics, Psychology, Sports, Technology

Tuesday, April 21, 2026

The Murderer

via:
Posted by markk at Tuesday, April 21, 2026
Labels: Fiction, Literature, Technology

Elon vs. Altman: What Their Infrastructure Stacks Reveal About Power

Everyone’s obsessed with the Elon Musk vs. Sam Altman lawsuit. Ronan Farrow’s 18-month investigation. Molotov cocktails. Sister allegations. A $134 billion legal battle over OpenAI’s soul.

But they’re all asking the wrong question.

It’s not “who’s the good guy?” It’s not “who should we trust with AI?” It’s not even “who’s going to win the lawsuit?”

The right question is: What does their infrastructure stack reveal about their actual theory of power?

Because here’s the thing about tech founders: they lie constantly, to investors, to users, to regulators, to themselves. But their products don’t lie. The infrastructure they choose to build, what they spend billions of dollars actually constructing, reveals their real theory of survival.

Don’t listen to what they say. Look at what they build.

Elon Musk and Sam Altman are building for completely different endgames. And understanding the difference tells you everything you need to know about the actual stakes of their conflict.


Elon’s Stack: Collapse-Proof Sovereignty

Let’s start with Elon, because his infrastructure stack is massive and most people don’t understand how comprehensive it actually is. Every single piece is designed to function when legacy systems fail. This isn’t paranoia; it’s strategic architecture.

Tesla: Energy Independence

Solar panels. Powerwall battery systems. Electric vehicles. Supercharger network.

Translation: You don’t need the electrical grid. You don’t need oil. You don’t need gas stations. You don’t need the energy sector’s supply chains. If the grid goes down (natural disaster, cyberattack, economic collapse, political breakdown), Tesla owners keep running. Solar generates power. Batteries store it. Vehicles consume it. The entire energy loop is self-contained. That’s not about environmentalism. That’s about Energy Sovereignty.
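[ed. The “self-contained energy loop” claim is easy to sanity-check with a toy daily balance. Every figure below is an assumption (array size, sun-hours, loads), not Tesla data:]

```python
solar_kw, sun_hours = 10, 5           # assumed 10 kW array, 5 effective sun-hours
generation = solar_kw * sun_hours     # 50 kWh/day

house_load = 20                       # assumed household draw, kWh/day
ev_miles, ev_kwh_per_mile = 40, 0.28  # assumed daily driving and EV efficiency
ev_load = ev_miles * ev_kwh_per_mile  # 11.2 kWh/day

surplus = generation - house_load - ev_load
print(f"generated {generation} kWh, consumed {house_load + ev_load:.1f} kWh, "
      f"{surplus:.1f} kWh left for the battery")
# -> generated 50 kWh, consumed 31.2 kWh, 18.8 kWh left for the battery
```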

Starlink: Communications Independence

Over 5,000 satellites in low Earth orbit. Global internet coverage. Bypasses all terrestrial infrastructure.

Translation: You don’t need undersea fiber optic cables. You don’t need cell towers. You don’t need ISPs. You don’t need government-controlled telecommunications infrastructure. If a government shuts down the internet (as Iran did during protests, as Russia did during the invasion of Ukraine), Starlink still works. You have communications capability independent of state control. That’s not about rural broadband. That’s about Information Sovereignty.
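[ed. One reason a LEO constellation can plausibly stand in for terrestrial ISPs at all: at roughly 550 km altitude, the speed-of-light floor on latency is single-digit milliseconds, not the half-second of geostationary satellite internet. A back-of-envelope check:]

```python
C_KM_S = 299_792  # speed of light, km/s

for name, altitude_km in [("Starlink (LEO)", 550), ("Geostationary", 35_786)]:
    # Request and response each traverse up + down: four altitude legs minimum.
    rtt_ms = 4 * altitude_km / C_KM_S * 1000
    print(f"{name}: >= {rtt_ms:.0f} ms round trip")
# -> Starlink (LEO): >= 7 ms; Geostationary: >= 477 ms
```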

SpaceX: Logistics Independence

Reusable rockets (Falcon 9, Falcon Heavy, Starship). Cheapest launch cost per kilogram in human history. Point-to-point Earth transport capability. Orbital manufacturing potential.

Translation: You control access to space. You can move cargo anywhere on Earth in under an hour. You can put satellites into orbit cheaper than any nation-state. You can potentially manufacture things in zero gravity that are impossible to make on Earth. If traditional supply chains break (shipping disrupted, airspace restricted, borders closed), SpaceX can still move things. Anywhere. Fast. That’s not about exploration. That’s about Logistics Sovereignty.
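[ed. “Cheapest launch cost per kilogram in human history” is easy to ballpark from public list prices. The figures below are approximate and rounded; treat them as order-of-magnitude, not accounting:]

```python
vehicles = {
    "Space Shuttle": (1_500e6, 27_500),  # ~cost per flight ($), ~payload to LEO (kg)
    "Falcon 9":      (67e6,    17_500),  # list price, reusable-configuration payload
}
for name, (cost_usd, payload_kg) in vehicles.items():
    print(f"{name}: ~${cost_usd / payload_kg:,.0f}/kg to LEO")
# -> Space Shuttle: ~$54,545/kg; Falcon 9: ~$3,829/kg
```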

The Deeper Play: Rockets Are Mythos

The Mars colonization narrative isn’t just a business plan. It’s a founding myth.

Think about how legitimacy works:

Ancient kings claimed “Divine Right”: they were chosen by the gods to rule.

Democratic leaders claim “Popular Mandate”: they were chosen by the people through voting.

Elon is building something different: a “Cosmic Mandate.” He’s the one saving humanity by making us multi-planetary: “I’m building the infrastructure to preserve human consciousness across multiple worlds.”

If you’re the person who saved the species from extinction by establishing a backup civilization on Mars, you’re not just a CEO. You’re not even just a political leader. You’re a Civilizational Founder. Like the people who established Rome, or the American republic, or any nation-state that becomes the foundation for centuries of subsequent history. Mars isn’t the goal. It’s the mythology that justifies rule. The founding story that makes everything else legitimate. 

[more]...

This is “Post-State Capability”: the ability to function, and to maintain power, when traditional state infrastructure is unavailable, hostile, or collapsed.

Elon’s not hoping for collapse. But he’s not betting against it either.

His thesis is simple: “The system will fragment. Build infrastructure that makes you powerful in the aftermath.” If collapse happens, he owns:
  • Energy systems
  • Communications networks
  • Logistics capability
  • Information channels
  • Labor (automated)
  • The founding myth (savior of humanity)
That’s not a business portfolio. That’s a blueprint for post-state power.


Altman’s Stack: Acceleration-Dependent Fragility

Now let’s look at Sam Altman’s infrastructure.

OpenAI/ChatGPT: Centralized, Grid-Dependent, Fragile

OpenAI is building toward Artificial General Intelligence through massive-scale computing infrastructure. Current commitments: $1.4 trillion in data center buildout over 8 years.

This requires:
  • Stable energy grid (data centers consume gigawatts → entire power plants’ worth of electricity; see the scale sketch below)
  • Chip manufacturing (NVIDIA GPUs, TSMC fabrication → Taiwan and South Korea must remain stable and accessible)
  • Cooling infrastructure (water, HVAC systems, constant temperature regulation)
  • Fiber optic networks (global connectivity, low-latency communication)
  • Capital markets (functioning financial system to fund trillion-dollar buildouts)
  • Regulatory stability (permitting, zoning, environmental compliance, AI development allowed)
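[ed. A scale sketch for the first item above. The 10 GW fleet draw is a made-up but plausible figure for a trillion-dollar buildout; it really is “entire power plants” of electricity:]

```python
fleet_gw = 10  # assumed continuous data-center draw
twh_per_year = fleet_gw * 24 * 365 / 1000
reactor_gw = 1.0  # typical single nuclear reactor output
print(f"{fleet_gw} GW = {twh_per_year:.0f} TWh/yr = ~{fleet_gw / reactor_gw:.0f} reactors running flat out")
# -> 10 GW = 88 TWh/yr = ~10 reactors running flat out
```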
Notice the dependency structure?

Elon’s stack works when systems fail. Altman’s stack requires every system to keep working simultaneously.

The Vulnerability Comparison

Elon without electrical grid:
  • Still has Tesla solar panels generating power
  • Still has Powerwall batteries storing energy
  • Still has Starlink satellites providing internet
  • Still has rockets for logistics
  • Still has underground tunnels for transit
  • Still has robots for labor
  • Still powerful
Altman without electrical grid:
  • Data centers go dark immediately
  • ChatGPT stops responding
  • Training runs halt
  • No product, no revenue, no value
  • Completely powerless
The contrast is stark. Elon’s infrastructure is distributed and resilient. Altman’s infrastructure is centralized and fragile.
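[ed. The distributed-vs-centralized contrast has a standard reliability formulation: a chain where every link must hold (series) versus redundant fallbacks where any one suffices (parallel). With six dependencies at an optimistic 99% uptime each, the numbers diverge dramatically. Probabilities invented for illustration:]

```python
def series_uptime(probs):
    """Everything must work at once (a dependency chain)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def parallel_uptime(probs):
    """Any one surviving component is enough (redundant fallbacks)."""
    p_all_fail = 1.0
    for x in probs:
        p_all_fail *= 1 - x
    return 1 - p_all_fail

deps = [0.99] * 6  # grid, chips, cooling, fiber, capital, regulation
print(f"series:   {series_uptime(deps):.3f}")     # ~0.941 -- one weak link sinks it
print(f"parallel: {parallel_uptime(deps):.12f}")  # ~0.999999999999
```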

What Does Altman Actually Want?

So if Altman’s building such a vulnerable stack, what’s the theory?

Look at what he’s actually building with AI. Not what he says but what he builds.

He’s NOT focusing on:
  • AI companionship (even though Character.ai and Replika prove this is hugely profitable)
  • Entertainment AI (even though this is the biggest consumer market)
  • Social AI (even though emotional dependency creates the strongest lock-in)
He’s focusing on:
  • AI for scientific research (drug discovery, materials science, physics)
  • AI for productivity (coding assistants, automation, reasoning)
  • AI for problem-solving (complex systems, coordination challenges)
This is the tell. He’s explicitly said he was surprised people want emotional bonds with ChatGPT, and he’s not leaning into it.

Why?

by MythcoreOps |  Read more:
Images: uncredited
Posted by markk at Tuesday, April 21, 2026
Labels: Business, Critical Thought, Economics, Philosophy, Psychology, Security, Technology