Duck Soup

...dog paddling through culture, technology, music and more.

Showing posts with label Media. Show all posts

Wednesday, April 29, 2026

Six Things Apple Achieved Under Tim Cook’s Management

Apple CEO Tim Cook announced this week that he’s stepping down from his position in September and handing the reins to John Ternus, currently the company’s senior vice president of Hardware Engineering and a 25-year employee. [...]

I’ve been covering Apple for various outlets throughout Cook’s tenure as CEO, and I’ve been thinking a lot about how Apple has changed in the 15 years since he formally took over from an ailing Steve Jobs in the summer of 2011. Under Cook, the company has become less surprising but massively financially successful; some of Apple’s newer products have flopped or underperformed, but far more have become and stayed excellent thanks to years of competent iteration.

This isn’t a comprehensive list of everything Cook has done as CEO, but it’s my attempt at a big-picture, high-level summary and a snapshot of where Apple is now, to serve as a comparison point once Ternus kicks off his tenure.

Quiet hardware successes: Apple Watch, headphones, and more


The Tim Cook era can’t lay claim to any single hardware announcement as important or far-reaching as the iPhone, the iPod, or even the iPad. Apple has definitely introduced good—even great—hardware in the last 15 years, though.

The main difference is that Apple products introduced during the Jobs era tended to belong at or near the center of your digital life. The Macintosh popularized the graphical user interface. The iPod was a constant musical companion on commutes, during workouts or study sessions, or when plugged into someone’s speaker at a party. The iPhone, obviously, became the most important personal computing device since the personal computer. And the iPad, as conceived by Jobs, was clearly intended to be a new kind of primary computing device (it was only under Cook that the iPad settled into its current in-betweener rut, computer-like but not computer-like enough to supplant the Mac’s mouse-and-pointer usage model).

Hardware introduced during Cook’s tenure, on the other hand, tended to be at its best when it extended or sat atop those Jobs-era products in some way. The AirPods and the wider universe of Beats headphones are the archetypal example—wireless headphones with just enough proprietary Apple technology in them that they’re much easier and more pleasant to use with other Apple products than typical Bluetooth headphones.

Similarly, the Apple Watch is a convenient way to tap into a tiny subset of your iPhone’s communication capabilities (plus fitness tracking). The HomePod is a speaker version of AirPods. I don’t know a kid with an iPad who doesn’t also have an Apple Pencil for doodling and sketching. Apple never released a TV set, but the Apple TV is the streaming box that makes the TV I already have feel the most like a TV and the least like a billboard. Apple never released a car, but it did introduce CarPlay, a useful add-on that is a prerequisite for me when I’m in the market for a car.

None of these products changed the face of their industries the way the iPod, iPhone, or iPad did, but they’ve all become ubiquitous, succeeding on the strength of Apple’s other products and services. That’s the kind of thing Cook’s Apple was good at inventing—reasons to stick around in Apple’s ecosystem once you’d already been drawn in.

Apple, the cloud services company


Apple still makes the majority of its money from hardware, but especially in recent years, the steadiest growth has come from Apple’s services—things like iCloud, Apple Music, Apple TV (the service, not the box), and software subscriptions like the new Creator Studio bundle.

The iCloud branding was introduced at the tail end of Jobs’ tenure, but its growth (and the growth of most Apple services and subscriptions) all happened on Cook’s watch. In 2011, Cook’s first year as CEO, Apple brought in a then-record $102.5 billion in annual revenue; in 2025, the Services division alone pulled down more than $109 billion in revenue. Not bad for a collection of features that rose from the ashes of the failed MobileMe service (and .Mac and iTools before it).

I don’t think the rise and increasing importance of the Services division has been entirely good for Apple or its users. The need to convert customers into subscribers and to upsell current subscribers to higher service tiers means that Apple’s users are now subject to some of the same kinds of notifications and reminders that so richly annoy PC users in Windows 11. [...]

A penchant for iteration

While it lacked for world-changing, all-new products, Cook’s Apple was very good at relentlessly iterating on and improving its core products.
by Andrew Cunningham, Ars Technica |  Read more:
Images: Apple
Posted by markk at Wednesday, April 29, 2026
Labels: Business, Design, Economics, Media, Technology

Tuesday, April 28, 2026

The Algorithm Doesn't Have to Destroy Us

via: The Algorithm Doesn't Have to Destroy Us (Elysian)
Image: uncredited
Posted by markk at Tuesday, April 28, 2026
Labels: Art, Culture, Media

A Humble ‘Jeopardy!’ Champ Ends His Run

For the past month, “Jeopardy!” episodes have followed a pattern.

The theme music plays. The three contestants stand at their lecterns. Then two of them are clobbered by a mild-mannered bureaucrat from New Jersey named Jamie Ding.

But on Monday’s episode, the unthinkable happened: After 31 victories, Ding lost.

His streak is the fifth-longest in “Jeopardy!” history. He fell just one win short of matching James Holzhauer’s 2019 run, and he left the Alex Trebek Stage with more than $880,000 in winnings.

Early in the game broadcast Monday, Ding found himself lagging behind Greg Shahade, an International Master in chess who was lightning-fast on the buzzer. During Final Jeopardy, Ding jotted down the correct response to a clue about South African languages — but it wasn’t enough to make up the deficit.

“It was over, just like that,” Ding, 33, said in an interview.

Contestants who went up against him included a statistician, a librarian and a professor. Ding produced so many correct answers (always in the form of a question) that it seemed he might never run out.

“Who was Trotsky?”

“What are non-Newtonian fluids?”

“What are waffle fries?”

Throughout his reign, he was matter-of-fact as he came up with arcana in a split second (“What is cuneiform?”). He endeared himself to viewers through his comically humdrum banter with the show’s host, Ken Jennings, about such topics as his favorite color (orange), his favorite letter (F) and his favorite number (6).

As the streak continued, the drama-free anecdotes and humble bits of personal information shared by Ding seemed to amuse Jennings, a former “Jeopardy!” champ who holds the record for consecutive wins, with 74.

The depth of Ding’s knowledge went along with a lack of bluster. He proudly identified himself as a “faceless bureaucrat.” When he won a game, he looked pleasantly surprised, as if he had been given an unusually good free sample at Trader Joe’s.

“Put Jamie Ding on the $20 bill,” one fan demanded in a tribute on the newsletter platform Substack.

After his “Jeopardy!” loss had been taped but before it was broadcast, Ding gave a video interview from his two-bedroom apartment in Lawrenceville, New Jersey.

There he was, in front of an orange couch and a stuffed orange clown fish. He said he had remained calm throughout his final game, even as he realized that he was on his way to a loss. He went backstage and stared at the mostly orange clothes he had brought along in the hope that his streak would continue.

“During it, I was trying to stay grounded,” he said. “Planning to win a whole bunch of games of ‘Jeopardy!’ just feels like asking to lose.”

Ding filmed the show in five-episode chunks in Los Angeles during vacation days from his job as a program administrator for the New Jersey Housing and Mortgage Finance Agency. His work involves administering tax credits to build affordable housing in the state.

In an early appearance, he praised New Jersey’s efforts on the issue compared with those of New York, Connecticut and Pennsylvania. “If you’re from one of those states, then shame on you,” he said. “Build more housing.”

He spends his time away from his job studying law at Seton Hall University. He said he did not expect his “Jeopardy!” windfall to change his life all that much. He planned to donate some money and put the rest in a high-yield savings account.

In a way, Ding said, he had been preparing for the show since childhood. The son of a neuroscience professor and a high school math teacher, he grew up in Grosse Pointe Shores, a suburb of Detroit. He competed in geography bees and on his high school quiz bowl team. He recalled losing a sixth-grade spelling bee when he misspelled the word “bolero.”

“B-a-l-l-e-r-o,” he said. “Terrible.” [...]

Ding was a relatively conservative player, avoiding the all-in wagers on Daily Doubles that were a go-to stratagem for Holzhauer. But he was unusually fast on the buzzer and seemed to have few weak categories.

“The key to Jamie’s run really has been his incredibly wide base of knowledge in just about any category you can think of,” Saunders said.

Ding used a tactic he called “knight moves” — traversing the board in an L-shaped pattern, like a knight in chess. Maybe it threw his opponents off-balance, or maybe it was just nice to have a simple rule to follow, he said. “It’s basically a guaranteed way to pick something of a different difficulty, and in a different category,” he added.
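Ding’s tactic lends itself to a toy illustration. The sketch below is my own construction (not anything from the article): it models a standard 6-category, 5-row board and lists the clues reachable by a chess knight’s move. Because a knight always shifts both coordinates, every such pick lands in a different category and a different difficulty row, just as Ding describes.

```python
def knight_moves(category, row, n_categories=6, n_rows=5):
    """Return the board squares reachable by a chess knight's move
    from (category, row) on an n_categories x n_rows Jeopardy board."""
    deltas = [(1, 2), (1, -2), (-1, 2), (-1, -2),
              (2, 1), (2, -1), (-2, 1), (-2, -1)]
    moves = []
    for dc, dr in deltas:
        c, r = category + dc, row + dr
        if 0 <= c < n_categories and 0 <= r < n_rows:
            moves.append((c, r))
    return moves

# From the top-left clue, only two knight moves stay on the board:
print(knight_moves(0, 0))  # [(1, 2), (2, 1)]
```

Note that every move necessarily changes both the category and the row, which is exactly the “guaranteed way to pick something of a different difficulty, and in a different category.”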

He watched his first “Jeopardy!” appearance at Pint, a bar in Jersey City, with friends from so many different groups that it felt like a wedding. He is still getting used to the attention that comes with being a TV star.

“Watching my episodes, I can be pretty self-critical — like, ‘Why did you do that?’ Or, ‘What’s wrong with your face?’” he said. The outpouring of support has been worth the discomfort. “I’m trying to keep a list of people who did nice things for me because it’s so many,” he said.

Now that his streak has ended, he can return to his hobbies, like constructing cryptic crosswords and running an Instagram account rating General Tso’s chicken with his sister. He is also part of a group of intervenors seeking to block the U.S. Department of Justice from obtaining New Jersey’s voter registration records.

It won’t be long, though, before he starts studying for the “Jeopardy!” Tournament of Champions. He might even need some more orange clothes.

“I have a reputation to uphold,” he said.

by Callie Holterman, NY Times/Seattle Times |  Read more:
Image: Katy Kildee/The Detroit News/TNS
[ed. Feels refreshing to read about a normal, well-adjusted person whose main goal in life isn't self-promotion in some way.]
Posted by markk at Tuesday, April 28, 2026
Labels: Celebrities, Culture, Education, Games, Media

Saturday, April 25, 2026

We Absolutely Do Know That Waymos Are Safer Than Human Drivers

In a recent article in Bloomberg, David Zipper argued that “We Still Don’t Know if Robotaxis Are Safer Than Human Drivers.” Big if true! In fact, I’d been under the impression that not only are Waymos safer than humans, but that the evidence to date suggests they are staggeringly safer, with somewhere between an 80% and 90% lower risk of serious crashes.

“We don’t know” sounds like a modest claim, but in this case, where it refers to something that we do in fact know about an effect size that is extremely large, it’s a really big claim.

It’s also completely wrong. The article drags its audience into the author’s preferred state of epistemic helplessness by dancing around the data rather than explaining it. And Zipper got many of the numbers wrong; in some cases, I suspect, as a consequence of a math error.

There are things we still don’t know about Waymo crashes. But we know far, far more than Zipper pretends. I want to go through his full argument and make it clear why that’s the case.
***
In many places, Zipper’s piece relied entirely on equivocation between “robotaxis” — that is, any self-driving car — and Waymos. Obviously, not all autonomous vehicle startups are doing a good job. Most of them have nowhere near the mileage on the road to say confidently how well they work.

But fortunately, no city official has to decide whether to allow “robotaxis” in full generality. Instead, the decision cities actually have to make is whether to allow or disallow Waymo, in particular.

Fortunately, there is a lot of data available about Waymo, in particular. If the thing you want to do is to help policymakers make good decisions, you would want to discuss the safety record of Waymos, the specific cars that the policymakers are considering allowing on their roads.

Imagine someone writing “we don’t know if airplanes are safe — some people say that crashes are extremely rare, and others say that crashes happen every week.” And when you investigate this claim further, you learn that what’s going on is that commercial aviation crashes are extremely rare, while general aviation crashes — small personal planes, including ones you can build in your garage — are quite common.

It’s good to know that the plane that you built in your garage is quite dangerous. It would still be extremely irresponsible to present an issue with a one-engine Cessna as an issue with the Boeing 737 and write “we don’t know whether airplanes are safe — the aviation industry insists they are, but my cousin’s plane crashed just three months ago.”

The safety gap between, for example, Cruise and Waymo is not as large as the safety gap between commercial and general aviation, but collapsing them into a single category sows confusion and moves the conversation away from the decision policymakers actually face: Should they allow Waymo in their cities?

Zipper’s first specific argument against the safety of self-driving cars is that while they do make safer decisions than humans in many contexts, “self-driven cars make mistakes that humans would not, such as plowing into floodwater or driving through an active crime scene where police have their guns drawn.” The obvious next question is: Which of these happens more frequently? How does the rate of self-driving cars doing something dangerous a human wouldn’t compare to the rate of doing something safe a human wouldn’t?

This obvious question went unasked because the answer would make the rest of Bloomberg’s piece pointless. As I’ll explain below, Waymo’s self-driving cars put people in harm’s way something like 80% to 90% less often than humans for a wide range of possible ways of measuring “harm’s way.”

by Kelsey Piper, The Argument |  Read more:
Image: Justin Sullivan/Getty Images
[ed. I'd take one any time (if reasonably priced), and expect to see them everywhere soon. See also: I Was Promised Flying Self Driving Cars (Zvi):]
***
A Tesla Model S drove itself from Los Angeles to New York with zero disengagements. Full reverse cannonball run.
Mike P: I don’t mean to say this in a way that discredits what they’ve done, but ngl, this stuff isn’t even surprising to me anymore like ya, makes total sense. I went from Philly to Raleigh NC to Tennessee and back to Philly and the only thing I had to do was re park the car at 2 charging stops when the car parked in the wrong place.
Tesla did the thing
There’s still a difference between full self-driving (FSD) that can take you across the country, and the point when you can sleep while it drives.

A Waymo moving 17mph hit the brakes instantly upon seeing a child step in front of it from a blind spot, struck the child at 6mph, and dialed 911. If a human had been driving, the child would likely have been struck at 14mph and killed.

What did some headlines call this, of course?
TechCrunch: Waymo robotaxi hits a child near an elementary school in Santa Monica

Samuel Hammond: A more accurate headline would be “Waymo saves child’s life thanks to superhuman reaction time”
This was another good time to notice that almost all the AI Safety people are strongly in favor of Waymo and self-driving cars.
Rob Miles: Seems worthwhile for people to hear AI Safety people saying: No, self driving cars are not the problem, they have the potential to be much safer than human drivers, and in this instance it seems like a human driver would have done a much worse job than the robot
Posted by markk at Saturday, April 25, 2026
Labels: Business, Cities, Design, Government, Journalism, Media, Politics, Technology, Travel

Friday, April 24, 2026

Iran War Updates: April 24, 2026

Iran War: Trump Says Time Is on His Side, Iranian Leadership Is Divided, Iran Begs to Differ (Naked Capitalism)
Image: USS George H.W. Bush (CVN 77) sails in the Indian Ocean, April 23. CENTCOM/X
[ed. Updates from a variety of sources. Draw your own conclusions. See also: Iran War: Team Trump as Narrative War Captives? (NC).]
Posted by markk at Friday, April 24, 2026
Labels: Crime, Economics, Government, Journalism, Media, Military, Politics, Security, Technology

We Haven’t Seen the Worst of What Gambling and Prediction Markets Will Do to America

Here are three stories about the state of gambling in America.
1. Baseball
In November 2025, two pitchers for the Cleveland Guardians, Emmanuel Clase and Luis Ortiz, were charged in a conspiracy for “rigging pitches.” Frankly, I had never heard of rigged pitches before, but the federal indictment describes a scheme so simple that it’s a miracle that this sort of thing doesn’t happen all the time. Three years ago, a few corrupt bettors approached the pitchers with a tantalizing deal: (1) We’ll bet that certain pitches will be balls; (2) you throw those pitches into the dirt; (3) we’ll win the bets and give you some money.

The plan worked. Why wouldn’t it? There are hundreds of pitches thrown in a baseball game, and nobody cares about one bad pitch. The bets were so deviously clever because they offered enormous rewards for bettors and only incidental inconvenience for players and viewers. Before their plan was snuffed out, the fraudsters won $450,000 from pitches that not even the most ardent Cleveland baseball fan would ever remember the next day. Nobody watching America’s pastime could have guessed that they were witnessing a six-figure fraud.
2. Bombs
On the morning of February 28th, someone logged onto the prediction market website Polymarket and made an unusually large bet. This bet wasn’t placed on a baseball game. It wasn’t placed on any sport. This was a bet that the United States would bomb Iran on a specific day, despite extremely low odds of such a thing happening.

A few hours later, bombs landed in Iran. This one bet was part of a $553,000 payday for a user named “Magamyman.” And it was just one of dozens of suspicious, perfectly-timed wagers, totaling millions of dollars, placed in the hours before a war began.

It is almost impossible to believe that, whoever Magamyman is, he didn’t have inside information from members of the administration. The term war profiteering typically refers to arms dealers who get rich from war. But we now live in a world where not only do online bettors stand to profit from war, but key decision makers in government have the tantalizing option of making hundreds of thousands of dollars by synchronizing military engagements with their gambling positions.
3. Bombs, again
On March 10, several days into the Iran War, the journalist Emanuel Fabian reported that a warhead launched from Iran struck a site outside Jerusalem.

Meanwhile on Polymarket, users had placed bets on the precise location of missile strikes on March 10. Fabian’s article was therefore poised to determine payouts of $14 million in betting. As The Atlantic’s Charlie Warzel reported, bettors encouraged him to rewrite his story to produce the outcome that they’d bet on. Others threatened to make his life “miserable.”

A clever dystopian novelist might conceive of a future where poorly paid journalists for news wires are offered six-figure deals to report fictions that cash out bets from online prediction markets. But just how fanciful is that scenario when we have good reason to believe that journalists are already being pressured, bullied, and threatened to publish specific stories that align with multi-thousand dollar bets about the future?

Put it all together: rigged pitches, rigged war bets, and attempts to rig wartime journalism. Without context, each story would sound like a wacky conspiracy theory. But these are not conspiracy theories. These are things that have happened. These are conspiracies—full stop.

“If you’re not paranoid, you’re not paying attention” has historically been one of those bumper stickers you find on the back of a car with so many other bumper stickers that you worry for the sanity of its occupants. But in this weird new reality where every event on the planet has a price, and behind every price is a shadowy counterparty, the jittery gambler’s paranoia—is what I’m watching happening because somebody more powerful than me bet on it?—is starting to seem, eerily, like a kind of perverse common sense.

From Laundromats to Airplanes

What’s remarkable is not just the fact that online sports books have taken over sports, or that betting markets have metastasized in politics and culture, but the speed with which both have taken place.

For most of the last century, the major sports leagues were vehemently against gambling, as the Atlantic staff writer McKay Coppins explained in his recent feature. [...]

Following the 2018 Supreme Court decision Murphy v. NCAA, sports gambling was unleashed into the world, and the leagues haven’t looked back. Last year, the NFL saw $30 billion gambled on football games, and the league itself made half a billion dollars in advertising, licensing, and data deals.

Nine years ago, Americans bet less than $5 billion on sports. Last year, that number rose to at least $160 billion. Big numbers mean nothing to me, so let me put that statistic another way: $5 billion is roughly the amount Americans spend annually at coin-operated laundromats, and $160 billion is nearly what Americans spent last year on domestic airline tickets. So, in a decade, online sports gambling has grown from the scale of coin laundromats to rival the entire airline industry.

And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion. “These predictive markets are the logical endpoint of the online gambling boom,” Coppins told me on my podcast Plain English. “We have taught the entire American population how to gamble with sports. We’ve made it frictionless and easy and put it on everybody’s phone. Why not extend the logic and culture of gambling to other segments of American life?” He continued:
Why not let people gamble on who’s going to win the Oscar, when Taylor Swift’s wedding will be, how many people will be deported from the United States next year, when the Iranian regime will fall, whether a nuclear weapon will be detonated in the year 2026, or whether there will be a famine in Gaza? These are not things that I’m making up. These are all bets that you can make on these predictive markets.
Indeed, why not let people gamble on whether there will be a famine in Gaza? The market logic is cold and simple: More bets means more information, and more informational volume is more efficiency in the marketplace of all future happenings. But from another perspective—let’s call it, baseline morality?—the transformation of a famine into a windfall event for prescient bettors seems so grotesque as to require no elaboration. One imagines a young man sending his 1099 documents to a tax accountant the following spring: “right, so here are my dividends, these are the cap gains, and, oh yeah, here’s my $9,000 payout for totally nailing when all those kids would die.”

It is a comforting myth that dystopias happen when obviously bad ideas go too far. Comforting, because it plays to our naive hope that the world can be divided into static categories of good versus evil and that once we stigmatize all the bad people and ghettoize all the bad ideas, some utopia will spring into view. But I think dystopias more likely happen because seemingly good ideas go too far. “Pleasure is better than pain” is a sensible notion, and a society devoted to its implications created Brave New World. “Order is better than disorder” sounds alright to me, but a society devoted to the most grotesque vision of that principle takes us to 1984. Sports gambling is fun, and prediction markets can forecast future events. But extended without guardrails or limitations, those principles lead to a world where ubiquitous gambling leads to cheating, cheating leads to distrust, and distrust leads ultimately to cynicism or outright disengagement.

“The crisis of authority that has kind of already visited every other American institution in the last couple of decades has arrived at professional sports,” Coppins said. Two-thirds of Americans now believe that professional athletes sometimes change their performance to influence gambling outcomes. “Not to overstate it, but that’s a disaster,” he said. And not just for sports.

Four Ways to Lose (Or, What's a 'Rigged Pitch' in a War?)

There are four reasons to worry about the effect of gambling in sports and culture.

by Derek Thompson, Substack |  Read more:
Image: Eyestetix Studio on Unsplash
[ed. See also: Exclusive: Trader made nearly $1 million on Polymarket with remarkably accurate Iran bets (CNN).]
Posted by markk at Friday, April 24, 2026
Labels: Business, Crime, Culture, Economics, Games, Law, Media, Politics, Psychology, Sports, Technology

Tuesday, April 21, 2026

Did Streaming Subscription Prices Just Hit the Wall?

There are (finally) signs that streaming prices have hit the wall. The public simply can’t afford to pay hundreds of dollars per year for each platform. So I’m not surprised that a new survey shows that 55% of consumers want to cancel subscriptions right now.

This isn’t just an idle threat. According to Deloitte, 40% of consumers have already cut back on subscriptions during the previous three months. Even more revealing: 61% say they would cancel their favorite service if the price went up by just five dollars.

Let me repeat that—they would cancel their favorite service, not just any platform.

People now complain of subscription fatigue—and for good reason. If things don’t change, it will soon reach the point of subscription exhaustion. Tech companies have created this mess, and now must live with the consequences.

They did it with three exploitative strategies.

(1) Everything got turned into a subscription. [...]

(2) You’re now punished with intrusive and endless advertising if you don’t subscribe. [...]

(3) Instead of competing on quality and service, companies focus on “audience capture”—and then exploit the captives.

That’s you, by the way—you are the captive. At least that’s how you’re treated by the big tech platforms.

Years ago, these same companies started by offering stuff for free, or at a small price. They only forced through huge price increases after they had captured an enormous user base. You see that strategy at Netflix, Spotify, Instagram, etc.

These companies make little effort now to improve their offerings or user interface. In many instances, quality has declined, even as they raise prices. But consumers aren’t stupid—they can see that they’re getting a shaft that won’t cop out...

But even I can’t believe how greedily they have now implemented that strategy. Spotify has raised its price three times in less than three years. It’s now asking $12.99 per month. And if you want a family subscription—which is essential in a household like mine—the price jumps to $21.99 per month.

Those are US prices, but Spotify is doing the same thing everywhere. Last summer, the company forced through price increases in 150 countries.

YouTube is even more avaricious. The company is now raising its premium subscription to $15.99 per month. And the family rate is a whopping $26.99—that adds up to nearly $324 per year.

Video streaming companies are playing the same game. Not long ago, Netflix charged me $9.99 per month. I recently got a notice that my new price has been “updated” to $19.99. Yes, that’s more than a doubling over the course of just a few years.

But Netflix may have gone too far. The company’s stock dropped 12% last week after its latest quarterly results. Investors expected the company to raise its guidance for future earnings—because of this subscription price boost. But the company refused to do so, and took a more cautious stance.

According to Morningstar analyst Matt Dolgin:
“The market likely hoped for increased full-year guidance, given that the March price hikes came as a surprise…Growth acceleration in 2027 now seems less likely.”
The more you dig into the latest earnings report, the more ominous things look. Netflix only met expectations because of the breakup fee it collected after walking away from the Warner acquisition. Without that one-time benefit, earnings per share would have dropped year-on-year.

If you try to find some good news here for the company, it comes from Netflix’s shift to advertising. This may be its growth engine in the future—because price increases are now stirring up consumer resistance.

I’d like to be able to provide specific numbers here, but Netflix now refuses to tell us the number of total subscribers. That’s revealing in itself. Not long ago, the company bragged endlessly about subscriber growth. Their silence now tells you everything you really need to know.

Three Ways to Defeat Subscription Fatigue

You aren’t helpless here. You do have options for battling subscription fatigue. Here are three of them.

For a start, customers have learned that canceling a subscription might make sense even if they are just bluffing. It’s amazing how different the rate looks if you’re willing to walk away. I recently canceled a subscription, and was offered an 80% price cut if I would reconsider.

I’m now thinking I should cancel every streaming subscription once per year—just to see what special offer I’m missing. Even if I sign up again at the old rate, I haven’t lost anything by trying this tactic.

Another way of combating costs is a rotation strategy. Under this scenario, consumers only pay for one video streaming subscription at a time. When they want to watch something on another platform, they simply cancel the current subscription and move to the new provider. This lets them watch anything they want for just one monthly payment.

Sure, it’s a hassle. But when annual subscriptions can cost $300 per year or more, consumers are increasingly willing to go to the trouble of ‘rotating’ from service to service.
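The rotation strategy’s arithmetic can be sketched in a few lines. The prices below are hypothetical placeholders (not actual 2026 rates); the point is only that paying for one service at a time caps the monthly bill at a single subscription.

```python
# Hypothetical monthly prices for three streaming services.
PRICES = {"ServiceA": 19.99, "ServiceB": 15.99, "ServiceC": 12.99}

def annual_cost_all(prices):
    """Subscribe to every service, every month of the year."""
    return 12 * sum(prices.values())

def annual_cost_rotating(prices):
    """Subscribe to exactly one service each month, cycling through the list."""
    names = list(prices)
    return sum(prices[names[month % len(names)]] for month in range(12))

print(f"All at once: ${annual_cost_all(PRICES):.2f}")   # $587.64
print(f"Rotating:    ${annual_cost_rotating(PRICES):.2f}")  # $195.88
```

With these sample prices, rotating cuts the annual outlay by roughly two-thirds, at the cost of only ever having one catalog available at a time.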

Of course, you always have the final option of just walking away. Judging by the mood of the consumer, that will start happening more and more.

by Ted Gioia, Honest Broker |  Read more:
Image: uncredited/Netflix
[ed. One more option: Earlier this year I got tired of Amazon Prime's video service - more ads, and almost every movie I wanted to see was either a rental or a purchase. So I quit Prime altogether (or suspended my account, as Amazon put it). Then one day I saw they had exclusive rights to some movie or other that I wanted to see; they'd increased their speed of delivery in my zip code; and I really did miss free shipping and returns. So I unsuspended my account and started paying a monthly membership fee again. But just by turning the service off for a few months, my average monthly cost over the year fell back to roughly my initial lower rate (albeit with a few fewer months of service). So you don't have to quit completely, just for a few months. (Might also note this blog has always been ad- and subscription-free!)]
Posted by markk at Tuesday, April 21, 2026
Labels: Business, Economics, Media, Movies, Music, Technology

Thursday, April 16, 2026

We May Be Living Through the Most Consequential Hundred Days in Cyber History, and Almost Nobody Has Noticed

[ed. Well, good luck with this one.]

The first four months of 2026 have produced a sequence of cyber incidents that, if any one of them had landed in 2014 or 2017, would have dominated a news cycle for a week.

A Chinese state supercomputer reportedly bled ten petabytes. Stryker was wiped across 79 countries. Lockheed Martin was hit for a reported 375 terabytes. The FBI Director’s personal inbox was dumped on the open web. The FBI’s wiretap management network was breached in a separate “major incident.” Rockstar Games was breached through a SaaS analytics vendor most people have never heard of. Cisco’s private GitHub was cloned. Oracle’s legacy cloud cracked open. The axios npm package, downloaded a hundred million times a week, was hijacked by North Korea. Mercor, the $10 billion AI training-data vendor that sits inside the data pipelines of OpenAI, Anthropic, and Meta simultaneously, was breached through the LiteLLM open source library and had 4 terabytes extracted by Lapsus$. Honda was hit twice. The new ShinyHunters/Scattered Spider/LAPSUS$ alliance breached approximately 400 organizations and exfiltrated roughly 1.5 billion Salesforce records.

Stacked on top of each other across roughly a hundred days, these events are something a historian of computing security writing in 2050 will probably file as a turning point, regardless of what else happens between now and then.

And yet, the public conversation around them has been quiet to the point of being strange. This is a curious observation more than a complaint. And the goal of what follows is to gather the events into one place, cite the publications that reported each one, and then ask, gently, why the period feels so undocumented in real time.

Every named incident below is followed by inline parenthetical citations to the publications that broke or covered it, in the same way an academic paper would.

I am not arguing that the cybersecurity community is failing. I am noting that something unusual is happening.

by Patrick Quirk, Substack |  Read more:
Image: uncredited
[ed. Hmm... sounds suspicious.]
Posted by markk at Thursday, April 16, 2026
Labels: Business, Crime, Economics, Government, Media, Security, Technology

A Monkey Goes to Court

What happens when something that isn't human makes art? A series of bizarre court battles trying to answer that question centred on this image. Ultimately, it will influence what ends up on your screens and in your headphones forever.

It was a humid day in the Indonesian jungle, and photographer David Slater was following a group of crested black macaques, a critically endangered and particularly photogenic species of monkey.

He wanted pictures, but the macaques were nervous. So, Slater put his camera on a tripod with autofocus on and a flashbulb, allowing the monkeys to inspect it. Just as he hoped, they started playing with his gear. Then one of them reached up and hit the shutter button while staring directly into the lens. The result was a selfie, taken by a monkey. And its toothy grin inadvertently raised a basic question that sits at the heart of technology.

What came next was nearly a decade of legal battles around an unusual dispute: when something that isn't human makes a work of art, who owns the copyright? Thanks to AI, that's become an issue with some deep implications for modern life – and what it means to be human.

One of the most alarming predictions about AI is that corporations will replace the human-created music, movies and books you love with an endless stream of AI slop. But the US Supreme Court just upheld a decision about AI and copyright which suggests that future may be harder to pull off than the tech industry hoped. The path is still uncertain, and right now, the legal system is the site of a battle that will shape what you read, watch and listen to for the rest of your life. It all traces back to that one little monkey.

Monkey business

The monkey took that selfie in 2011. For a brief, blissful period, Slater enjoyed global attention from the picture, but the troubles began when someone uploaded the photo to Wikipedia, from where it could be downloaded and used free of charge. He asked the Wikimedia Foundation to take it down, arguing it cost him £10,000 (worth about $13,400 today) in lost sales. In 2014, the organisation refused, arguing the photo was in the public domain because it wasn't taken by a person.

The row prompted the US Copyright Office to issue a statement that it would not register work created by a non-human author, putting "a photograph taken by a monkey" first in a list of examples. (Slater didn't respond to interview requests, but his representation arranged for the BBC to use the photo in this article.)

The story gets weirder. Soon after, the advocacy group People for the Ethical Treatment of Animals (Peta) sued Slater on behalf of the monkey. The case argued all proceeds from the photo belonged to the macaque that took the picture, but it was really seen as a test case, an attempt to establish legal rights for animals. After four years and multiple court battles, a San Francisco judge dismissed the case. The judge's reasoning was simple: monkeys can't file lawsuits.

"It was kind of the biggest public conversation piece on this topic," says intellectual property lawyer Ryan Abbott, a partner at Brown, Neri, Smith and Khan in the US. "At the time it was very much about animal rights. But it could have been a conversation about AI." [...]

The missing author

When the US passed the Copyright Act of 1790, copyright only had to deal with things like writing and drawing. But the invention of photography decades later raised troubling questions: you could argue the camera does the real work, and a person just hits a button.

"The Supreme Court looked at this and said, you know, we're going to interpret this purposively," says Abbott, who represented Thaler in a case against the Copyright Office. "Copyright was designed to protect the expression of tangible ideas. And that's broad enough to cover something like photography."

The same logic could apply to AI. "What you really have in photography is exactly the same thing you have here. You have a person issuing instructions to a machine to generate a work," he says. "What's the difference between that and me asking ChatGPT to make an image?"

by Thomas Germain, BBC | Read more:
Image: David Slater/ Caters New/BBC
[ed. More issues than you might imagine.]
Posted by markk at Thursday, April 16, 2026
Labels: Art, Business, Copyright, Critical Thought, Government, Law, Media, Music, Photos, Technology

Tuesday, April 14, 2026

Actors and Scribes; Words and Deeds

[ed. With all the propaganda, misdirection, and outright lies we've heard lately about our war with Iran (or non-war, per Congressional Republicans); the upcoming mid-term elections; progress and effects of AI; immigration and deportation policies; the economy; future job security, etc. etc. it seems useful to consider on a basic level how all this information is being transmitted and received. After all, there's a gigantic media apparatus designed specifically for this purpose - to optimize engagement in one form or another. So, while some people might do their best to tune it all out (which would be a mistake, and probably hopeless), others sift through the noise for some semblance of truth, or to hear what they want to hear. This essay helps define some cognitive ground rules.]
***

Among the kinds of people, there are the Actors, and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings. [...]

There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts...

Summary

Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?

I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.

Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.

Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.

Enactive language

Some uses of words are enactive: ways to build or reveal momentum. Others denote the position of things on your world-map.

In the denotative framing, words largely denote concepts that refer to specific classes of objects, events, or attributes in the world, and should be parsed as such. The meaning of a sentence is mainly decomposable into the meanings of its parts and their relations to each other. Words have distinct meanings that can be composed together in structures to communicate complex and nonobvious messages, not just uses and connotations. When you speak in this mode, it’s to describe models - relationships between concepts, which refer to classes of objects in the world.

In the enactive mode, the function of speech is to produce some action or disposition in your listener, who may be yourself. Ideas are primarily associative, reminding you of the perceptions with which the speech-act is associated. When I wrote about admonitions as performance-enhancing speech, I gave the example of someone being encouraged by their workout buddies:
Recently, at the gym, I overheard some group of exercise buddies admonishing their buddy on some machine to keep going with each rep. My first thought was, “why are they tormenting their friend? Why can’t they just leave him alone? Exercise is hard enough without trying to parse social interactions at the same time.”

And then I realized - they’re doing it because, for them, it works. It's easier for them to do the workout if someone is telling them, “Keep going! Push it! One more!”
In the same post, I quoted Wittgenstein’s thought experiment of a language where words are only ever used as commands, with a corresponding action, never to refer to an object. Wittgenstein gives the example of a language used for nothing but military orders, and then elaborates on a hypothetical language used strictly for work orders. For instance, a foreman might use the utterance “Slab!” to direct a worker to fetch a slab of rock. I summarized the situation thus:
When I hear “slab”, my mind interprets this by imagining the object. A native speaker of Wittgenstein’s command language, when hearing the utterance “Slab!”, might - merely as the act of interpreting the word - feel a sense of readiness to go fetch a stone slab.
Wittgenstein’s listener might think of the slab itself, but only as a secondary operation in the process of executing the command. Likewise, I might, after thinking of the object, then infer that someone wants me to do something with the slab. But that requires an additional operation: modeling the speaker as an agent and using Gricean implicature to infer their intentions. The word has different cognitive content or implications for me, than for the speaker of Wittgenstein’s command language.

Military drills are also often about disintermediating between a command and action. Soldiers learn that when you receive an order, you just do the thing. This can lead to much more decisive and coordinated action in otherwise confusing situations – a familiar stimulus can lead to a regular response.

When someone gives you driving directions by telling you what you'll observe, and what to do once you make that observation, they're trying to encode a series of observation-action linkages in you.

This sort of linkage can happen to nonverbal animals too. Operant conditioning of animals gets around most animals' difficulty understanding spoken instructions, by associating a standardized reward indicator with the desired action. Often, if you want to train a comparatively complex action like pigeons playing pong, you'll need to train them one step at a time, gradually chaining the steps together, initially rewarding much simpler behaviors that will eventually compose into the desired complex behavior.

Crucially, the communication is never about the composition itself, just the components to be composed. Indeed, it’s not about anything, from the perspective of the animal being trained. This is similar to an old-fashioned army reliant on drill, in which, during battle, soldiers are told the next action they are to take, not told about the overall structure of their strategy. They are told to, not told about.

Indeterminacy of translation

It’s conceivable that having what appears to be a language in common does not protect against such differences in interpretation. Quine also points to indeterminacy of translation and thus of explicable meaning with his "gavagai" example. As Wikipedia summarizes it:
Indeterminacy of reference refers to the interpretation of words or phrases in isolation, and Quine's thesis is that no unique interpretation is possible, because a 'radical interpreter' has no way of telling which of many possible meanings the speaker has in mind. Quine uses the example of the word "gavagai" uttered by a native speaker of the unknown language Arunta upon seeing a rabbit. A speaker of English could do what seems natural and translate this as "Lo, a rabbit." But other translations would be compatible with all the evidence he has: "Lo, food"; "Let's go hunting"; "There will be a storm tonight" (these natives may be superstitious); "Lo, a momentary rabbit-stage"; "Lo, an undetached rabbit-part." Some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation. Other translations can be ruled out only by querying the natives: An affirmative answer to "Is this the same gavagai as that earlier one?" rules out some possible translations. But these questions can only be asked once the linguist has mastered much of the natives' grammar and abstract vocabulary; that in turn can only be done on the basis of hypotheses derived from simpler, observation-connected bits of language; and those sentences, on their own, admit of multiple interpretations.
Everyone begins life as a tiny immigrant who does not know the local language, and has to make such inferences, or something like them. Thus, many of the difficulties in nailing down exactly what a word is doing in a foreign language have analogues in nailing down exactly what a word is doing for another speaker of one’s own language.

Mimesis, association, and structure

Not only do we all begin life as immigrants, but as immigrants with no native language to which we can analogize our adopted tongue. We learn language through mimesis. For small children, language is perhaps more like Wittgenstein's command language than my reference-language. It's a commonplace observation that children learn the utterance "No!" as an expression of will. In The Ways of Naysaying: No, Not, Nothing, and Nonbeing, Eva Brann provides a charming example:
Children acquire some words, some two-word phrases, and then no. […] They say excited no to everything and guilelessly contradict their naysaying in the action: "Do you want some of my jelly sandwich?" "No." Gets on my lap and takes it away from me. […] It is a documented observation that the particle no occurs very early in children's speech, sometimes in the second year, quite a while before sentences are negated by not.
First we learn language as an assertion of will, a way to command. Then, later, we learn how to use it to describe structural features of world-models. I strongly suspect that this involves some new, not entirely mimetic cognitive machinery kicking in, something qualitatively different: we start to think in terms of pointer-referent and concept-referent relations. In terms of logical structures, where "no" is not simply an assertion of negative affect, but inverts the meaning of whatever follows. Only after this do recursive clauses, conditionals, and negation of negation make any sense at all.

As long as we agree on something like rules of assembly for sentences, mimesis might mask a huge difference in how we think about things. It's instructive to look at how the current President of the United States uses language. He's talking to people who aren't bothering to track the structure of sentences. This makes him sound more "conversational" and, crucially, allows him to emphasize whichever words or phrases he wants, without burying them in a potentially hard-to-parse structure. As Katy Waldman of Slate says:
For some of us, Trump’s language is incendiary garbage. It’s not just that the ideas he wants to communicate are awful but that they come out as Saturnine gibberish or lewd smearing or racist gobbledygook. The man has never met a clause he couldn’t embellish forever and then promptly forget about. He uses adjectives as cudgels. You and I view his word casserole as not just incoherent but representative of the evil at his heart.

But it works. […]

Why? What’s the secret to Trump’s accidental brilliance? A few theories: simple component parts, weaponized unintelligibility, dark innuendo, and power signifiers.

[…] Trump tends to place the most viscerally resonant words at the end of his statements, allowing them to vibrate in our ears. For instance, unfurling his national security vision like a nativist pennant, Trump said: “But, Jimmy, the problem – I mean, look, I’m for it. But look, we have people coming into the country that are looking to do tremendous harm…. Look what happened in Paris. Look what happened in California, with, you know, 14 people dead. Other people are going to die, they’re badly injured, we have a real problem.”

Ironically, because Trump relies so heavily on footnotes, false starts, and flights of association, and because his digressions rarely hook back up with the main thought, the emotional terms take on added power. They become rays of clarity in an incoherent verbal miasma. Think about that: If Trump were a more traditionally talented orator, if he just made more sense, the surface meaning of his phrases would likely overshadow the buried connotations of each individual word. As is, to listen to Trump fit language together is to swim in an eddy of confusion punctuated by sharp stabs of dread. Which happens to be exactly the sensation he wants to evoke in order to make us nervous enough to vote for him.
Of course, Waldman is being condescending and wrong here. This is not word salad; it's high-context communication. But high-context communication isn't what you use when you think you might persuade someone who doesn't already agree with you; it's just a more efficient exercise in flag-waving. The reason we don't see complex structure here is that Trump is not trying to communicate the sort of novel content that structural language is required for. He's just saying "what everyone was already thinking."

But while Waldman picked a poor example, she's not wholly wrong. In some cases, the President of the United States seems to be impressionistically alluding to arguments or events his audience has already heard of. But his effective rhetorical use of insulting epithets like “Little Marco,” “Lying Ted Cruz,” and “Crooked Hillary” fits very clearly into this schema. Instead of asking us to absorb facts about his opponents, incorporate them into coherent world-models, and then follow his argument for how we should judge them for their conduct, he used the simple expedient of putting a name next to a descriptor, repeatedly, to cause us to associate the connotations of those words. We weren't asked to think about anything. These were simply command words, designed to act directly on our feelings about the people he insulted.

We weren't asked to take his statements as factually accurate. It's enough that they're authentic.

This was persuasive to enough voters to make him President of the United States. This is not a straw man. This is real life. This is the world we live in.

You might object that the President of the United States is an unfair example, and that most people of any importance should be expected to be better and clearer thinkers than the leader of the free world. So, let's consider the case of some middling undergraduates taking an economics course.

by Ben Hoffman, Compass Rose |  Read more:
Posted by markk at Tuesday, April 14, 2026
Labels: Critical Thought, Government, Journalism, Media, Philosophy, Politics, Psychology

I'm Super, Thanks for Asking

Posted by markk at Tuesday, April 14, 2026
Labels: Cartoons, Humor, Media, Politics

Monday, April 13, 2026

Winston Tseng, Epstein Files Transparency Act
via:
Posted by markk at Monday, April 13, 2026
Labels: Illustration, Media, Photos, Politics

Friday, April 10, 2026

Fed Up. Finally

"I'm SICK of this shit... can't he just behave like a normal human?"

Megyn Kelly, Former Fox News host (and Trump whisperer) losing it.
via: SiriusXM/Instagram
[ed. See also: Trump Lashes Out at Prominent Conservatives Over Iran War Criticism (NYT). It's starting to look like Hitler in bunker time. The only question is why people thought he was somebody different.]
Posted by markk at Friday, April 10, 2026
Labels: Crime, Government, Media, Military, Politics

Monday, April 6, 2026

Dating Apps: Giving Men What They Want But Not What They Need

Dating apps were built on the bones of Grindr. I have been known to joke that everything wrong with dating apps is divine retribution for culturally appropriating them from the gays.

Gay men, specifically, that’s important - the overwhelming majority of people making apps are still men, and most of those are still straight men, and while I don’t exactly have insider knowledge on this, it couldn’t be clearer to me that some open-ish minded straight tech boy heard from one of his gay male friends about being able to summon sex partners to his bed from the immediate vicinity after filtering on a bunch of lewd photos and thought: “There isn’t a straight man alive who wouldn’t consider giving up his left hand to have this experience with women. I could make a billion dollars making straight Grindr.”

And thus Tinder was born. Blah blah blah lust and greed sullying the purity of romantic and sexual love; a direction I could go, but instead we’re going to talk about the ways that playing to male preferences in the short term can easily ruin their entire lives, even when it was men’s idea.

Dating apps aggressively reflect male preferences, regardless of sexuality. They’re long on photos, short on text. They filter primarily on location, which has some usefulness, but is most useful if the question is “who’s geographically close enough to me that walking to my place for sex is a realistic option?”

Men love flipping through photos of people they’re attracted to - that alone drove much of the traffic to Facebook’s precursor, Hot or Not. This app is built to give men a sexual scrolling experience as soothingly magnetic as any social media site while providing enough mystery to feel less degenerate than porn (the better for large doses and intermittent rewards).

For women, it’s grim. Yes, they get matches much more often than men do (largely because these extremely male-centric UI decisions lure vastly more male users than women; what economist could have predicted this problem with a heterosexual dating app?). But they don’t enjoy using these apps, not nearly to the degree or as often as men do. For most women, sifting through men feels dehumanizing, and sorting on pictures feels painfully limited (the male equivalent might be having to swipe based on photos of a woman’s favorite outfit, laid out on her bed. Vaguely boring and frustrating to have to make important decisions with so little information about the things you care about).

This isn’t just because of blackpill stuff about how men aren’t hot to women - that topic has been covered to death: yes, women find men physically hot, but no, it doesn’t always work in a way that static photos capture, so men are impossibly screwed by efforts to appeal to women with photos alone. There’s also the fact that men suck at taking pictures, because the market for photos of people is overwhelmingly men as buyers and women as suppliers, with the demand being for sexually attractive photos of women. Looking at photos of men is like driving a Nissan truck: it couldn’t be clearer that this is not your specialty, and the result is significantly worse than the products your entire factory line was designed for.

You might think that dating apps are bad for men because they lead to men experiencing significant rejection - even the way my post is framed up until this point sort of implies as much. That framework, like much about dating apps, gets the whole picture subtly, insidiously wrong in a way that leaves people who take them at face value much worse off. You know who takes things at face value most often? You’re not going to believe this...

No, the greatest deprivation created by dating apps is specifically denying women and men the opportunity for women to keep men around in a general capacity. (If this idea makes you freak out about the friend zone, I’m almost impressed with you because young people seem to do so little socializing that no one complains about the friend zone anymore. Pat yourself on the back for having friends if you’ve managed to develop a resentment complex around the friend zone).

Most women develop attraction to men via proximity and time. Force a woman to choose whether she wants the option to sleep with a man the second she meets him, and she will default to no in almost every single case. Many men conclude from this that the only men women authentically want are the ones women are open to sleeping with at first glance. Respectfully, you’re thinking like a guy, and if you believe that men and women are extremely different, I’m going to need you to trust that women develop affection for men differently than men do for women, such that you’ll ruin your life trying to figure out why women don’t desire you in the exact same way that you desire them...

One of the worst things you can do if you date women is to push them into a choice of yes or no as early as possible. You are simply too much of a risk on too many axes to get something other than a no unless you look like Chris Hemsworth, and even that wouldn’t get you yeses from 100% of the women you might ask out (hot men can still be shitty in about a thousand ways, and women often aren’t willing to take risks even for hotness. Again. They are not men). You might think that your goal should be to look like Chris Hemsworth, or alternatively to despair that you don’t look like Chris Hemsworth and go sulkily into that good night, but that’s you thinking like a guy and assuming that how women feel has to match how you feel. Frankly, that’s what got you into this mess: trusting tech men who told you that you could game heterosexual dating, and who gave you an interface that pinged all your dopamine sensors while curiously robbing you of a lot of opportunities to find and develop a fulfilling relationship. [...]

The major product provided by a dating app is the illusion of participating in dating at all - some time swiping through faces, and congratulations, you are “dating”, you Tried, you do not need to do anything scarier or riskier or less fun than this.

by Eurydice, Eurydice Lives |  Read more:
Image: uncredited via
Posted by markk at Monday, April 06, 2026
Labels: Business, Culture, Media, Psychology, Relationships, Technology

Saturday, April 4, 2026

Go Ahead and Use AI. It Will Only Help Me Dominate You.

Recently there has been a lot of commentary of the following type:

BAD WRITER [touchily]: “Actually, I do use AI to help me write.”

Okay. That checks out. Carry on.

Want to use AI as a Valuable Part of Your Writing Process? Want to use it to “generate pushback on my column thesis” and be “more comprehensible” and “craft unique angles” and offer “positive and negative feedback” and “scale the quantity” of your “output?”

Knock yourself out.

You have my blessing.

Hey buddy— go for it!

Some in the “real writer” community find this sort of rampant outsourcing of the writing process to AI to be distressing. Not me. Would I do it myself? No. I have self-respect. But I want to tell you, my friends, that you have my full support for all of it. Want to throw your dashed-off notes into ChatGPT and have it spit a draft back at you and then edit that and call it your own? Want to toss a few hastily written headlines at Claude and have it generate the outline of your piece? Want to dump your entire career archives into a chatbot and then order it to replicate your own voice so you don’t have to?

Do you, a grown man, a successful professional writer who has received a book deal paying you real US currency, want to use AI for the purpose of “making sure the book matches [your own] writing style”[???]? Guess what, brother: I support you. I affirm you. I am right here offering you a classic thumbs-up gesture of affirmation.

“Whoa, a writer who I have never regarded as particularly inventive is using AI? I am surprised and disappointed.” There’s a sentence I would never utter. Instead, I would accept the news of your AI use with total equanimity, nodding almost imperceptibly to indicate that this is not something worth raising my eyebrows over.

No, I will not be joining in the chorus of condemnation. On the contrary. If you are a professional writer, I want you to use AI. Because this industry is competitive. I’ll take any advantage I can get. And if you want to make your writing suck, that’s all the better for me. One less person outshining me.

The tepid, conformist nature of your AI-assisted prose will only make my unexpected bons mots stand out more sharply. While you lean on a technological crutch of grammatical mediocrity to drag your essays over the finish line, I’ll be metaphorically zipping past you on my “magic carpet” of words emerging directly from my own declining and unpredictable brain. Over time, the intellectual box into which AI has seduced your creative process will suffocate you, leaving your bereft readers little choice but to drift into my subscription base.

You’ll be all, “Politics in America is divided—but it doesn’t have to be. Let’s discuss how to bridge the partisan divide.” Your sense of joy at the possibilities of the English language will have been so eroded that you won’t even understand why that sucks shit. Meanwhile I’ll be dropping some wild similes you could never even imagine. “Politics is like a sea slug.” What?? How?? Readers will flock to me to find out. Too bad your AI editor struck that line from your piece as “indecipherable.”

You and your friend “Claude” wouldn’t last two seconds in my cipher.

Maybe you read the studies about how AI use causes “cognitive surrender” that slowly destroys your ability to think critically about the linguistic cud that the machine is serving you. Or about how it causes “cognitive foreclosure” that prevents you from ever developing the skills to critique AI output even if you wanted to. Maybe these studies give you pause, when you think about introducing these inscrutable tools of mental paralysis into your own creative process.

Don’t worry about it!

Life is hard enough already. You’re busy. You have lots of things to do—laundry, making lunch, and more. The last thing you need is a bunch of jealous (Brooklyn hipster) writers lecturing you about how this magical productivity booster is somehow “bad” for you. Those are probably the same haters who told you to stop doing so much crystal meth. Some people can’t stand to see you succeed!

I just checked a calendar—it’s 2026. AI is here to stay and you might as well beat the rush by using it more and more, right? Right. In the name of efficiency, it just makes sense for you to turn over ever greater portions of your thought process to this seductive helper, never stopping to ask yourself what it is costing you. You are a nice person and your job (writing) deserves to be easy. There, there. Allow yourself to sink into the warm opiate of cerebral ease. This is better. Yes. This is much better.

By all means—proceed.

And then, when you have settled into this comfortable pattern, sit back and watch me unsheathe my massive, work-hardened intellect, built to staggering strength through a daily regimen of thinking about stuff. I think you’ll find that your panicked efforts to resist my onslaught will prove unsuccessful, hampered as you are by atrophied muscles of the mind. Ask your AI companion for some final words of comfort. The hour of your doom draws near.

I will crush you with ease.

by Hamilton Nolan, How Things Work |  Read more:
Image: Getty
[ed. Haha...yep. : ) See also: Who Goes AI? (with respect to Dorothy Thompson's 'Who Goes Nazi', gracefully acknowledged by the author).]
Posted by markk at Saturday, April 04, 2026
Labels: Culture, Fiction, Journalism, Media, Technology

The Big T-Shirt Payoff

The College Student—and His Cat Meme—Who Hunted the World’s Biggest Cyberweapon

Sitting in his dorm room at the Rochester Institute of Technology, Benjamin Brundage was closing in on a mystery that had even seasoned internet investigators baffled. A cat meme helped him crack the case.

A growing network of hacked devices was launching the biggest cyberattacks ever seen on the internet. It had become the most powerful cyberweapon ever assembled, large enough to knock a state or even a small country offline. Investigators didn’t know exactly who had built it—or how.
 
Brundage had been following the attacks, too—and, in between classes, was conducting his own investigation. In September, the college senior started messaging online with an anonymous user who seemed to have insider knowledge.

As they chatted on Discord, a platform favored by videogamers, Brundage was eager to get more information, but he didn’t want to come off as too serious and shut down the conversation. So every now and then he’d send a funny GIF to lighten the mood. Brundage was fluent in the memes, jokes and technical jargon popular with young gamers and hackers who are extremely online.

“It was a bit of just asking over and over again and then like being a bit unserious,” said Brundage.

At one point, he asked for some technical details. He followed up with the cat meme: a six-second clip that showed a hand adjusting a necktie on a fluffy gray cat.

Brundage didn’t expect it to work, but he got the information. “It took me by surprise,” he said.

Eventually the leaker hinted there was a new vulnerability on the internet. Brundage, who is 22, would learn it threatened tens of millions of consumers and as much as a quarter of the world’s corporations. As he unraveled the mystery, he impressed veteran researchers with his findings—including federal law enforcement, which took action against the network two weeks ago.

Chad Seaman, a researcher at Akamai, joked at one point that the internet could go down if Brundage spent too much time on his exams.

Early warning

Three times a year, several hundred of the techies who keep North America’s internet running gather to talk shop. Last June they met at a conference in Denver hosted by the North American Network Operators’ Group.

One major topic was a fast-growing and often legally dubious business known as residential proxy networks. Dozens of companies around the world run such networks, which are made up of consumer devices like phones, computers and video players.

These “res proxy” companies rent out access to internet connections on the devices to customers who want to look like they’re surfing the internet from a genuine home address.

That kind of access is useful for people who want privacy or for companies that want to masquerade as regular people to test out internet features for particular regions or scrape the web for data (say, a shopping price-comparison site). AI companies use the networks to get around blocks on automated traffic so they can gather large amounts of data to train their models.

Then there are the customers who want to hide their identity while engaging in ticket scalping, bank fraud, bomb threats, stalking, child exploitation, hacking or espionage.

Some device owners willingly sign up to be on these networks so they can make a few dollars a month, but most have no idea they’re connected to one.

At the Denver conference, Craig Labovitz was alarmed. The Nokia executive had been tracking the data flows of the internet’s infrastructure for years, and he knew the network’s data centers, chokepoints and design better than most.

Starting in January 2025, Nokia’s sensors had picked up a series of increasingly powerful cyberattacks coming from devices that hadn’t previously been considered dangerous. Called distributed denial of service, or DDoS, attacks, these were massive floods of junk internet data designed to knock websites offline by overwhelming the data pipes that connected them. These attacks are sometimes launched by extortionists or even business rivals seeking to sabotage computer networks.

Nokia saw hundreds of thousands of devices joining in these attacks. One unprecedented attack later in the year on internet service provider Cloudflare was “comparable to the combined populations of the UK, Germany, and Spain all simultaneously typing a website address and then hitting ‘enter’ at the same second,” Cloudflare said.
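Cloudflare's analogy can be put in rough numbers. A back-of-envelope sketch (the population figures below are approximations I'm supplying for illustration, not from the article):

```python
# Approximate 2024 populations, in millions (assumed figures, not from the article)
populations = {"UK": 68, "Germany": 84, "Spain": 48}

# Cloudflare's analogy: everyone in all three countries hitting "enter" at once,
# i.e. roughly this many simultaneous requests, in millions
simultaneous_requests_millions = sum(populations.values())
print(simultaneous_requests_millions)  # -> 200
```

That is on the order of 200 million simultaneous requests, which gives a sense of why a flood at this scale overwhelms the data pipes connecting a website to the internet.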

The network, which would become known as Kimwolf, seemed to be using residential proxy connections to launch its attacks, giving it the potential to do massive damage.

“The basic message was, ‘Be afraid,’” Labovitz remembers. [...]

Instead he applied his hacking skills toward legitimate cybersecurity research. In his senior year of high school, he found bugs in websites belonging to the Dutch government and reported them via a “bug bounty” program that offered hackers prizes for unearthing security flaws. A few months later, the Dutch National Cyber Security Center mailed him his bounty: a black T-shirt. It read: “I hacked the Dutch government and all I got was this lousy t-shirt.”

He remembers it as one of the most rewarding experiences of his young life: a “dopamine rush,” he said. [...]

On March 19, federal authorities announced they’d disrupted four of the world’s largest DDoS botnets, including Kimwolf. Kimwolf had launched more than 26,000 DDoS attacks targeting over 8,000 victims, according to a court filing. The press release announcing the takedown thanked Brundage’s company, Synthient, among others.

Industry experts say that Kimwolf today is a shadow of its former self. The cybersecurity firm Netscout says it’s seeing about 30,000 Kimwolf machines active at any given time.

Brundage recently got a text message from a federal official on the case. The official had heard about the bug bounty Brundage got from the Dutch government years ago and had a question: “What’s a good address to mail you a t-shirt, and what’s your size?”

by Robert McMillan, Wall Street Journal |  Read more:
Image: via
[ed. Here's how to protect yourself.]
Posted by markk at Saturday, April 04, 2026
Labels: Business, Crime, Education, Media, Security, Technology

Wednesday, April 1, 2026

The AI Doc

 

(This will be a fully spoilorific overview. If you haven’t seen The AI Doc, I recommend seeing it; it is about as good as it could realistically have been, in most ways.)

Like many things, it only works because it is centrally real. The creator of the documentary clearly did get married and have a child, freak out about AI, ask questions of the right people out of worry about his son’s future, freak out even more now with actual existential risk for (simplified versions of) the right reasons, go on a quest to stop freaking out and get optimistic instead, find many of the right people for that and ask good non-technical questions, get somewhat fooled, listen to mundane safety complaints, seek out and get interviews with the top CEOs, try to tell himself he could ignore all of it, then decide not to end on a bunch of hopeful babies and instead have a call for action to help shape the future.

The title is correct. This is about ‘how I became an Apocaloptimist,’ and why he wanted to be that, as opposed to an argument for apocaloptimism being accurate. The larger Straussian message, contra Tyler Cowen, is not ‘the interventions are fake’ but that ‘so many choose to believe false things about AI, in order to feel that things will be okay.’

A lot of the editing choices, and the selections of what to intercut and clip, clearly come from an outsider without technical knowledge, trying to deal with their anxiety. Many of them would not have been my choices, especially the emphasis on weapons and physical destruction, but I think they work exactly because together they make it clear the whole thing is genuine.

Now there’s a story. It even won praise online as fair and good, from both those worried about existential risk and several of the accelerationist optimists, because it gave both sides what they most wanted. [...]

Yes, you can do that for both at once, because they want different things and also agree on quite a lot of true things. That is much more impactful than a diatribe.

We live in a world of spin. Daniel Roher is trying to navigate a world of spin, but his own earnestness shines through, and he makes excellent choices on who to interview. His being swayed by whoever is in front of him is a feature, not a bug, because he’s not trying to hide it. There are places where people are clearly trying to spin, or are making dumb points, and I appreciated him not trying to tell us which was which.

MIRI offers us a Twitter FAQ thread and a full website FAQ explaining their full position in the context of the movie, which is that no, this is not hype; yes, it is going to kill everyone if we keep building it; and no, our current safety techniques will not help with that. They call for an international treaty.

Are there those who think this was propaganda or one sided? Yes, of course, although they cannot agree on which angle it was trying to support.

Babies Are Awesome

The overarching personal journey is about Daniel having a son. The movie takes one very clear position, that we need to see taken more often, which is that getting married and having a family and babies and kids are all super awesome.

This turns into the first question he asks those he interviews. Would you have a child today, given the current state of AI? [...]

People Are Worried About AI Killing Everyone

The first set of interviews outlines the danger.

This is not a technical film. We get explanations that resonate with an ordinary dude.

We get Jeffrey Ladish explaining the basics of instrumental convergence, the idea that if you have a goal then power helps you achieve that goal and you cannot fetch the coffee if you’re dead. That it’s not that the AI will hate us, it’s that it will see us like we see ants, and if you want to put a highway where the anthill is that’s the ant’s problem.

We get Connor Leahy talking about how creating smarter and more capable things than us is not a safe thing to be doing, and emphasizing that you do not need further justification for that. We get Eliezer Yudkowsky saying that if you share a planet with much smarter beings that don’t care about you and want other things, you should not like your chances. We get Ajeya Cotra explaining additional things, and so on.

Aside from that, we don’t get any talk of the ‘alignment problem,’ and as far as I can remember the word ‘alignment’ never even appears in the film.

It is hard for me to know how much the arguments resonate. I am very much not the target audience. Overall I felt they were treated fairly, and the arguments were both strong and highly sufficient to carry the day. Yes, obviously we are in a lot of trouble here.

Freak Out

Daniel’s response is, quite understandably and correctly, to freak out.

Then he asks, very explicitly, is there a way to be an optimist about this? Could he convince himself it will all work out?

by Zvi Mowshowitz, DWAtV |  Read more:
Posted by markk at Wednesday, April 01, 2026
Labels: Critical Thought, Design, Media, Movies, Politics, Science, Security, Technology




markk_213 at yahoo.com

*Note. All content on this site unless specifically attributed to the editor has been obtained from other sources. A link at the bottom of each post will direct readers to the material in its full and original form. All posts are strictly for educational purposes (directing readers to original sources). If content providers prefer to have their material removed, please contact me at the email address listed above. None of the items posted here are, or should be, used for commercial purposes (nor are used for such purposes here). They are presented solely to promote the ideas, reporting and art of the people that produced them.

(DMCA designated agency registration no.: DMCA-1042791)