Wednesday, October 15, 2025

Everything Is Television

A spooky convergence is happening in media. Everything that is not already television is turning into television. Three examples:

1. You learn a lot about a company when its back is against the wall. This summer, we learned something important about Meta, the parent company of Facebook and Instagram. In an antitrust case with the Federal Trade Commission, Meta filed a legal brief on August 6, in which it made a startling claim. Meta cannot possibly be a social media monopoly, Meta said, because it is not really a social media company.

Only a small share of time spent on its social-networking platforms is truly “social” networking—that is, time spent checking in with friends and family. More than 80 percent of time spent on Facebook and more than 90 percent of time spent on Instagram involves content that has nothing to do with “friend sharing,” the company reported, and the majority of time on both apps is spent watching videos—mostly from creators whom the user does not know. From the FTC filing:
Today, only a fraction of time spent on Meta’s services—7% on Instagram, 17% on Facebook—involves consuming content from online “friends” (“friend sharing”). A majority of time spent on both apps is watching videos, increasingly short-form videos that are “unconnected”—i.e., not from a friend or followed account—and recommended by AI-powered algorithms Meta developed as a direct competitive response to TikTok’s rise, which stalled Meta’s growth.
Social media has evolved from text to photo to video to streams of text, photo, and video, and finally, it seems to have reached a kind of settled end state, in which TikTok and Meta are trying to become the same thing: a screen showing hours and hours of video made by people we don’t know. Social media has turned into television.

2. When I read the Meta filing, I had been thinking about something very different: the future of my podcast, Plain English.

When podcasts began, they were radio for the Internet. That really appealed to me when I launched my show. I never watch the news on television, I love listening to podcasts while I make coffee and go on walks, and I’d prefer to make the sort of media that I consume. Plus, as a host, I wanted the conversations to focus on the substance of the words rather than on ancillary concerns like production value and lighting.

But the most successful podcasts these days are all becoming YouTube shows. Industry analysts say consumption of video podcasts is growing twenty times faster than that of audio-only shows, and more than half of the world’s top shows now release video versions. YouTube has quietly become the most popular platform for podcasts, and it’s not even close. On Spotify, the number of video podcasts has nearly tripled since 2023, and video podcasts are significantly outgrowing non-video podcasts. Does it really make sense to insist on an audio-only podcast in 2025? I do not think so. Reality is screaming loudly in my ear, and its message is clear: Podcasts are turning into television.

3. In the last few weeks, Meta introduced a product called Vibes, and OpenAI announced Sora. Both are AI social networks where users can watch endless videos generated by artificial intelligence. (For your amusement, or horror, or whatever, here are: Sam Altman stealing GPUs at Target to make more AI; the O.J. Simpson trial as an amusement park ride; and Stephen Hawking entering a professional wrestling ring.)

Some tech analysts predict that these tools will lead to an efflorescence of creativity. “Sora feels like enabling everyone to be a TikTok creator,” the investor and tech analyst MG Siegler wrote. But the internet’s history suggests that, if these products succeed, they will follow what Ben Thompson calls the 90/9/1 rule: 90 percent of users consume, 9 percent remix and distribute, and just 1 percent actually create. In fact, as Scott Galloway has reported, 94 percent of YouTube views come from 4 percent of videos, and 89 percent of TikTok views come from 5 percent of videos. Even the architects of artificial intelligence, who imagine themselves on the path to creating the last invention, are busy building another infinite sequence of video made by people we don’t know. Even AI wants to be television.

Too Much Flow


Whether the starting point is a student directory (Facebook), radio, or an AI image generator, the end point seems to be the same: a river of short-form video. In mathematics, the word “attractor” describes a state toward which a dynamic system tends to evolve. To take a classic example: Drop a marble into a bowl, and it will trace several loops around the bowl’s curves before settling to rest at the bottom. In the same way, water draining in a sink will ultimately form a spiral pattern around the drain. Complex systems often settle into recurring forms, if you give them enough time. Television seems to be the attractor of all media.
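
To make the metaphor concrete, here is a tiny numerical sketch in Python (my own illustration, with invented constants, not from the essay): a damped system settles to the same resting point no matter where the marble starts.

def settle(x, velocity=0.0, steps=400):
    """Overdamped marble in a bowl: the force always pulls back toward the bottom."""
    k, c, dt = 1.0, 0.8, 0.1   # spring constant, damping, time step (all invented)
    for _ in range(steps):
        accel = -k * x - c * velocity   # restoring force plus friction
        velocity += accel * dt
        x += velocity * dt
    return x

for start in (-3.0, 0.5, 7.0):
    print(f"start={start:+.1f} -> settles near {settle(start):.4f}")
# Every starting point converges on x = 0: the attractor.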

By “television,” I am referring to something bigger than broadcast TV, the cable bundle, or Netflix. In his 1974 book Television: Technology and Cultural Form, Raymond Williams wrote that “in all communications systems before [television], the essential items were discrete.” That is, a book is bound and finite, existing on its own terms. A play is performed in a particular theater at a set hour. Williams argued that television shifted culture from discrete and bounded products to a continuous, streaming sequence of images and sounds, which he called “flow.” When I say “everything is turning into television,” what I mean is that disparate forms of media and entertainment are converging on one thing: the continuous flow of episodic video.

By Williams’s definition, platforms like YouTube and TikTok are an even more perfect expression of television than old-fashioned television itself. On NBC or HBO, one might tune in to watch a show that feels particular and essential. On TikTok, by contrast, nothing is essential; any one piece of content is incidental. The platform’s allure is the infinitude promised by its algorithm. It is the flow, not the content, that is primary.

One implication of “everything is becoming television” is that there really is too much television—so much, in fact, that some TV is now made with the assumption that audiences are always already distracted and doing something else. Netflix producers reportedly instruct screenwriters to make plots as obvious as possible, to avoid confusing viewers who are half-watching—or quarter-watching, if that’s a thing now—while they scroll through their phones. (...)

Among Netflix’s 36,000 micro-genres, one is literally called “casual viewing.” The label is reportedly reserved for sitcoms, soap operas, or movies that, as the Hollywood Reporter recently described the 2024 Jennifer Lopez film Atlas, are “made to half-watch while doing laundry.” ... The whole point is that it’s supposed to just be there, glowing, while you do something else. Perhaps a great deal of television is not meant to absorb our attention at all, but rather to dab away at it, to soak up tiny droplets of our sensory experience while our focus dances across other screens. You might even say that much television is not made to be watched at all. It is made to flow. The play button is the point.

Lonely, Mean, and Dumb

… and why does this matter? Fine question. And, perhaps, this is a good place for a confession. I like television. I follow some spectacular YouTube channels. I am not on Instagram or TikTok, but most of the people I know and love are on one or both. My beef is not with the entire medium of moving images. My concern is what happens when the grammar of television rather suddenly conquers the entire media landscape.

In the last few weeks, I have been writing a lot about two big trends in American life that do not necessarily overlap. My work on the “Antisocial Century” traces the rise of solitude in American life and its effects on economics, politics, and society. My work on “the end of thinking” follows the decline of literacy and numeracy scores in the U.S. and the handoff from a culture of literacy to a culture of orality. Neither of these trends is exclusively caused by the logic of television colonizing all media. But both trends are significantly exacerbated by it. 

Television’s role in the rise of solitude cannot be overlooked. In Bowling Alone, the Harvard scholar Robert Putnam wrote that between 1965 and 1995, the typical adult gained six hours a week in leisure time. As I wrote, they could have used those additional 300 hours a year to learn a new skill, or participate in their community, or have more children. Instead, the typical American funneled almost all of this extra time into watching more TV. Television instantly changed America’s interior decorating, relationships, and communities: (...)

Digital media, empowered by the serum of algorithmic feeds, has become super-television: more images, more videos, more isolation. Home-alone time has surged as our devices have become bottomless feeds of video content. Rather than escaping the solitude crisis that Putnam described in the 1990s, we seem to have sunk deeper into it. (Not to mention: meaner and stupider, too.)

It would be rash to blame our berserk political moment entirely on short-form video, but it would be careless to forget that some people really did try to warn us that this was coming. In Amusing Ourselves to Death, Neil Postman wrote that “each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility.” Television speaks to us in a particular dialect, Postman argued. When everything turns into television, every form of communication starts to adopt television’s values: immediacy, emotion, spectacle, brevity. In the glow of a local news program, or an outraged news feed, the viewer bathes in a vat of their own cortisol. When everything is urgent, nothing is truly important. Politics becomes theater. Science becomes storytelling. News becomes performance. The result, Postman warned, is a society that forgets how to think in paragraphs, and learns instead to think in scenes. (...)

When literally everything becomes television, what disappears is not something so broad as intelligence (although that seems to be going, too) but something harder to put into words, and even harder to prove the value of. It’s something like inwardness. The capacity for solitude, for sustained attention, for meaning that penetrates inward rather than swipes away at the tip of a finger: These virtues feel out of step with a world where every medium is the same medium, and everything in life converges on the value system of television.

by Derek Thompson |  Read more:
Image: Ajeet Mestry on Unsplash
[ed. See also: The Last Days Of Social Media (Noema).]


Daniel G. Jay, Heisenberg and Schrödinger’s Cat #3, 2025

The Limits of Data

Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?

It’s tempting to use the term intangible when what we really mean is that such things are hard to quantify with the kinds of measuring tools used by modern bureaucratic institutions. The gap between reality and what’s easy to measure shows up everywhere. Consider cost-benefit analysis, which is supposed to be an objective—and therefore unimpeachable—procedure for making decisions by tallying up expected financial costs and expected financial benefits. But the process is deeply constrained by the kinds of cost information that are easy to gather. It’s relatively straightforward to provide data to support claims about how a certain new overpass might help traffic move efficiently, get people to work faster, and attract more businesses to a downtown. It’s harder to produce data in support of claims about how the overpass might reduce the beauty of a city, or how the noise might affect citizens’ well-being, or how a wall that divides neighborhoods could erode community. From a policy perspective, anything hard to measure can start to fade from sight.

An optimist might hope to get around these problems with better data and metrics. What I want to show here is that these limitations on data are no accident. The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.

I’m not just talking about contingent social biases. Obviously, datasets are bad when the collection procedures are biased—when they oversample by race, gender, or wealth, for example. But even if analysts can correct for those sorts of biases, there are other, intrinsic biases built into the methodology of data. Data collection techniques must be repeatable across vast scales. They require standardized categories. Repeatability and standardization make data-based methods powerful, but that power has a price. It limits the kinds of information we can collect. (...)

These limitations are particularly worrisome when we’re thinking about success—about targets, goals, and outcomes. When actions must be justified in the language of data, then the limitations inherent in data collection become limitations on human values. And I’m not worried just about perverse incentives and situations in which bad actors game the metrics. I’m worried that an overemphasis on data may mislead even the most well-intentioned of policymakers, who don’t realize that the demand to be “objective”—in this very specific and institutional sense—leads them to systematically ignore a crucial chunk of the world.

Decontextualization

Not all kinds of knowledge, and not all kinds of understanding, can count as information and as data. Historian of quantification Theodore Porter describes “information” as a kind of “communication with people who are unknown to one another, and who thus have no personal basis for shared understanding.” In other words, “information” has been prepared to be understood by distant strangers. The clearest example of this kind of information is quantitative data. Data has been designed to be collected at scale and aggregated. Data must be something that can be collected by and exchanged between different people in all kinds of contexts, with all kinds of backgrounds. Data is portable, which is exactly what makes it powerful. But that portability has a hidden price: to transform our understanding and observations into data, we must perform an act of decontextualization.

An easy example is grading. I’m a philosophy professor. I issue two evaluations for every student essay: one is a long, detailed qualitative evaluation (paragraphs of written comments) and the other is a letter grade (a quantitative evaluation). The quantitative evaluation can travel easily between institutions. Different people can input into the same system, so it can easily generate aggregates and averages—the student’s grade point average, for instance. But think about everything that’s stripped out of the evaluation to enable this portable, aggregable kernel.

Qualitative evaluations can be flexible and responsive and draw on shared history. I can tailor my written assessment to the student’s goals. If a paper is trying to be original, I can comment on its originality. If a paper is trying to precisely explain a bit of Aristotle, I can assess it for its argumentative rigor. If one student wants to be a journalist, I can focus on their writing quality. If a nursing student cares about the real-world applications of ethical theories, I can respond in kind. Most importantly, I can rely on our shared context. I can say things that might be unclear to an outside observer because the student and I have been in a classroom together, because we’ve talked for hours and hours about philosophy and critical thinking and writing, because I have a sense for what a particular student wants and needs. I can provide more subtle, complex, multidimensional responses. But, unlike a letter grade, such written evaluations travel poorly to distant administrators, deans, and hiring departments.

Quantification, as used in real-world institutions, works by removing contextually sensitive information. The process of quantification is designed to produce highly portable information, like a letter grade. Letter grades can be understood by everybody; they travel easily. A letter grade is a simple ranking on a one-dimensional spectrum. Once an institution has created this stable, context-invariant kernel, it can easily aggregate this kind of information—for students, for student cohorts, for whole universities. A pile of qualitative information, in the form of thousands of written comments, for example, does not aggregate. It is unwieldy, bordering on unusable, to the administrator, the law school admissions officer, or future employer—unless it has been transformed and decontextualized.

So here is the first principle of data: collecting data involves a trade-off. We gain portability and aggregability at the price of context-sensitivity and nuance. What’s missing from data? Data is designed to be usable and comprehensible by very different people from very different contexts and backgrounds. So data collection procedures tend to filter out highly context-based understanding. Much here depends on who’s permitted to input the data and who the data is intended for. 
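
To see the trade-off in miniature, here is a short Python sketch (the courses, grades, and comments are invented for illustration, not drawn from the essay). The letter grades collapse into one portable, aggregable number; the comments only pile up.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

transcript = [
    ("Ethics",    "A", "Original argument; engage more with the strongest objection."),
    ("Logic",     "B", "Rigorous proofs, but the prose needs tightening."),
    ("Aristotle", "A", "Precise reconstruction of the function argument."),
]

# The grades map onto one dimension, so they aggregate into a GPA that
# travels to any dean, admissions office, or employer.
gpa = sum(GRADE_POINTS[grade] for _, grade, _ in transcript) / len(transcript)
print(f"GPA: {gpa:.2f}")  # -> GPA: 3.67

# The comments share no scale. The best an institution can do is
# concatenate them, and a pile of prose does not average.
for course, _, comment in transcript:
    print(f"{course}: {comment}")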

by C. Thi Nguyen, Issues in Science and Technology |  Read more:
Image: Shonagh Rae

Tuesday, October 14, 2025

Is it Really Different this Time?

What is amusing is just how much talk there has been about the AI investment bubble, and what it will do or not do to the markets and the economy when it implodes or doesn’t implode: That it’s almost like the peak of the Dotcom Bubble. That it’s much worse than the peak of the Dotcom Bubble. That it’s nothing like the Dotcom Bubble because this time it’s different. That even if it’s like the Dotcom Bubble and then turns into the Dotcom Bust, or worse, it’s still worth it because AI will be around and change the world, just like the Internet is still around and changed the world, even if those first investors got wiped out, or whatever.

There are many voices that loudly point out just how risky it is to bet on hocus-pocus money, or that explain in detail why this isn’t risky at all, why this is nothing like the Dotcom Bubble, why this time it’s different – the four most dangerous words in investing.

The talk fills the spectrum, and these are people with enough stature to be quoted in the media: Jamie Dimon, Jeff Bezos, the Bank of England, Goldman Sachs analysts, IMF Managing Director Kristalina Georgieva…

The focus is on the big-tech-big-startup circularity of hocus-pocus deals among Nvidia, OpenAI, and AMD, along with Amazon, Microsoft, Alphabet, Meta, Tesla, Oracle, and many others, including SoftBank, of course.

OpenAI now has an official “valuation” — based on its secondary stock offering — of $500 billion, though it’s bleeding increasingly huge amounts of cash. And there are lots of players in between and around them. They all toss around announcements of AI hocus-pocus deals between them.

OpenAI has announced deals totaling $1 trillion with a small number of tech companies, at the top of which are Nvidia ($500 billion), Oracle ($300 billion), and AMD ($270 billion). Each of these announcements causes the stocks of these companies to spike massively – the direct and immediate effects of hocus-pocus money.

OpenAI obviously doesn’t have $1 trillion; it’s burning prodigious amounts of cash. And so it’s trying to rake in investment commitments from the same companies that it would buy equipment from, and engineer creative deals that cause these stock prices to spike, and so the hocus-pocus money announcements keep circulating.

OpenAI’s idea of building data centers with Nvidia GPUs that would require 10 gigawatts (GW) of power is just mind-boggling. The biggest nuclear power plant in the US, Plant Vogtle in Georgia, with four reactors, including two that came online in 2023 and 2024, has a generating capacity of about 4.5 GW. All nuclear power plants in the US combined have a generating capacity of 97 GW. In other words, OpenAI’s plan alone would require more than twice the output of the country’s largest nuclear plant, roughly a tenth of the entire US nuclear fleet.

But it’s real money too. A lot of real money.

Big Tech is letting its huge piles of cash spill out into the economy to build this vast empire of technology that requires data centers that would consume huge amounts of electricity to let AI do its thing.

And these “hyperscalers” are leveraging that money flow with borrowing, by issuing large amounts of bonds.

And private credit has jumped into the mania to provide further leverage, lending large amounts to data-center startup “neocloud” companies that plan to build data centers and rent out the computing power; those loans are backed with collateral, namely the AI GPUs. No one knows what a used GPU, three years old and superseded by newer generations, will be worth three years from now, when the lenders might want to collect on a defaulted loan, but that’s the collateral.

The data centers are getting built. The costs of the equipment in them – revenues for the companies that provide this equipment and related services – dwarf the costs of the buildings themselves. And stocks of the companies that supply this equipment and these services have been surging.

The bottleneck is power, and funds are flowing into that too, but it takes a long time to build power plants and transmission infrastructure.

Is it really different this time?

So there is this large-scale industrial aspect of the AI investment bubble. That was also the case in the Dotcom Bubble. The telecom infrastructure needed to be built out at great cost. Fiberoptics made the internet what it is today. Those fibers needed to be drawn and turned into cables, and the cables needed to be laid across the world, and the servers, routers, and other equipment needed to be installed, and services were invented and provided, and businesses and households needed to be connected, and it was all real, and it was all very costly, requiring huge investments, but progress was slow and revenues lagging, and then these overhyped stocks just imploded under that weight, along with the stocks that were the pioneers of ecommerce, internet advertising, streaming, and whatnot.

The Nasdaq, where much of it was concentrated, plunged by 78% over a period of two-and-a-half years, investors lost huge amounts of money, many got wiped out, thousands of companies and their stocks vanished or were bought for scrap when that investment bubble crashed. And a year into the crash, it triggered a recession in the US – and a mini-depression in Silicon Valley and San Francisco where much of this had played out.

Yet the internet thrived. Amazon barely survived and then thrived in that new environment. But Amazon was one of the exceptions.

In this mania of hype, hocus-pocus deals, and huge amounts of real money fortified by leverage – all of which have caused stock prices to explode – markets become edgy. Everyone is talking about it, everyone sees it, they’re all edgy, regardless of their narrative – whether a big selloff is inevitable, with deep consequences for the US economy, or whether this time it’s different and the mania can go on and isn’t even halfway done.

Whatever the narrative, it says RISK in all caps. Anything can prod these stock prices at their precarious levels into a sudden U-turn, and if the selloff goes on long enough, the investment bubble would come to a halt, the hocus-pocus deals would be just that, and the whole construct would come apart. But AI would still be around doing its thing, just like the Internet.

by Wolf Richter, Wolf Street |  Read more:
Image: Alexas_Fotos on Unsplash

The Gospel According to South Park

Somehow, five years have passed since the COVID summer of 2020. My son had just “finished” fourth grade. His mother and I, the distracted parents of him and his seven-year-old sister, were both reeling from cabin fever. It felt like we were hanging on to our sanity, and our marriage, by a thread.

We held on to both, thankfully. Our kids seem to have recovered, too. But by this time that summer, it’s fair to say we had completely “lost contain” of our children. Even under normal conditions, we’ve favored a loose-reins approach to parenting, with a healthy dose of Lenore Skenazy-style “Free Range Parenting.” But that summer? I gave up entirely. I let my son watch TV. A lot of TV.

By the time school resumed, he had watched every episode of The Simpsons and every episode of South Park.

At the time, I felt more than a little guilty about letting a 10-year-old binge-watch two decades of South Park. It was a bit early, I thought, for him to be learning proper condom application techniques from Mr. Garrison. When I told friends later, the story always got a laugh – a kind of comic confession from a parent who’d fallen asleep at the wheel.

But as my son made his way through middle school and into high school, something changed. One night over dinner, we were talking about wars when I mentioned Saddam Hussein. My son chimed in casually – he knew exactly who Saddam was. I asked him how. His answer: “South Park.”

That kept happening. From Michael Jackson and Neverland Ranch, to Mormonism, to the NSA, to wokeism … my son was not only familiar with these topics, he was informed, funny, and incisively skeptical. I realized that this crash course from Butters and Cartman and Mr. Mackey had functioned like one of those downloads Neo gets in The Matrix; except that instead of instantly learning martial arts, my son had instantly become culturally literate. And, just as important, that literacy came wrapped in a sense of humor rooted in satire, absurdity, and a deep mistrust of power, regardless of party affiliation.

He jokes about Joe Biden’s senility and Trump’s grifting grossness. He refers to COVID-era masking as “chin diapers,” a phrase South Park coined while many adults were still double-masking alone in their cars. It struck me: my greatest parenting lapse had somehow turned into one of my best decisions.

Of course, it’s not just that South Park is anti-authority and unapologetically crude. So was Beavis and Butt-Head. The difference is that South Park is crafted. It endures not just because of what it says, but because of how it’s made – with discipline, speed, and storytelling intelligence.

South Park co-creators Trey Parker and Matt Stone are master storytellers. In a short video that should be required viewing for anyone who writes, they explain that if the beats, or scenes, of your story are best linked by the phrase “and then,” you’re doing it wrong. Instead, each scene should be connected by “therefore” or “but.” It’s deceptively simple, and it’s the single best explanation of narrative momentum I’ve ever seen. (Watch it here.)

Combine that storytelling mastery with a relentless work ethic that has allowed them to churn out weekly takes on almost every major current event of the last three decades, and you get the South Park that we know and (that most of us) love today. A generational institution that’s still funny.

And still winning.

Just days after closing a new five-year, $1.5 billion deal with Paramount+, South Park opened its 27th season with an episode titled “Sermon on the Mount,” which gleefully eviscerated both President Trump and Paramount+. What’s the point of having “fuck you money” if you never say “fuck you”? (...)

And the difference between South Park and the late-night crowd isn’t just about the comedy. It’s about the message. During COVID, while Colbert and others were fawning over Fauci, hawking Pfizer ads, and pushing for school closures, South Park was mocking all of it – the masks, the panic, the bureaucratic gaslighting. As a concept, “chin diapers” wasn’t just funny – it was accurate.

When comedy becomes propaganda, it stops being funny. Parker and Stone have never forgotten that the job is to make people laugh. That means skewering whoever is in power, without asking for permission.

Late-night talk shows are dying, not entirely but primarily because the product is borderline unwatchable. But, despite the best efforts of the hall-monitor, cancel-culture crowd, satire – real, cutting, offensive, hilarious satire – is alive and well. My son, now in high school, is living proof. He is a great conversationalist, comfortable speaking with just about anyone of any age, in large part thanks to a show I once felt guilty for letting him watch.

As it turns out, enrolling my son in summer school at South Park Elementary wasn’t a parenting blunder at all. And, of course, Parker and Stone had it right from the beginning.

by Jeremy Radcliffe, Epsilon Theory | Read more:
Image: South Park
[ed. They'll pick it all up from classmates anyway. I think my son was near that age, maybe about 12, when I took him to see Pulp Fiction.]

In Praise of the Faroe Islands

Due to its small size and limited variation, I wouldn’t say it’s the single most beautiful nation on earth (I’d give that to New Zealand), but it’s certainly in the very top tier of the most beautiful places on earth. What stands out about the Faroe Islands’ beauty is that every single place you set foot will be beautiful. There is no real need to go to any specific destinations (there aren’t even national parks or “nature zones” in the Faroe Islands), as there is incredible beauty at every point. And no matter where you go, you will always be in nature, surrounded by a quiet that feels completely removed from the modern world. (...)

In many places, “culture” feels like an aesthetic layer—a set of foods, clothing styles, or historical anecdotes. But in the Faroes, it feels deeper, like a shared operating system. When you speak to any person there, it’s immediately clear they are all operating from the same framework—a worldview that is both deeply felt and meaningfully distinct from the rest of the world.

Conservative intellectuals on Twitter and Substack are constantly sketching out their ideal society: a high-trust community rooted in family (fertility rates are high), self-sufficiency, and continuity with the past. They dream of a life lived closer to the land, with a strong sense of personal responsibility. By almost any of their metrics, the Faroe Islands is the most successful conservative nation on earth. And yet, it is also a profoundly liberal place. It’s cosmopolitan and highly educated. There is a massive social safety net and great equality, a deep belief in the collective over the individual, and a culture where economic aspiration doesn’t dominate life. It is, in many ways, the idyllic left-wing society. The Faroe Islands seems to have achieved the goals of both political tribes simultaneously, without any of the ideological warfare.

What makes the Faroe Islands special in my opinion is not that it’s so nice, but that it’s so nice yet has no desire to optimize or make more efficient (or exploit) anything to become even “nicer.” This is unusual, as most successful places reached their status by climbing a cutthroat ladder, trading off nearly everything in pursuit of greater efficiency.

To give the simplest example: the Faroe Islands are a series of islands, some of which have fewer than 10 people living on them, and are otherwise quite isolated from each other. No matter: the Faroe Islands, with a “we are all one” ethos, have power and internet going to every corner of the nation, with subsidized helicopter rides and ferries to even the smallest islands to make sure life can feel connected for all Faroese people. Better known, the Faroe Islands have built impressive and incredibly expensive undersea tunnels connecting all of the major and proximate islands to each other.

They spend this money not to make the islands more productive or efficient, but simply because they believe all Faroese people should be connected. The infrastructure exists for solidarity, not optimization. A consultant would call the tunnels and helicopter subsidies a spectacular misallocation of capital. But this misses the point entirely: they are treating the tunnels and ferries as social infrastructure, not economic infrastructure.

by Daniel Frank, not not Talmud |  Read more:
Images: Daniel Frank
[ed. At first I thought this was about the Falkland Islands (off the tip of South America). Then realized I didn't know where the Faroes were at all.]

Cameras Capture Every Fan’s Reaction

As Jorge Polanco hit his second home run Sunday night, a row of fans in Section 211 quickly unveiled a five-person-long Mariners flag. Meanwhile, in Section 120, a fan in a white Julio Rodriguez jersey tried to high-five everyone in the row behind him. And in Section 308, a once-full beverage cup appeared to soar when someone lost their grip amid the excitement.

The reactions were all captured by a multicamera system that photographs every fan at T-Mobile Park during big moments, like Polanco’s home run, or smaller moments, like the Hydro Challenge.

If you were at a Mariners home game this season, you can see what you looked like and then download dozens of those free images: a ball going out of the park, hot dogs from heaven parachuting from the upper deck, or everyone singing along during the seventh-inning stretch. And if you’re at Friday’s Game 5 against the Detroit Tigers at T-Mobile Park, remember to smile — you’re on camera.

The camera system belongs to Momento, a Chicago-based company that also photographs fans at Seattle Seahawks and Seattle Kraken games, among other professional teams.

The Mariners’ partnership with Momento started last year, but its popularity has surged with the baseball team winning its first American League West title since 2001. More than 22,000 images have been downloaded from the Mariners’ first two ALDS games alone, according to founder and CEO Austin Fletcher, compared with an average of 1,000 downloads per game during the regular season.

“With the excitement of the Mariners’ postseason, I think it really just helps teams connect with their fans in a really authentic way,” Fletcher said.

To view images, users go to a website run by Momento, choose the team and specific game, then input their section, row, and seat. After submitting a name, contact information, and birth date — not for verification, but for analytics that go to the Mariners — a fan can see photos of themselves and the people around them in different formats: just the image, one that looks like a ticket with their seat number, or a GIF of multiple photos showing movement.

The photos are labeled by moments from the game — Sunday’s game had crowd images from Polanco’s two home runs, Rodriguez’s double and the moment the Mariners won.
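
As described, the product is essentially a seat-indexed photo archive: each captured “moment” maps a (section, row, seat) triple to an image. Here is a minimal sketch of that lookup in Python; the structure and names are my assumptions for illustration, since Momento’s actual schema and API are not public.

from dataclasses import dataclass, field

Seat = tuple[str, str, str]  # (section, row, seat)

@dataclass
class Moment:
    label: str                                              # e.g., "Polanco HR #2"
    photos: dict[Seat, str] = field(default_factory=dict)   # seat -> image URL

@dataclass
class Game:
    game_id: str
    moments: list[Moment] = field(default_factory=list)

def photos_for_seat(game: Game, section: str, row: str, seat: str) -> list[tuple[str, str]]:
    """Return (moment label, image URL) pairs for one seat across the whole game."""
    key: Seat = (section, row, seat)
    return [(m.label, m.photos[key]) for m in game.moments if key in m.photos]

# Example: one hypothetical moment, one fan's seat.
game = Game("ALDS-G5", [Moment("Polanco HR #2", {("211", "A", "7"): "https://example.com/img.jpg"})])
print(photos_for_seat(game, "211", "A", "7"))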

Momento installs 10 cameras in each sports venue, synced to take photos when a worker presses a button. For T-Mobile Park, Momento mapped all 47,000-plus seats to connect them with the correct images, and within minutes, fans can view photos. The Mariners still want to capture fan reactions even in losses or games without big plays, said JT Newton, the Mariners corporate partnership team’s manager of operations and development.

“Even if there maybe wasn’t a home run that day, that doesn’t mean that you still don’t want to relive being with your family at the ballpark,” Newton said.

Along with the fan experience, what do the Mariners get out of it? More information about you. As Momento put it in a 2024 news release, the crowd analytics help teams “better understand their fan base,” enabling them to “engage with their audience in unprecedented ways by pioneering personalized marketing campaigns tailored to individual fans through their unforgettable experiences.”

Reliving moments may be jarring for some fans who didn’t realize they were being recorded, particularly those in higher-up sections that don’t get the same camera time as the ones behind home plate. A fan can look up their seats, but in theory, so can a detective; a concerned friend trying to monitor someone’s beer and hot dog intake; or a suspicious ex who found a discarded ticket stub...

A Major League Baseball ticket’s terms of use agreement includes a paragraph giving MLB organizations, as well as some sponsors and other partners, unrestricted rights to the ticket holder’s image in any live or recorded broadcast or other media taken in connection with the event.

In simpler terms: Once you swipe your ticket, the Mariners can use your image however they want.

“In today’s world, fans are pretty aware that at a public space you could show up on a TV broadcast or on the jumbotron,” Fletcher said. “I think it’s just something that’s expected.”

Momento does honor opt-out requests if fans don’t want to have their images shown, Fletcher added. Users can submit requests on Momento’s website, and an employee will remove the seat from appearing. (...)

The company now works with 10 professional teams and earns money through team agreements, sponsorships and, for some events, physical products like framed photos, according to Fletcher, who credits Seattle’s teams, and their fandoms, with their growth.

by Paige Cornwell, Seattle Times |  Read more:
Image: Momento
[ed. Seriously invasive, and creepy.]

So What, Now What?

Back in April, I wrote a note called Crashing the Car of Pax Americana. The skinny of that note is that we’re experiencing a regime change across pretty much every policy dimension, not just trade policy and immigration/labor policy, but also monetary policy, fiscal policy, foreign policy, antitrust policy, internal security policy, public health policy … you name it … in the transition to what this administration calls its America First program. You may think that this transition is a good idea or (like me) you may think it’s a disastrous idea, but arguing about that isn’t my point here.

My point here is that the America First regime change IS, that the water in which we swim for every policy dimension has shifted not just in degree but in kind, and that every policy dimension has moved to a new set of equilibrium behaviors and a new set of rational expectations going forward.

Our foreign policy has shifted from an essentially unipolar world where the United States maintained global soft power and an unquestioned reserve currency to finance its standard of living in exchange for ‘free rider’ benefits for our allies, to a multipolar world where the United States has dramatically downsized its third world influence and seeks to extract economic rents from everyone, especially our allies.

Our fiscal policy has shifted from a Congressionally-led (or at least highly mediated) process of budgeting, appropriations and debt ceilings to a Presidentially-mandated process of executive orders, rescissions and directed investments, with $5 trillion in headroom for unfettered deficit spending.

Our monetary policy has shifted from determining bank regulation and interest rates by a quasi-independent central bank per its ‘mandates’ for price stability and full employment to a determination by the White House per its ‘strategic vision’ for managing the deficit and spurring home buying.

Again, I’m not arguing the merits or demerits here. I’m arguing that America First is a nonlinear break from the past, and more to the point, it’s a stable, self-sustaining break from the past. I’m arguing that you cannot unring any of these bells, because that’s what it means to have a regime change and a new equilibrium. I’m arguing that we need to stop pining for policy reversion to some halcyon days of yore (even if yore was just a few months ago) and start planning for the natural consequences of rational government, corporate and household responses to the new regime.

So what, now what?

There is an intense and innate human desire to ‘return to normal’ after a prolonged shock to the system. You see it in families hit by a long illness for a loved one. We all experienced that feeling coming out of Covid. Believe me, I get it. But there’s no going back to what we once considered ‘normal’ for our predictable patterns of interaction (that’s the definition of a regime, btw) with the US government. What we are experiencing today IS our new normal, not just for the next 3 years but for substantially longer as future American Presidents must maintain the structural executive power of the new policy regime even as they make alterations on the surface and at the margins. It’s like the introduction of mustard gas in WWI; once one side uses it, regardless of which side uses it, everyone must use it – at least until a new deterrence equilibrium is established – and it never goes away as a permanent feature of the battlefield.

There’s no going back to ‘normal’ with, for example, the US government’s equity stake in Intel, any more than there’s a going back to ‘normal’ with the Fed’s balance sheet and its purchase of hundreds of billions of dollars in mortgage-backed securities. Once you take an action like this you quickly find that it is impossible to unwind it without causing all sorts of new headaches and without (gulp!) reducing your institutional and bureaucratic power. And even if you do unwind this particular action, you’ve already proven that you are capable and willing to take this sort of action. To use a sports truism here, the first time a player takes himself out of a game or tournament is never the last time he’ll take himself out of a game or tournament, and everyone involved – coaches, teammates, bettors – will adjust their forward expectations accordingly.

Not only is there no going back to ‘normal’ with the US government’s equity stake in Intel – or the revenue cut from Nvidia or the domestic manufacturing demands of Apple or the ‘DEI settlements’ with major law firms or the ‘libel settlements’ with major media companies or the ‘antisemitism settlements’ with major universities – but the clear and obvious implication is that we have yet to settle on the new equilibrium position of direct public sector ownership and control of the private sector. Forget about a reversion. Hell, forget about a slowdown. For at least the next three years, there’s going to be an acceleration in demands for equity and money and rents of all sorts from anyone subject to US government regulation and taxation or anyone who has received US government support, no matter how long ago and no matter how indirectly. I mean, just in the past week, the White House has talked about imposing ‘intellectual property’ settlements on research universities for past research grants and requiring equity stakes and/or revenue cuts from defense contractors who sell weapons abroad. There’s going to be more of these demands for direct public sector ownership and control of the private sector, and not just more of these demands but MOAR of these demands, across every conceivable economic and social dimension.

There’s another word for direct public sector ownership and control of the private sector, of course, and that’s socialism. You could use other -isms here if you like, particularly the f-word to reflect the incestuous marriage between statism and corporatism, and on that note I’m just going to drop this here, as the kids would say.

But I’m going to use socialism because people lose their minds when you use the f-word. But regardless of which word you use, what it means to say that this is a regime change and a new equilibrium and there’s no return to ‘normal’ is that direct public sector ownership and control of the private sector is not a temporary side effect of America First on the way to some new capitalist dawn. Socialism IS the outcome.

Ditto there’s no return to ‘normal’ when it comes to tariffs. Protectionism and high tariff barriers are not a White House negotiation tactic on the way to a world of free trade. Protectionism IS the outcome.

Ditto there’s no return to ‘normal’ when it comes to monetary policy. The overt politicization of the Fed and its subordination to White House demands for artificially low interest rates (aka ‘financial repression’) are not regrettable but necessary steps on the way towards reducing the state’s control over the economy. Financial repression IS the outcome.

I mean, you can continue to argue that this isn’t really [insert -ism here] if you want. You can continue to argue that ‘it would be even worse’ with the other side if you want. You can continue to argue about who ‘started it’ if you want. But I’m done with that.

I am delighted to stipulate to my red-oriented readers that the Democrats ‘started it’ when it comes to lawfare, with the preposterous and obviously politically motivated New York state prosecution of Donald Trump. Ditto the politically motivated 0.5% interest rate cut before the election last year. Ditto the personal and corrupt use of Presidential pardons by Joe Biden. Ditto the politicization of the CDC and the political pressure to fast-track mRNA vaccines and the promotion of Fauci’s ‘noble lies’ and the lockdowns to ‘flatten the curve’ by … oh wait, that was Trump … but sure, I am more than happy to stipulate that Biden did exactly the same thing.

And I’m not asking anyone to give up their righteous anger at whoever and whatever they are righteously angry about, least of all myself. Personally speaking, I will never not be angry at an American President who sends badge-less, warrant-less, armed and masked agents to grab people off the street and detain them indefinitely for the probable cause of having brown skin. Or an American government that has established its very own Gulag Archipelago – not in Siberia but in El Salvador and Uganda and the Everglades and the Chihuahuan Desert – where cruelty is the obvious point and punishment exists for punishment’s sake. To a lesser but still very real extent, I will never not be angry at the venally corrupt, vacant, egomaniacal, figurehead former President and the courtiers inside and outside the White House who propped him up for years, projecting nothing but weakness and achieving nothing but the accelerated collapse of Pax Americana.

But I am asking all of us – myself included – to set aside the anger long enough to look clearly at what IS, not what either our anger would have us project or what our innate desire for a return to normalcy would have us imagine.

So what, now what?

I’m asking this because I think that I see what IS when it comes to the current gerrymandering efforts in Texas and in California. I think I see what the new set of equilibrium behaviors and rational expectations are, and it scares me even more than the socialism, protectionism and financial repression that are similarly part of today’s IS. I’d like to ask readers to figure out where I’m wrong and how we can avoid what seems to me to be the most likely outcome of this game.

As I see it, the naked off-cycle gerrymandering of Congressional districts for direct partisan benefit happening today isn’t just a good analogy for the mustard gas example of how an equilibrium shifts, but actually IS mustard gas in our modern political trench warfare.


At the request and encouragement of the White House, Texas is creating new Republican districts without even a fig leaf of a non-partisan rationale. They are explicitly creating new Republican-majority districts to marginally disenfranchise Democratic voters because they can. To which California, which has in the past created Democrat-majority districts to marginally disenfranchise Republican voters (but with a fig leaf of a non-partisan organization to manage the process), will respond by abandoning the fig leaf and creating new Democrat-majority districts because they can. To which other red states like Florida will respond by creating new Republican-majority districts. To which other blue states like New York will respond by abandoning their fig leaf of an independent electoral districting commission and creating … well, you get the idea. It’s mustard gas. Once one side uses it, regardless of which side uses it, everyone must use it.

My strong belief is that if the Republicans come anywhere close to losing the House because of Democratic Representatives from newly gerrymandered districts, Trump will declare those new Democratic members-elect in the 2026 midterms to be ‘illegitimate’ because of an ‘illegal’ state process (like abandoning the fig leaf of an independent electoral districting commission). And while the federal government has next to no Constitutional role in certifying Congressional elections or members-elect, I think there’s a pretty straightforward way that Trump could orchestrate a contested seating of the 120th Congress and force it to a Supreme Court decision.

The Constitution requires members-elect to take an oath of allegiance to the Constitution before they can assume office, but is silent on how or by whom that oath is administered. Federal law says that oath can only be administered to members-elect by the (new) Speaker of the House, but in the scenario I’m imagining both the Democratic caucus and the Republican caucus would take steps to elect a Speaker of the House for the 120th Congress, and neither caucus would recognize the other Speaker-elect as legitimate. We know who the White House would recognize! I know this scenario sounds crazy, but this is a pretty well-known rabbit hole that came up in the 118th Congress when the Republican caucus took several days to elect a Speaker and no one had actually taken their oath, and it’s certainly no less crazy than Trump’s Jan 6th premise that Mike Pence as presiding officer of the Electoral College could reject entire slates of electors on suspicion of ‘illegitimacy’.


Yes, the Supreme Court would eventually weigh in here, and I think they’d rule that a) the federal law requiring an oath administered by the Speaker of the House does not apply if it runs afoul of the Constitution, which in addition to requiring an oath also sets a date certain for the new Congress to take office, b) if the law doesn’t apply, then anyone – like a local notary, even – can administer the oath, and c) since the federal government has no say – none! – over state certification of Congressional members-elect, the Democrats in this scenario would have the legitimate claim to a majority caucus and control of the House. But honestly, who knows how they’d rule!

More to the point, the Supreme Court can’t rule on this until it actually happens, so there would be a period of time in January 2027 where we would not have a 120th Congress at all. More crucially still, no matter how the Supreme Court ruled, broad swaths of the American electorate and one of the two most powerful executives in the country (the US President and the governor of California) would have very publicly rejected the legitimacy of the 120th Congress. At a minimum every state would take immediate action to disenfranchise its minority party voters to the nth possible degree in advance of the 2028 election, but I don’t think there’s any possible way that we’d even get to the 2028 election. Federalized National Guard units would already be deployed into major blue state cities as part of the federal crime bill that’s going to be enacted this fall, and do you think Trump would hesitate for one second to invoke the Insurrection Act? I don’t.

by Ben Hunt, Epsilon Theory |  Read more:
Images: uncredited/Annie Hall
[ed. See also: North Carolina joins growing US battle over redrawing electoral maps (BBC); and, Narrative and Metaverse, Pt. 4 - Carrying the Fire (last in the series).]

Monday, October 13, 2025

Monsters From the Deep



I get that the news cycle is packed right now, but I just heard from a colleague at the Smithsonian that this is fully a GIANT SQUID BEING EATEN BY A SPERM WHALE and it’s possibly the first ever confirmed video according to a friend at NOAA ~ Rebecca R. Helm
***
"From the darkness of the deep, the mother rose slowly, her great body pulsing with effort, while the calf clung close to her side. The faint shimmer of the surface light caught on something twisting in her jaws—long pale arms, still trembling, a giant calamari dragged from the black abyss.

The calf pressed its head against the mother’s flank, curious, its small eye turning toward the strange, sprawling catch. Around them, the other whales gathered, a circle of giants, each click and creak of their voices carrying through the water like an ancient council.

The mother released a cloud of ink the squid had left behind, now dissipating in ghostly ribbons. She let the prey dangle for a moment before tearing a piece free with a practiced shake of her head. The calf tried to imitate, nudging the slack arms of the squid, but only managed to tangle its mouth in the trailing suckers. The adults rumbled with what could only be described as laughter.

High above, a shaft of sunlight pierced the water, illuminating the drifting arms of the squid like banners in the deep. The feast had begun, but it was also a lesson—the calf’s first glimpse of the abyss’s hidden monsters, and of the power its mother carried up from the dark world below."

via: here and here

Government For Half of America

Reuters reported on Thursday that White House deputy chief of staff Stephen Miller is playing a central role in the administration’s crackdown on opponents. The administration is threatening to target funding behind what the administration calls “domestic terror networks,” those it claims embrace “anti-Americanism, anti-capitalism, and anti-Christianity.”

House speaker Mike Johnson (R-LA) got into the act of attacking the administration’s opponents today, claiming that the Democratic senators holding out for the extension of the premium tax credits so that healthcare premiums don’t skyrocket—a position supported by 78% of Americans—are taking that position only because they’re afraid of anti-Trumpers. Johnson called the October 18 No Kings rally a “hate America rally” of “[t]he antifa crowd, the pro-Hamas crowd, and the Marxists…. It is an outrageous gathering for outrageous purposes,” he said.

Majority whip Tom Emmer (R-MN) joined in, calling those who are taking a stand against Trump’s destruction of the nation’s constitutional checks and balances “the terrorist wing” of the Democratic Party, saying it “is set to hold…a hate America rally in [Washington, D.C.] next week.” Legal scholar David Noll noted that it’s “interesting that if you say the [C]onstitution creates a separation of powers systems in which there are no kings, they think you hate [A]merica.”

Josh Dawsey reported in the Wall Street Journal today that administrative officials joke about ruling Congress with an “iron fist” and that Trump ally Steve Bannon has compared Congress to Russia’s largely ceremonial Duma.

Today House speaker Johnson announced he would cancel another week’s session, making it four weeks that he has kept House members from their jobs. Johnson first sent the members home on September 19. Staying out of session means not working on the overdue budget or hammering out the necessary appropriations bills. It means not working on a way to extend the healthcare premium tax credits that Democrats are demanding.

It also means not swearing in Representative Adelita Grijalva (D-AZ), who won election on September 23 and who will provide the 218th vote on a discharge petition to trigger a vote on a measure requiring the release of the files the government has on the investigation of convicted sex offender Jeffrey Epstein.

The administration is trying to ram its will through Congress. Republicans have tried to pin the blame for the shutdown on Democrats, for example by sending automatic out-of-office email replies that blame Democrats for the shutdown, in violation of the Hatch Act, which prohibits using government resources for partisan purposes. As the shutdown drags on and most Americans blame Republicans, their efforts to shift the blame are ratcheting up. Now the administration has posted a video at airport Transportation Security Administration (TSA) lines featuring Homeland Security secretary Kristi Noem saying that operations are impacted because “Democrats in Congress refuse to fund the federal government.”

Immigration lawyer Aaron Reichlin-Melnick commented: “Can you think of a single movie in which there is a video from the government denouncing its political opponents playing on a loop in public spaces in which that government was the good guy?”

Natalie Allison and Riley Beggin of the Washington Post reported yesterday that members of the administration have not engaged with Democrats at all to negotiate an end to the shutdown.

by Heather Cox Richardson, Letters From An American |  Read more:
Image: US Government/Dept. of the Treasury
[ed. When you have a president, his administration, and half of Congress weaponizing our government against half the voters in this country - I'd say it's well past time to do something. A government run by these people should be shut down. And don't be deluded over who's to blame. It's not the "radical left" (now on official websites - see below). It's "un-Americans" like these who are doing everything they can to erase the united in United States.]

Sunday, October 12, 2025

18 Well-Read People on How They Find the Time For Books

Would you really be so surprised to learn that we are reading less and less every year? Last month, a new study revealed that only 16 percent of Americans read for pleasure, a 40 percent drop from peak rates just over a decade ago. (Terrifyingly, people are reading to their children less and less, too.) But in my corner of the internet, books don’t appear to have lost their status. Celebrities pose with them on boats and beaches and select them for their clubs, and Bookstagrammers post towering stacks of their latest “hauls.” And though I surround myself with readers, it can easily feel harder and harder to make the time to spend with a book — and easier to buckle and give in to distractions.

So I asked an assortment of well-read people — critics, authors, Substackers — to tell me how, exactly, they find the time for books. In doing so, they described their daily routines, their home-furniture setups, and their children’s extracurriculars. One thing that came up over and over: the relentless, almost inescapable attention-zapping evil of the phone. If technology is waging a war on our attention spans, these soldiers are well-prepared for the fight.

Molly Young, book critic and magazine writer

I treat my phone like poison. I leave the house as much as possible without it. After I had a kid, people were like, “What if there’s an emergency?” Every fucking person on Earth has a phone. I’ll ask the person sitting eight inches away.

Once you are released from the grip of your phone, you have like eight extra hours in the day and reading becomes way easier. It feels like a treat and not like something that you have to strive to do. I always have a book in my bag so that during all those interstitial waiting periods — e.g., in line at checkout — I’m reading a paragraph instead of doing nothing. I only read paper books. I don’t listen to audiobooks just because I can’t have things in my ears all the time because then I don’t have an internal monologue, which is really scary.

I keep a list of books that I read every year, probably between 60 and 130. Which doesn’t feel like that many, but I’m a slow reader, so that’s my excuse.

by Jasmine Vojdani, The Cut | Read more:
Image: AMC

Saturday, October 11, 2025

Bela de Kristo (Hungarian, 1920–2006) - Four hands 

Burkhard Neie - Modern Romance

Gen Z's College Radio Revival

It’s been a weird summer for the music industry.

The fewest new hits in U.S. history. No song of the summer. An AI artist just signed a $3M record deal. The biggest band on the charts? HUNTR/X, a fictional K-pop girl group from a Netflix movie. The vibes are off.

In September, a bombshell report from MIDiA Research crystallized the mood, arguing that “music discovery is at a generational crossroads”:
“Music discovery is traditionally associated with youth, but today’s 16-24-year-olds are less likely than 25-34-year-olds to have discovered an artist they love in the last year.”
Read that again: Younger consumers, typically the drivers of cultural trends, are less likely to discover new artists than 25-34-year-olds.

And even when they do discover new artists, they are less likely to stream those artists’ music, according to the report.

If you stopped reading here, you might conclude young people just don’t care about music anymore.

However, one unexpected source of music discovery is quietly booming among Gen Z listeners: college radio.

College radio killed the TikTok star

I spoke to seven student general managers and surveyed 80+ DJs at stations across America: ACRN (Ohio University), WCBN (University of Michigan), WEGL (Auburn University), WHRW (Binghamton University), WRFL (University of Kentucky), WVBR (Cornell University), and WZBC (Boston College).

They told me student interest in college radio has dramatically increased in recent years. Stations that once struggled to fill airtime are now turning people away, shortening shows, alternating time slots, and running training programs just to keep up with the demand from aspiring student DJs.

For decades, college radio championed underground artists before they hit the mainstream. Against all odds—COVID shutdowns, FCC regulations, and the long decline of FM radio—college radio is thriving again. (...)

A wave of Gen Z demand

Ten years ago, WCBN (Michigan) was struggling to fill three-hour programming blocks, says GM Anja Sheppard. Today, it’s the “fullest schedule we’ve had in recent memory,” with shows reduced to one hour due to “such demand from students to be on air.”

At WRFL (Kentucky), “we’ve had some of the most exponential growth this station has seen in its 37 year history,” says GM Aidan Greenwell. “We’ve gotten to the point where we simply don’t have enough time to allow everybody on the show schedule,” with 350 signups at the student interest fair this year, 100 shows on the schedule and around 120 people currently staffing the station. (...)

Demand for on-air slots is outpacing “hours in the day” at WEGL (Auburn), GM Rae Nawrocki says. The station has grown from roughly 30 members four years ago to 120 students and 60 on-air shows today.

Some stations have so many aspiring student DJs that they run internship and apprenticeship programs for those waiting for their chance to go on-air: WHRW (Binghamton) has 150–200 active DJs and another 80 apprentices, and WZBC (Boston College) counts 70 interns for its online stream in addition to 90 FM DJs.

What’s Driving the College Radio Renaissance?

1. Algorithm Fatigue

Students consistently described radio as an authentic, community-driven refuge from the passive, isolated, algorithm-driven digital experiences that have defined their adolescence. “You can’t scroll on reels and run a radio station at the same time,” says Greenwell (WRFL). “You have to be in the present.”

In our survey of 80+ DJs, students under 25 years old named “friends/word of mouth” as their favorite way to discover music (69%), with TikTok (21%), YouTube (10%), and other social media (16%) ranked relatively low.

Asked “Who is your favorite artist you discovered recently, and how did you discover them?”, DJs gave open-ended responses split almost evenly between friends/word of mouth (27%) and algorithmic/streaming discovery (26%), with smaller shares citing live shows, radio, online communities (Bandcamp, Reddit, RateYourMusic), or physical media.
  • “I’m 21. I grew up in the age of algorithms. The way music is right now scares me because of the rise of AI. Not even AI-made music (I hate it) but even just ‘Daily Mix, 1 and 2 and 3 and 4 and 5.’ It’s not made by someone. It’s made by an algorithm. I wish more of that stuff was person-curated.”—Mari McLaughlin, WHRW (Binghamton)
  • “What attracts a lot of people to college radio is the idea of putting somebody on. Showing them a new song they haven’t seen before, outside of the algorithmic nature of streaming.”—Aidan Greenwell, WRFL (Kentucky)
  • “I’ve started learning a lot more about music from other people’s recommendations than I ever had before. These experiences are shaping me more than algorithms or Spotify.”—Anna Loy, WVBR (Cornell)
  • “Diehard music lovers are shifting away from Spotify. The trend I am seeing is people want ownership and community instead of this vague green app.”—Rae Nawrocki, WEGL (Auburn)
2. Analog Nostalgia

The resurgence of interest in physical media is a significant driver of Gen Z’s attraction to college radio.

Millennials embraced technology for its convenience and accessibility, which reduced the friction in media consumption. Gen Z, in response, is seeking out experiences that are more tangible, personal and inconvenient.

This manifests in a return to high-friction analog media like vinyl, flip phones, film cameras and radio. (...)

The physical libraries at many stations, with massive collections of vinyl and CDs, offer a tangible connection to music history that can’t be replicated online:
  • “Our music directors have been writing comments on the records and CDs since the seventies. It’s funny to see how opinions change over time. It’s like an analog Internet comment section.”—Marcus Rothera, WZBC (Boston College)
  • “WHRW has a massive physical music library. I’m pretty sure it’s one of the biggest on the east coast. We have somewhere in the ballpark of 8,000 CDs and vinyls in there.”—Mari McLaughlin, WHRW (Binghamton) 
3. Community, Creativity & Belonging

College radio stations serve as vital “third spaces” where students can find a community of like-minded people outside of classes and social media, said several GMs. Community was cited as a top reason for joining college radio (79%) by DJs under age 25 in the survey (second only to “creative outlet” at 94%). (...)

A Hopeful Counter-Model

Skeptics might point out that college radio audiences are small, but the real story isn’t who’s listening—it’s that so many young people want to DJ, dig into music history, share discoveries, and build community around music.

The revival of college radio isn’t a signal that FM is back; it’s proof that Gen Z still cares deeply about music, discovery, and culture.

In a moment when the wider industry is betting on algorithms, AI, and fleeting TikTok trends, these college radio DJs remind us that young people aren’t bored of music—they’re bored of the shallow, virality-obsessed way music is marketed to them.

by Emily White, emwhitenoise |  Read more:
Image: DJ Maya, WCBN (University of Michigan) Photo credit: Olivia Glinski (2025)
[ed. I wanted to be a jazz DJ in college but stumbled over FCC licensing requirements, and (in retrospect), how little I actually knew about jazz at the time... haha.]

Mask of la Roche-Cotard

Also known as the “Mousterian Protofigurine,” this is a purported artifact dated to around 75,000 years ago, in the Mousterian period. It was found in 1975 in the entrance of a cave named La Roche-Cotard, in the commune of Langeais (Indre-et-Loire), on the banks of the river Loire.

The artifact, possibly created by Neanderthal humans, is a piece of flat flint that has been shaped in a way that seems to resemble the upper part of a face. A piece of bone pushed through a hole in the stone has been interpreted as a representation of eyes.

Paul Bahn has suggested this “mask” is “highly inconvenient”, as “It makes a nonsense of the view that clueless Neanderthals could only copy their cultural superiors the Cro-Magnon”.

Though this may represent an example of artistic expression in Neanderthal humans, some archaeologists question whether the artifact represents a face, and some suggest that it may be practical rather than artistic.

In 2023, the oldest known Neanderthal engravings were found in the La Roche-Cotard cave; they have been dated to more than 57,000 years ago.

The Life and Death of the American Foodie

When food culture became pop culture, a new national persona was born. We regret to inform you, it’s probably you.

“When did you become such an adventurous eater?” my mom often asks me, after I’ve squealed about some meal involving jamón ibérico or numbing spices. The answer is, I don’t know, but I can think of moments throughout my life where food erupted as more than a mere meal: My cousin and his Ivy League rowing team hand-making pumpkin ravioli for me at Thanksgiving. Going to the pre-Amazon Whole Foods and giddily deciding to buy bison bacon for breakfast sandwiches assembled in a dorm kitchen. Eating paneer for the first time in India. Slurping a raw oyster in New Orleans.

What made me even want to try a raw oyster in 2004, despite everything about an oyster telling me NO, was an entire culture emerging promising me I’d be better for it. Food, I was beginning to understand from TV and magazines and whatever blogs existed then, was important. It could be an expression of culture or creativity or cachet, folk art or surrealism or science, but it was something to pay attention to. Mostly, I gleaned that to reject foodieism was to give up on a new and powerful form of social currency. I would, then, become a foodie.

To be a foodie in the mid-aughts meant it wasn’t enough to enjoy French wines and Michelin-starred restaurants. The pursuit of the “best” food, with the broadest definition possible, became a defining trait: a pastry deserving of a two-hour wait, an international trip worth taking just for a bowl of noodles. Knowing the name of a restaurant’s chef was good, but knowing the last four places he’d worked at was better — like knowing the specs of Prince’s guitars. This knowledge was meant to be shared. Foodies traded in Yelp reviews and Chowhound posts, offering tips on the most authentic tortillas and treatises on ramps. Ultimately, we foodies were fans, gleefully devoted to our subculture.

Which inevitably leads to some problems, when, say, the celebrities the subculture has put on a pedestal are revealed to be less-than-honorable actors, or when values like authenticity and craft are inevitably challenged. What it’s historically meant to be a foodie, a fan, has shifted and cracked and been reborn.

And ultimately, it has died. Or at least the term has. To be called a “foodie” now is the equivalent of being hit with an “Okay, boomer.” But while the slang may have changed, the ideals the foodie embodied have been absorbed into all aspects of American culture. There may be different words now, or no words at all, but the story of American food over the past 20 years is one of a speedrun of cultural importance. At this point, who isn’t a foodie? (...)
***
How did we get to chefs-holding-squeeze-bottles as entertainment? The 1984 Cable Communications Policy Act deregulated the industry, and by 1992, more than 60 percent of American households had a cable subscription. Food Network launched in 1993, and compared to Julia Child or Joyce Chen drawing adoring viewers on public broadcasting programs, the channel was all killer, no filler, with shows for every mood. By the early 2000s, you could geek out with Alton Brown on Good Eats, experience Italian sensuality with Molto Mario or Everyday Italian, fantasize about a richer life with Barefoot Contessa, or have fun in your busy suburban kitchen with 30 Minute Meals. Anthony Bourdain’s A Cook’s Tour gave viewers an initial taste of his particular brand of smart-alecky wonder, and there were even competition shows, like the Japanese import Iron Chef.

The premiere of 2005’s The Next Food Network Star, which later gave us Guy Fieri, baron of the big bite, was the network’s first admission that we were ready to think of food shows in terms of entertainment, not just instruction and education. But Food Network was still a food network. The mid-aughts brought the revelation that food programming didn’t have to live just there, but could be popular primetime television — when that was an actual time and not just a saying.

Then came Top Chef, inspired by the success of Bravo’s other reality competition series, Project Runway. There is no overstating Top Chef’s lasting influence on food entertainment, but off the bat it did one thing that further cemented foodieism as a bona fide subculture: Its air of professionalism gave people a vocabulary. “The real pushback from the network was but the viewers can’t taste the food,” says Lauren Zalaznick, president of Bravo at the time. But just like the experts on Project Runway could explain good draping to someone who didn’t know how to sew, Top Chef “committed to telling the story of the food in such a way that it would become attainable no matter where you were,” she says.

This gave viewers a shared language to speak about food in their own lives. Now, people who would never taste these dishes had a visual and linguistic reference for molecular gastronomy, and could speculate about Marcel Vigneron’s foams. If you didn’t know what a scallop was, you learned, as Top Chef was awash in them. Yes, you could hear Tom Colicchio critique a classic beurre blanc, but also poke, al pastor, and laksa, and now that language was yours too. And you could hear chefs speak about their own influences and inspirations, learning why exactly they thought to pair watermelon and gnocchi.

The food scene then “was more bifurcated,” says Evan Kleiman, chef and longtime host of KCRW’s Good Food. “There were super-high-end restaurants that were expensive, maybe exclusive, and for the most part represented European cuisines. And then what was called ‘ethnic food’ was often relegated to casual, family-run kind of spots.” Top Chef may have been entertainment for the upwardly mobile foodie, but in 2005, Bourdain’s No Reservations premiered on the Travel Channel, similarly emphasizing storytelling and narrative. In his hands, the best meals often didn’t even require a plate. His was a romantic appreciation of the authentic, the hole-in-the-wall, the kind of stuff that would never be served in a dining room. It set off an entire generation of (often less respectful, less considered) foodie adventurism.

“No Reservations is what got me interested in the culture of eating,” says Elazar Sontag, currently the restaurant editor at Bon Appétit. Because it was about food as culture, not as profession. But there was programming for it all. Also in 2005, Hell’s Kitchen premiered on Fox, with an amped-up recreation of a dinner service in each night’s challenge. “Hell’s Kitchen’s high-octane, insane, intense environment of a restaurant kitchen is actually what made me think, when I was maybe 12 or 13, that I want to work in restaurants,” says Sontag.

All these shows were first and foremost about gathering knowledge, whether it was what, indeed, a gastrique was, or the history of boat noodles in Thailand. It didn’t matter if you’d ever been there. The point was that you knew. “Food was becoming a different kind of cultural currency,” says Sontag. “I didn’t clock that shift happening at the time, but it’s very much continued.”

Language is meant to be spoken; knowledge is meant to be shared. Now that everyone knew there were multiple styles of ramen, there was no better place to flex about it than with a new tool: the social internet. Online, “talking about restaurants and going to restaurants became something that people could have a shared identity about,” says Rosner. “There was this perfect storm of a national explosion of gastronomic vocabulary and a platform on which everybody could show off how much they knew, learn from each other, and engage in this discovery together.” Your opinion about your corner bagel shop suddenly had a much wider relevance.

by Jaya Saxena, Eater | Read more:
Image: Julia Dufossé