Saturday, January 17, 2026

What Are We Thinking?

Who are you? What’s going on deep inside yourself? How do you understand your own mind? The ancient sages had big debates about this, and now modern neuroscience is helping us sort it all out.

When my amateur fascination with neuroscience began, roughly two decades ago, the scientists seemed to spend a lot of time trying to figure out where in the brain different functions were happening. That led to a lot of simplistic shorthand in the popular conversation: Emotion is in the amygdala. Motivation is in the nucleus accumbens. Back in those days management consultants could make a good living by giving presentations with slides of brain scans while uttering sentences like: “You can see that the parietal lobe is all lit up. This proves that …”

But over the past several years the field of neuroscience seems to have moved away from this modular approach (each brain region has its own job). Researchers are more likely to believe that the brain is a network of interconnected regions. They are more likely to talk about vast and dynamic webs of neurons whose connections link disparate parts of the brain.

Luiz Pessoa, who runs the Maryland Neuroimaging Center, recently offered a metaphor that helps a layman like me understand what’s going on. In an essay for Aeon, he asks us to imagine a flock of starlings swooping and swirling in the sky. No single starling organizes this ballet, yet out of the local interactions between all the starlings a coordinated dance emerges.

As the brain is trying to navigate through the complex situations of the day, it is creating what Pessoa calls “neuronal ensembles distributed across multiple brain regions,” which, like a murmuration of starlings, “forms a single pattern from the collective behavior.”

This makes sense to me. Life is really complicated. To deal with a million unexpected circumstances, you wouldn’t want a brain filled with just a few regions doing just a few jobs. You’d want the brain to be able to improvise a vast number of networked ensembles that would dynamically affiliate and thus coordinate sensible responses.

Pessoa’s metaphor inspired me to try a little thought experiment. Imagine that you are a teacher and you look out at your classroom and see each of the students in your class as a flock of starlings. Their brains are not empty vats to be filled with information. Their brains are not computers that impersonally churn through calculations. Rather, each student is an ever-changing swirl of thoughts, fears, feelings, desires, impulses, memories and body sensations that interact to form a single mind that guides the student through the events of her day.

If you saw people this way, I think the first thing you’d notice is how much individual variation there is. If you see kids’ minds as a vat to be filled up, or if you conceive of the brain as a kind of computer, then every vat and every computer is kind of the same. But if kids are swirls, then every swirl has its own distinct set of motions — its own personality, its unique dance.

And yet our educational system is standardized. As Todd Rose writes in his excellent book “The End of Average,” when we grade or sort people, we measure them according to a few criteria, and then we rank them along a single continuum. Some people are A players, some are B and some are D. The message is: Be just like everybody else but better.

But if you see people as flocks of starlings, you’d see just how dehumanizing such sorting systems are. If you wanted to coach, teach or treat a flock of starlings, you wouldn’t be content with factory-style, one-size-fits-all approaches. You wouldn’t want to rank people along a single scale. You’d want personalized education, personalized medicine, personalized management techniques.

The second thing you’d notice, I think, is that change is the human constant. In our culture we have a tendency to essentialize people, to pick a few labels or traits that supposedly capture who they are. But flocks are always in motion. A person who is extroverted at home might be introverted at synagogue. A stock trader who is overly aggressive in a bull market may be overly cautious in a bear market.

Behavior is more about if-then signatures. If I’m confronted with this context I tend to respond with this mental swirl and that action, but if I’m in a different context I’ll respond with a very different swirl. If we saw people as starlings, I think we’d pay more attention to how good each person is at changing and adapting and less to supposedly permanent traits.

Third, I think you’d notice that the categories we use to understand people get in the way of actually understanding them. We divide mental activity into categories like perception, reason, emotion, desire, action. This fits well with the modular view of the brain. Vision takes place in the back of the head; reason takes place in the front.

But if you see people as a set of swirls, you are confronted with the fact that all these different mental activities are intensely interconnected as part of a single holistic process. The emotions you feel influence what you see just as much as what you see influences what you feel. The divisions between these mental categories begin to dissolve away. As Lisa Feldman Barrett, a neuroscientist, wrote in her book “How Emotions Are Made,” “emotions are not, in principle, distinct from cognitions and perceptions.”

Finally, if we embrace the flock of starlings metaphor, then we can dump one of the more unfortunate metaphors we in the West have relied on to understand the mind. This metaphor, going back to ancient Greece, holds that reason is a wise charioteer and the passions — emotions and desires — are the stallions who pull the chariot. In this fable, reason is the calm, sophisticated smart guy, while the emotions and desires are dumb, primitive beasts. People lead good rational lives when they use reason to suppress and control the passions.

This chariot metaphor rests on an overly positive estimation of the power of pure reason and an overly negative view of the passions. The fact is that your emotions are not primitive and dumb. Positive emotions encourage risk-taking. Awe encourages you to broaden your focus beyond your narrow self. Sadness encourages you to change your way of thinking.

Your desires are not dumb, either. They tell you what is worth valuing and where you should go. Your body also contains its own form of wisdom. Cortisol increases vigilance. Adrenaline prepares the body for quick and decisive action.

As Annie Murphy Paul writes in her book “The Extended Mind,” “Recent research suggests a rather astonishing possibility: The body can be more rational than the brain.”

If we wanted to step back and look at the whole mind, we’d say — to the extent that we can even separate these faculties — that reason, emotions and desires are just different resources people draw upon to help make judgments about what to do next. Each faculty has its own strengths and weaknesses, and life goes best when a person coordinates all the faculties in one graceful swirl.

Your job as a conscious person is not to be a dominating, rationalist charioteer. It’s to read the judgments that your emotions, desires and body are sending you, act on them when their judgments are appropriate and redirect them when they are getting carried away.

by David Brooks, NY Times |  Read more:
Image: Sarah Mason/Getty Images/Aeon

The Great Replacement

“What if you knew her and/ Found her dead on the ground/ How can you run when you know” — Crosby, Stills, Nash & Young

I am neither a forensic expert nor a jury member, but it sure looks to me like an ICE agent shot and killed a woman who wasn’t threatening his life. We have video of the killing of Renee Good in Minneapolis on January 7th, and the Washington Post has a detailed blow-by-blow analysis of the video: [...]

The Vice President’s claim that the shots were fired from the front of the car is pretty clearly false. He also repeatedly talked about ICE agents “going door to door” to deport illegal immigrants — pretty clearly ignoring the Constitution’s Fourth Amendment, which prohibits “unreasonable searches and seizures”.

Vance’s reception on social media — even from the kind of “tech right” types that are usually his fans — was largely negative.
 
Two days is probably far too early for the killing of Good to have shifted national opinion radically. The negative drift in views toward ICE is probably due to their consistent record of brutality, aggression, dubious legality, and unprofessionalism in Trump’s second term.

Here’s a video of ICE agents in Arkansas beating up an unarmed U.S. citizen. Here’s a video of ICE agents arresting two U.S. citizens in a Target. Here’s a story about a similar arrest. Here’s a video of an ICE agent brandishing a gun in the face of a protester. Here’s the story of ICE agents arresting a pastor who complained about an arrest he saw. Here’s a video of ICE agents arresting an American citizen and punching him repeatedly. Here’s a video of ICE agents threatening a bystander who complained about their reckless driving. Here’s a video of ICE agents arresting a man for yelling at them from his own front porch. Here’s a video of ICE agents making a particularly brutal arrest while pointing their weapons at unarmed civilians nearby. Here’s a story about another ICE killing, this one in Maryland, under dubious circumstances. Here’s a video of ICE agents savagely beating and arresting a legal immigrant. Here’s a video of ICE agents storming a private home without a warrant. Here’s a video of ICE agents pulling a disabled woman out of a car when she’s just trying to get to the doctor.

These are all things I noticed on X within just the last two days. There has been a pretty constant stream of these for months. Here’s a roundup of some others, by Jeremiah Johnson:
For the past year, ICE has been involved in a series of escalating incidents that rarely result in repercussions for anyone involved. ICE agents have recklessly caused traffic accidents and then, in one incident, arrested the person whose car they hit. They’ve tear-gassed a veteran, arrested him, and denied him access to medical care and an attorney. They have attacked protesters merely for filming them in public. They’ve pepper-sprayed a fleeing onlooker in the eyes from a foot away. They’ve pointed guns at a 6-year-old. They’ve knelt on top of a pregnant woman while they arrested her. They have arrested another pregnant woman, then kept her separated from her newborn while she languished in custody. They have repeatedly arrested American citizens, and they’ve even reportedly deported a citizen, directly contradicting court orders.
These are anecdotes, but there have also been careful, systematic reports about ICE arrests and mistreatment of U.S. citizens and poor conditions in ICE detention centers.

The Wall Street Journal also reviewed some other videos and other records of ICE shootings, and found a similar pattern to the Renee Good killing:
The Wall Street Journal has identified 13 instances of agents firing at or into civilian vehicles since July, leaving at least eight people shot with two confirmed dead…The Journal reviewed public records—court documents, agency press releases and gun-violence databases—of vehicle shootings involving immigration agents, though video is only publicly available for four of them…The Minneapolis shooting shares characteristics with others the Journal reviewed: Agents box in a vehicle, try to remove an individual, block attempts to flee, then fire.
Instead of causing ICE agents to pause in consternation, the killing of Renee Good appears to have made many even more aggressive. Here’s a video of an ICE agent in Minnesota telling a protester “Have y’all not learned from the past coupla days?”. Here’s a video of an ICE agent kicking over candles at a memorial for Renee Good.

Perhaps this is unsurprising, given the ultra-low standards for recruitment and training of ICE agents under Trump:
A deadly shooting in Minneapolis at the hands of a federal immigration officer comes weeks after a bombshell report on President Donald Trump’s desperate drive to rush 10,000 deportation officers onto the payroll by the end of 2025.

The explosive Daily Mail report found that the administration's $50,000 signing bonus attracted droves of unqualified recruits — high school grads who can "barely read or write," overweight candidates with doctor's notes saying they're unfit, and even applicants with pending criminal charges…[O]ne Department of Homeland Security official [said]: "We have people failing open-book tests and we have folks that can barely read or write English."
Jeremiah Johnson has more:
Reporting shows that ICE is filled with substandard agents. Its aggressive push to hire more agents uses charged rhetoric that appeals to far-right groups, but the agency has run into problems with recruits unable to pass background checks or meet minimum standards for academic background, personal fitness, or drug usage. One career ICE agent called new recruits “pathetic,” according to The Atlantic, and a current Department of Homeland Security official told NBC News that “There is absolutely concern that some people are slipping through the cracks,” and being inadvertently hired.
It’s worth noting, though, that Jonathan Ross himself is well-trained, with plenty of experience in law enforcement and military combat operations. So it’s not always a matter of poor training.

A number of Republican politicians have defended ICE’s actions with rhetoric that sounds downright authoritarian. Texas Representative Wesley Hunt said: “The bottom line is this: when a federal officer gives you instructions, you abide by them and then you get to keep your life.” Florida Representative Randy Fine said: “If you get in the way of the government repelling a foreign invasion, you’re going to end up just like that lady did.”

Is this America now? A country where unaccountable and poorly trained government agents go door to door, arresting and beating people on pure suspicion, and shooting people who don’t obey their every order or who try to get away? “When a federal officer gives you instructions, you abide by them and then you get to keep your life” is a perfect description of an authoritarian police state. None of this is Constitutional, every bit of it is deeply antithetical to the American values we grew up taking for granted.

This tweet really seems to sum it up:


Why is this happening? Part of it is because of the mistakes of the Biden administration. For the first three years of his presidency, Biden allowed a massive, disorderly flood of border-hopping asylum seekers and quasi-legal migrants of all types to pour into the country, and as a result, Americans got really, really mad. That made immigration into a major issue in the 2024 election, helped Trump get elected, and provided political cover for a dramatic expansion of deportations. Now, probably thanks to ICE’s brutality and the administration’s lawlessness, support for immigrants and disapproval of Trump’s immigration policies are rising again. But the administration still has what it considers a mandate to act with impunity.

The deeper reason, though, is the ideology of the MAGA movement. Over the years, I’ve come to realize that most Trump supporters view immigration as a literal invasion of the United States — not a figurative “invasion”, but a literal attempted conquest of America by foreigners.

And a substantial percentage of these folks believe that the purpose of this “invasion” is to “replace” the existing American population. This is from a PRRI poll from late 2024:
One-third of Americans (33%) agree with the “Great Replacement Theory,” or the idea that immigrants are invading our country and replacing our cultural and ethnic background. The majority of Americans (62%) disagree with this theory. Agreement with this theory has decreased by 3 percentage points from 36% in 2019…Six in ten Republicans (60%) agree with the “Great Replacement Theory,” compared with 30% of independents and 14% of Democrats. Among Republicans, those who hold a favorable view of Trump are more likely than those who hold an unfavorable view to agree that immigrants are invading our country (68% vs. 32%).
Perhaps some think that this “Great Replacement” is only cultural or partisan/political — the DHS recruits agents with a call to “Defend your culture!” — but many clearly think it’s racial in nature. The DHS recently posted this image:


100 million is far more than the total number of immigrants in the United States (which is estimated at around 52 million). Instead, it’s close to the total number of nonwhite people in the country. So the idea of “100 million deportations” clearly goes well beyond the idea of deporting illegal immigrants, and well beyond the idea of deporting all immigrants, into the territory of ethnic cleansing.

The DHS is posting these memes as a recruitment tactic, and polls about the “Great Replacement” show that there’s a large pool of potential recruits to whom this rhetoric is likely to appeal. In other words, many of the ICE agents now going around kicking in doors, beating up and threatening protesters, arresting citizens on pure suspicion, and occasionally shooting people believe that they are engaged in a race war. [...]

To be fair, the Great Replacement ideology didn’t arise out of nowhere. It’s an irrational and panicky overreaction that will lead America down the road to disaster — it’s full of hate and lies, it’s inherently divisive, it’s associated with some of history’s most horrible regimes, and it’s being promoted by some very bad actors. But it has also been egged on by a progressive movement that has made anti-white discrimination in hiring a pillar of its approach to racial equity, and has normalized anti-white rhetoric in the public sphere. This was an unforced error by the left — one of many over the past decade.

But whoever started America’s stupid race war, the real question is who will stand up and end it. The GOP, and the MAGA movement specifically, was offered a golden off-ramp from this dark path. In 2020 and 2024, Hispanic Americans, along with some Asian and Black Americans, shifted strongly toward Trump and the GOP. This was a perfect opportunity for the GOP to make itself, in the words of Marco Rubio, a “multiracial working-class” party. This would have been similar to how Nixon and Reagan expanded the GOP coalition to include “white ethnics” that the GOP had spurned in the early 20th century. But instead, MAGA took the victory handed to them by nonwhite voters and used it to act like exactly the kind of white-nationalist race warriors that liberals had always insisted they were. [...]

But Trump is an old man, and the younger generation was raised not on mid-20th-century nationalist rhetoric but on right-wing social media and memes. When Trump is gone, the MAGA movement will cease to be defined by his personal charisma, and will start being defined by the ideology of the Great Replacement — the same ideology that is now motivating many of the ICE agents acting like thugs in the streets of America.

And it’s increasingly clear that JD Vance, understanding that he lacks Trump’s cult of personality, has decided to make himself the leader, voice, and avatar of the “Great Replacement” movement — even if this arouses the disgust of many traditional conservatives and some figures in the tech right. With the disarray of the Democrats and the weakness of other GOP factions, Vance’s move may be a smart political bet, even if it comes at the expense of American freedom and stability.

by Noah Smith, Noahpinion |  Read more:
Images: X/DHS
[ed. Oh for simpler times when a political break-in was considered the height of lawless government. Never thought I'd ever say this in my lifetime, but these days, and with this government, I'd vote for Nixon in a heartbeat:]
***
He covertly aided Pakistan during the Bangladesh Liberation War in 1971 and ended American combat involvement in Vietnam in 1973, and the military draft the same year. His visit to China in 1972 led to diplomatic relations between the two nations, and he finalized the Anti-Ballistic Missile Treaty with the Soviet Union. During the course of his first term, he enacted many progressive environmental policy shifts, such as creating the Environmental Protection Agency and passing laws, including the Endangered Species and Clean Air Acts. In addition to implementing the Twenty-sixth Amendment that lowered the voting age from 21 to 18, he ended the direct international convertibility of the U.S. dollar to gold in 1971, effectively taking the United States off the gold standard. He also imposed wage and price controls for 90 days, launched the Wars on Cancer and Drugs, passed the Controlled Substances Act, and presided over the end of the Space Race by overseeing the Apollo 11 Moon landing. ~ Wikipedia

Julie Curtiss (French, 1982) - Limule (2021)

The Dilbert Afterlife

Sixty-eight years of highly defective people

Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive.

Still, the condolences are appreciated. Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names.

But it would be insufficiently ambitious to stop there. Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here.

Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb.

This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all.

The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth.

Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting.

If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.

Fugitive From The Cubicle Police

The niche that became Dilbert opened when Garfield first said “I hate Mondays”. The quote became a popular sensation, inspiring t-shirts, coffee mugs, and even a hit single. But (as I’m hardly the first to point out) why should Garfield hate Mondays? He’s a cat! He doesn’t have to work!

In the 80s and 90s, saying that you hated your job was considered the height of humor. Drew Carey: “Oh, you hate your job? There’s a support group for that. It’s called everybody, and they meet at the bar.”


This was merely the career subregion of the supercontinent of Boomer self-deprecating jokes, whose other prominences included “I overeat”, “My marriage is on the rocks”, “I have an alcohol problem”, and “My mental health is poor”.

Arguably this had something to do with the Bohemian turn, the reaction against the forced cheer of the 1950s middle-class establishment of company men who gave their all to faceless corporations and then dropped dead of heart attacks at 60. You could be that guy, proudly boasting to your date about how you traded your second-to-last patent artery to complete a spreadsheet that raised shareholder value 14%. Or you could be the guy who says “Oh yeah, I have a day job working for the Man, but fuck the rat race, my true passion is white water rafting”. When your father came home every day looking haggard and worn out but still praising his boss because “you’ve got to respect the company or they won’t take care of you”, being able to say “I hate Mondays” must have felt liberating, like the mantra of a free man.

This was the world of Dilbert’s rise. You’d put a Dilbert comic on your cubicle wall, and feel like you’d gotten away with something. If you were really clever, you’d put the Dilbert comic where Dilbert gets in trouble for putting a comic on his cubicle wall on your cubicle wall, and dare them to move against you.


(again, I was ten at the time. I only know about this because Scott Adams would start each of his book collections with an essay, and sometimes he would talk about letters he got from fans, and many of them would have stories like these.)

But t-shirts saying “Working Hard . . . Or Hardly Working?” no longer hit as hard as they once did. Contra the usual story, Millennials are too earnest to tolerate the pleasant contradiction of saying they hate their job and then going in every day with a smile. They either have to genuinely hate their job - become some kind of dirtbag communist labor activist - or at least pretend to love it. The worm turns, all that is cringe becomes based once more and vice versa. Imagine that guy boasting to his date again. One says: “Oh yeah, I grudgingly clock in every day to give my eight hours to the rat race, but trust me, I’m secretly hating myself the whole time.” The other: “I work for a boutique solar energy startup that’s ending climate change - saving the environment is my passion!” Zoomers are worse still: not even the fig leaf of social good, just pure hustle.

Dilbert is a relic of a simpler time, when the trope could be played straight. But it’s also an artifact of the transition, maybe even a driver of it. Scott Adams appreciated these considerations earlier and more acutely than anyone else. And they drove him nuts.

Stick To Drawing Comics, Monkey Brain

Adams knew, deep in his bones, that he was cleverer than other people. God always punishes this impulse, especially in nerds. His usual strategy is straightforward enough: let them reach the advanced physics classes, where there will always be someone smarter than them, then beat them on the head with their own intellectual inferiority so many times that they cry uncle and admit they’re nothing special.

For Adams, God took a more creative and – dare I say, crueler – route. He created him only-slightly-above-average at everything except for a world-historical, Mozart-tier, absolutely Leonardo-level skill at making silly comics about hating work.


Scott Adams never forgave this. Too self-aware to deny it, too narcissistic to accept it, he spent his life searching for a loophole. You can read his frustration in his book titles: How To Fail At Almost Everything And Still Win Big. Trapped In A Dilbert World. Stick To Drawing Comics, Monkey Brain. Still, he refused to stick to comics. For a moment in the late-90s, with books like The Dilbert Principle and The Dilbert Future, he seemed on his way to becoming a semi-serious business intellectual. He never quite made it, maybe because the Dilbert Principle wasn’t really what managers and consultants wanted to hear:
I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they’re the ones you don't want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren't in management.
Okay, “I am cleverer than everyone else”, got it. His next venture (c. 1999) was the Dilberito, an attempt to revolutionize food via a Dilbert-themed burrito with the full Recommended Daily Allowance of twenty-three vitamins. I swear I am not making this up. A contemporaneous NYT review said it “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”. The Onion, in its twenty-year retrospective for the doomed comestible, called it a frustrated groping towards meal replacements like Soylent or Huel, long before the existence of a culture nerdy enough to support them. Adams himself, looking back from several years’ distance, was even more scathing: “the mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”

His second foray into the culinary world was a local restaurant called Stacey’s.

by Scott Alexander, Astral Codex Ten |  Read more:
Images: Dilbert/ACX 
[ed. First picture: Adams actually had a custom-built tower on his home shaped like Dilbert’s head.]

Friday, January 16, 2026

via:
[ed. What do you think? Real or fake? Who cares?]

What Makes a Novel "Good"?

Why People on Substack Lost their Minds When Someone Said: "Don't Read All the Classics"

On Substack, people will tear you a new one if you dare to neg cherished classics like James Joyce’s Ulysses. When I wrote a post last year criticizing Ulysses, I definitely caught some internet side eye. But the judgment didn’t even come close to the comments on Karen Rodriguez’s post “The 40 Famous Classics You’re Allowed to Skip (And Why Everyone Secretly Agrees).” The comments were so mean I physically flinched reading them...

My favorite section of her list is the “Literally Unreadable (But People Pretend)” category, which includes Ulysses (Joyce), In Search of Lost Time (Proust), and Finnegans Wake (Joyce), which Karen describes as “unreadable even for Joyce scholars.”

I don’t come from the academic literature world; I’m a lawyer-turned-novelist, and all I care about from a reader’s perspective is that books are both 1) entertaining and 2) moving. There are so many books that people praise lavishly but that I find fail those basic criteria, including Ulysses.

So why the hell is everyone losing their mind over this? Like is Joyce your god? Why is criticizing these books, these authors, such a cardinal sin?

I think I finally figured out why. And it has to do with what people value in their books. There’s actually a whole debate in literary criticism concerning what fiction is supposed to do for humanity and what makes a novel good.

I happen to fall in with the group that doesn’t particularly appreciate Joyce. But there are camps out there that would die for modernist novels (like Ulysses) and experimental postmodern writing (like Pynchon’s work). I don’t agree with them, but it was helpful to understand what those readers value in those works.

Here’s what I learned:

Realism vs. Everything Else

The big debate in literary fiction boils down to this: should novels try to represent life as it actually is, or should they do something else entirely?

Realism is what most of us think of as “normal” fiction. It’s Alice Munro, Marilynne Robinson, Jhumpa Lahiri. Characters feel like real people with believable psychology. The prose is clear and doesn’t call attention to itself. No one discovers they’re secretly royalty or gets abducted by aliens. It’s just life, rendered carefully on the page.

But here’s what makes realism click for me: it’s defined more by what it’s NOT than what it is.

Realism is not romance with impossible coincidences. It’s not allegory where characters represent abstract concepts. It’s not metafiction that constantly reminds you you’re reading a book. It’s not heavily plotted melodrama where orphans conveniently turn out to be related to their benefactors. And it’s not highly stylized or poetic prose where every sentence is gorgeously metaphorical. (...)

Before realism became dominant in the mid-1800s (think Flaubert, George Eliot, Tolstoy), novels were full of improbable adventures, clear moral lessons, and coincidence-heavy plots. Realism said: what if we just showed ordinary people dealing with ordinary disappointments? What if we went deep into their psychology instead of hitting them with dramatic plot twists?

Then Modernism Said “Not So Fast”

By the early 1900s, some writers thought realism was insufficient. Virginia Woolf, James Joyce, William Faulkner broke with realistic conventions, but not because they didn’t care about truth. They thought traditional realism couldn’t capture modern consciousness.

Modernism’s insight: Reality is fragmented and chaotic, especially after World War I shattered Victorian certainties. Modernist authors used stream of consciousness, fractured timelines, and difficult prose to represent how minds actually work and how reality actually feels...

The questions pile up without clear answers, thoughts interrupt themselves—this is trying to show consciousness as it actually moves, not tidied up for the reader.

The key difference from realism: Modernists believed meaning still existed, but you needed new forms to access it. Joyce’s Ulysses is notoriously difficult, but according to the internet (I don’t know, I haven’t read past page six), the novel is ultimately trying to demonstrate the truth of a day in Dublin in 1904. The experiments serve a purpose.

The problem, for me, is that the experiments can make the writing very un-fun to read.

Then Postmodernism Said “There Is No Truth”

Postmodernism (think Thomas Pynchon, John Barth, Donald Barthelme) takes fragmentation and makes it playful. These writers are skeptical that fiction can reveal any stable truth at all. So they write metafiction that constantly breaks the fourth wall, mixes high and low culture, and treats meaning itself as a game.

Here’s an excerpt from Donald Barthelme’s “The School.”
One day, we had a discussion in class. They asked me, where did they go? The trees, the salamander, the tropical fish, Edgar, the poppas and mommas, Matthew and Tony, where did they go? And I said, I don’t know, I don’t know. And they said, who knows? and I said, nobody knows. And they said, is death that which gives meaning to life? and I said, no, life is that which gives meaning to life. Then they said, but isn’t death, considered as a fundamental datum, the means by which the taken-for-granted mundanity of the everyday may be transcended in the direction of—I said, yes, maybe.

What’s interesting is that school kids wouldn’t, they couldn’t, be making the observation that death is “a fundamental datum, the means by which...everyday may be transcended in the direction...” because of their age and life experience. So if it’s not the children’s “voice” saying this in the story, it must be the narrator, or maybe even the writer. Barthelme is winking at us, breaking character (the fourth wall), reminding us that this story is all made up. It’s clever but keeps us at arm’s length emotionally.

When you read postmodern fiction, it often feels like writers writing for other writers—it’s inside jokes about literary conventions rather than stories that move you emotionally. That’s intentional. Postmodernists think the search for emotional truth through fiction is naive. Better to be playfully ironic about the whole enterprise.

This is why I sometimes find postmodernism so boring. (I actually like Barthelme’s short story “The Baby” which is harrowing.) But who reads Gravity’s Rainbow (Pynchon) for pleasure besides academics who need to write dissertations about it?

Why This Actually Matters For Writers (and Readers)

Understanding these camps helped me see what choices I’m making in writing my novel—and how certain readers or critics might respond to those choices.

If I write a straightforward story with believable characters and clear prose, I’m in the realist tradition. If I experiment with fragmented timelines or stream of consciousness, I’m borrowing modernist techniques. If I get cute and self-referential, I’m flirting with postmodernism.

None of these are “right” or “wrong,” but they come with trade-offs. Realism connects emotionally but can feel conventional. Modernist techniques can capture complex consciousness but risk alienating readers. Postmodern playfulness might be intellectually interesting but often sacrifices what fiction does best: making us care about people who don’t exist.

These days the fiction world is pretty eclectic. There’s typical realism (Alice Munro), realism with fantastical elements (Kelly Link), experimentalism with emotional sincerity (David Foster Wallace apparently tried to split this difference), and everything in between.

My take after this deep dive: Fiction’s unique power is making us feel what it’s like to be someone else. When technique serves that purpose—whether it’s Alice Munro’s precision or Faulkner’s stream of consciousness—great. When technique becomes the point itself, I lose interest.

by Noor Rahman, Write on Track |  Read more:
Image: via
[ed. More examples in the full essay. I can't read Joyce, precisely because of his prose (except for Portrait of the Artist). Same with Proust, but for different reasons: his prose is beautiful but buried beneath the endless minutiae of social manners and French society that eventually become unbearable (In Search of Lost Time).]

Measure Up

“My very dear friend Broadwood—

I have never felt a greater pleasure than in your honor’s notification of the arrival of this piano, with which you are honoring me as a present. I shall look upon it as an altar upon which I shall place the most beautiful offerings of my spirit to the divine Apollo. As soon as I receive your excellent instrument, I shall immediately send you the fruits of the first moments of inspiration I gather from it, as a souvenir for you from me, my very dear Broadwood; and I hope that they will be worthy of your instrument. My dear sir, accept my warmest consideration, from your friend and very humble servant.

—Ludwig van Beethoven”

As musical instruments improved through history, new kinds of music became possible. Sometimes, the improved instrument could make novel sounds; other times, it was louder; and other times stronger, allowing for more aggressive play. Like every technology, musical instruments are the fruit of generations’ worth of compounding technological refinement.

In a shockingly brief period between the late 18th and early 19th centuries, the piano was transformed technologically, and so too was the function of the music it produced.

To understand what happened, consider the form of classical music known as the “piano sonata.” This is a piece written for solo piano, and it is one of the forms that persisted through the transition, at least in name. In 1790, these were written for an early version of the piano that we now think of as the fortepiano. It sounded like a mix of a modern piano and a harpsichord.

Piano sonatas in the early 1790s were thought of primarily as casual entertainment. It wouldn’t be quite right to call them “background music” as we understand that term today—but they were often played in the background. People would talk over these little keyboard works, play cards, eat, drink.

In the middle of the 1790s, however, the piano started to improve at an accelerated rate. It was the early industrial revolution. Throughout the economy, many things were starting to click into place. Technologies that had kind of worked for a while began to really work. Scale began to be realized. Thicker networks of people, money, ideas, and goods were being built. Capital was becoming more productive, and with this, serendipity was becoming more common. Few at the time could understand it, but it was the beginning of a wave—one made in the wake of what we today might call the techno-capital machine.

Riding this wave, the piano makers were among a great many manufacturers who learned to build better machines during this period. And with those improvements, more complex uses of those machines became possible.

Just as this industrial transformation was gaining momentum in the mid-1790s, a well-regarded keyboard player named Ludwig van Beethoven was starting his career in earnest. He, like everyone else, was riding the wave—though he, like everyone else, did not wholly understand it.

Beethoven was an emerging superstar, and he lived in Vienna, the musical capital of the world. It was a hub not just of musicians but also of musical instruments and the people who manufactured them. Some of the finest piano makers of the day—Walter, Graf, and Schanz—were in or around Vienna, and they were in fierce competition with one another. Playing at the city’s posh concert spaces, Beethoven had the opportunity to sample a huge range of emerging pianistic innovations. As his career blossomed, he acquired some of Europe’s finest pianos—including even stronger models from British manufacturers like Broadwood and Sons.

Iron reinforcement enabled piano frames with higher tolerances for louder and longer play. The strings became more robust. More responsive pedals meant a more direct relationship between the player and his tool. Innovations in casting, primitive machine tools, and mechanized woodworking yielded more precise parts. With these parts one could build superior hammer and escapement systems, which in turn led to faster-responding keys. And more of them, too—with higher and lower octaves now available. It is not just that the sound these pianos made was new: These instruments had an enhanced, more responsive user interface.

You could hit these instruments harder. You could play them softer, too. Beethoven’s iconic use of sforzando—rapid swings from soft to loud tones—would have been unplayable on the older pianos. So too would his complex and often rapid solos. In so many ways, then, Beethoven’s characteristic style and sound on the keyboard was technologically impossible for his predecessors to achieve... 

Beethoven was famous for breaking piano strings that were not yet strong enough to render his vision. There was always a relevant margin against which to press. By his final sonata, written in the early 1820s, he was pressing in the direction of early jazz. It was a technological and artistic takeoff from this to this, and from this to this.

Beethoven’s compositions for other instruments followed a structurally similar trajectory: compounding leaps in expressiveness, technical complexity, and thematic ambition, every few years. Here is what one of Mozart’s finest string quartets sounded like. Here is what Beethoven would do with the string quartet by the end of his career.

No longer did audiences talk during concerts. No longer did they play cards and make jokes. Audiences became silent and still, because what was happening to them in the concert hall had changed. A new type of art was emerging, and a new meta-character in human history—the artist—was being born. Beethoven was doing something different, something grander, something more intense, and the way listeners experienced it was different too.

The musical ideas Beethoven introduced to the world originated from his mind, but those ideas would have been unthinkable without a superior instrument.

I bought the instrument I’m using to write this essay in December 2020. I was standing in the frigid cold outside of the Apple Store in the Georgetown neighborhood of Washington, D.C., wearing a KN-95 face mask, separated by six feet from those next to me in line. I had dinner with a friend scheduled that evening. A couple weeks later, the Mayor would temporarily outlaw even that nicety.

I carried this laptop with me every day throughout the remainder of the pandemic. I ran a foundation using this laptop, and after that I orchestrated two career transitions using it. I built two small businesses, and I bought a house. I got married, and I planned a honeymoon with my wife. (...)

In a windowless office on a work trip to Stanford University on November 30, 2022, I discovered ChatGPT on this laptop. I stayed up all night in my hotel playing with the now-primitive GPT-3.5. Using my laptop, I educated myself more deeply about how this mysterious new tool worked.

I thought at first that it was an “answer machine,” a kind of turbocharged search engine. But I eventually came to prefer thinking of these language models as simulators of the internet that, by statistically modeling trillions of human-written words, learned new things about the structure of human-written text.

What might arise from a deeper-than-human understanding of the structures and meta-structures of nearly all the words humans have written for public consumption? What inductive priors might that understanding impart to this cognitive instrument? We know that a raw pretrained model, though deeply flawed, has quite sophisticated inductive priors with no additional human effort. With a great deal of additional human effort, we have made these systems quite useful little helpers, even if they still have their quirks and limitations.

But what if you could teach a system to guide itself through that digital landscape of modeled human thoughts to find better, rather than likelier, answers? What if the machine had good intellectual taste, because it could consider options, recognize mistakes, and decide on a course of cognitive action? Or what if it could, at least, simulate those cognitive processes? And what if that machine improved as quickly as we have seen AI advance so far? This is no longer science fiction; this research has been happening inside of the world’s leading AI firms, and with models like OpenAI’s o1 and o3, we see undoubtedly that progress is being made.

What would it mean for a machine to match the output of a human genius, word for word? What would it mean for a machine to exceed it? In at least some domains, even if only a very limited number at first, it seems likely that we will soon breach these thresholds. It is very hard to say how far this progress will go; as they say, experts disagree.

This strange simulator is “just math”—it is, ultimately, ones and zeroes, electrons flowing through processed sand. But the math going on inside it is more like biochemistry than it is like arithmetic. The language model is, ultimately, still an instrument, but it is a strange one. Smart people, working in a field called mechanistic interpretability, are bettering our understanding all the time, but our understanding remains highly imperfect, and it will probably never be complete. We don’t quite have precise control yet over these instruments, but our control is getting better with time. We do not yet know how to make our control systems “good enough,” because we don’t quite know what “good enough” means yet—though here too, we are trying. We are searching.

As these instruments improve, the questions we ask them will have to get harder, smarter, and more detailed. This isn’t to say, necessarily, that we will need to become better “prompt engineers.” Instead, it is to suggest that we will need to become more curious. These new instruments will demand that we formulate better questions, and formulating better questions, often, is at least the seed of formulating better answers.

The input and the output, the prompt and the response, the question and the answer, the keyboard and the music, the photons and the photograph. We push at our instruments, we measure them up, and in their way, they measure us. (...)

I don’t like to think about technology in the abstract. Instead, I prefer to think about instruments like this laptop. I think about all the ways in which this instrument is better than the ones that came before it—faster, more reliable, more precise—and why it has improved. And I think about the ways in which this same laptop has become wildly more capable as new software tools came to be. I wonder at the capabilities I can summon with this keyboard now compared with when I was standing in that socially distanced line at the Apple Store four years ago.

I also think about the young Beethoven, playing around, trying to discover the capabilities of instruments with better keyboards, larger range, stronger frames, and suppler pedals. I think about all the uncoordinated work that had to happen—the collective and yet unplanned cultivation of craftsmanship, expertise, and industrial capacity—to make those pianos. I think about the staggering number of small industrial miracles that underpinned Beethoven’s keyboards, and the incomprehensibly larger number of industrial miracles that underpin the keyboard in front of me today. (...)

This past weekend, I replaced my MacBook Air with a new laptop. I wonder what it will be possible to do with this tremendous machine in a few years, or in a few weeks. New instruments for expression, and for intellectual exploration, will be built, and I will learn to use nearly all of them with my new laptop’s keyboard. It is now clear that a history-altering amount of cognitive potential will be at my fingertips, and yours, and everyone else’s. Like any technology, these new instruments will be much more useful to some than to others—but they will be useful in some way to almost everyone.

And just like the piano, what we today call “AI” will enable intellectual creations of far greater complexity, scale, and ambition—and greater repercussions, too. Higher dynamic range. I hope that among the instrument builders there will be inveterate craftsmen, and I hope that young Beethovens, practicing a wholly new kind of art, will emerge among the instrument players.

by Dean Ball, Hyperdimensional |  Read more:
Image: 1827 Broadwood & Sons grand piano/Wikipedia
[ed. Thoughtful essay throughout, well deserving of a full reading (even if you're just interested in Beethoven). On the hysterical end of the spectrum, here's what state legislators are proposing: The AI Patchwork Emerges. An update on state AI law in 2026 (so far) (Hyperdimensional):]
***
State legislative sessions are kicking into gear, and that means a flurry of AI laws are already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.

Thursday, January 15, 2026

The Day NY Publishing Lost Its Soul; Fifty People Control the Culture

Everybody can see there’s a crisis in New York publishing. Even the hot new books feel lukewarm. Writers win the Pulitzer Prize and sell just a few hundred copies. The big publishers rely on 50 or 100 proven authors—everything else is just window dressing or the back catalog.

You can tell how stagnant things have become from the lookalike covers. I walk into a bookstore and every title I see is like this.


They must have fired the design team and replaced it with a lazy bot. You get big fonts, random shapes, and garish colors—again and again and again. Every cover looks like it was made with a circus clown’s makeup kit.

My wife is in a book club. If I didn’t know better, I’d think they read the same book every month. It’s those same goofy colors and shapes on every one.

Of course, you can’t judge a book by its cover. But if you read enough new releases, you get the same sense of familiarity from the stories. The publishers keep returning to proven formulas—which they keep flogging long after they’ve stopped working.

And that was a long time ago.

It’s not just publishing. A similar stagnancy has settled in at the big movie studios and record labels. Nobody wants to take a risk—but (as I’ve learned through painful personal experience) that’s often the riskiest move of them all. Live by the formula, and you die by the formula.

How did we end up here?

It’s hard to pick a day when the publishing industry made its deal with the devil. But an anecdote recently shared by Steve Wasserman is as good a place to begin as any.

by Ted Gioia, Honest Broker | Read more:
Image: uncredited
[ed. I'll never buy a book that looks like this, no matter what the reviews say. I'd be embarrassed to be seen in public with it, let alone display it on my bookshelf. See also: Fifty People Control the Culture (HB).]

Willie Bobo

Tito Puente

[ed. 'Take Five' on steroids (that really gets going around 2:00).]

Big Beautiful Belly Flop

America is losing jobs in blue-collar industries, something that last occurred during the initial shock of the early pandemic and the depths of the Great Recession. The country is down 65k industrial jobs over the last year, a dramatic reversal from 2024, when the US added a lower-than-usual but still respectable 250k jobs. A major slowdown has hit all blue-collar sectors this year, including construction, mining, and utilities—though manufacturing and transportation are driving the vast majority of US job losses. via:


The US continues to lose manufacturing jobs—payrolls are down 75k over the last year, & another 8k jobs were lost in December. Transportation (especially auto manufacturing), wood, and electronics/electrical manufacturing are the biggest losers, but few subsectors are doing well. via:

Wednesday, January 14, 2026

via:

Naoki Hayashi, Flight School - Breaking the Surface
via:

via:

Chairman Powell's Statement

[ed. Don't hear public comments from a Fed Chairman too often... screw around with administrative, social, and legal fields and you might notch a few wins. Screw around with the nation's monetary system and expect significant pushback (from both parties). See also: Chairman Powell’s Statement (MR):]

***
Whether an independent Fed is desirable is beside the point. The core issue is lawfare: the strategic use of legal processes to intimidate, constrain, and punish institutional actors for political ends. Lawfare is the hallmark of a failing state because it erodes not just political independence, but the capacity for independent judgment.

What sort of people will work at the whim of another? The inevitable result is toadies and ideological loyalists heading complex institutions, rather than people chosen for their knowledge and experience.

[ed. And it all began with this: Trump Meets With Powell at Federal Reserve... leading to one of the most surreal political moments in recent memory.]

Tuesday, January 13, 2026

Stable Strategies For Middle Management

STABLE STRATEGIES FOR MIDDLE MANAGEMENT 
Our cousin the insect has an external skeleton made of shiny brown chitin, a material that is particularly responsive to the demands of evolution. Just as bioengineering has sculpted our bodies into new forms, so evolution has shaped the early insect's chewing mouthparts into her descendants' chisels, siphons, and stilettos, and has molded from the chitin special tools - pockets to carry pollen, combs to clean her compound eyes, notches on which she can fiddle a song.    
- From the popular science program, Insect People!
I awoke this morning to discover that bioengineering had made demands upon me during the night. My tongue had turned into a stiletto, and my left hand now contained a small chitinous comb, as if for cleaning a compound eye. Since I didn't have compound eyes, I thought that perhaps this presaged some change to come. 

I dragged myself out of bed, wondering how I was going to drink my coffee through a stiletto. Was I now expected to kill my breakfast, and dispense with coffee entirely? I hoped I was not evolving into a creature whose survival depended on early-morning alertness. My circadian rhythms would no doubt keep pace with any physical changes, but my unevolved soul was repulsed at the thought of my waking cheerfully at dawn, ravenous for some wriggly little creature that had arisen even earlier. 

I looked down at Greg, still asleep, the edge of our red and white quilt pulled up under his chin. His mouth had changed during the night too, and seemed to contain some sort of a long probe. Were we growing apart? 

I reached down with my unchanged hand and touched his hair. It was still shiny brown, soft and thick, luxurious. But along his cheek, under his beard, I could feel patches of sclerotin, as the flexible chitin in his skin was slowly hardening to an impermeable armor. 

He opened his eyes, staring blearily forward without moving his head. I could see him move his mouth cautiously, examining its internal changes. He turned his head and looked up at me, rubbing his hair slightly into my hand. 

"Time to get up?" he asked. I nodded. "Oh, God," he said. He said this every morning. It was like a prayer. 

"I'll make coffee," I said. "Do you want some?" 

He shook his head slowly. "Just a glass of apricot nectar," he said. He unrolled his long, rough tongue and looked at it, slightly cross-eyed. "This is real interesting, but it wasn't in the catalog. I'll be sipping lunch from flowers pretty soon. That ought to draw a second glance at Duke's." 

"I thought account execs were expected to sip their lunches," I said. 

"Not from the flower arrangements..." he said, still exploring the odd shape of his mouth. Then he looked up at me and reached up from under the covers. "Come here." 

It had been a while, I thought, and I had to get to work. But he did smell terribly attractive. Perhaps he was developing aphrodisiac scent glands. I climbed back under the covers and stretched my body against his. We were both developing chitinous knobs and odd lumps that made this less than comfortable. "How am I supposed to kiss you with a stiletto in my mouth?" I asked. 

"There are other things to do. New equipment presents new possibilities." He pushed the covers back and ran his unchanged hands down my body from shoulder to thigh. "Let me know if my tongue is too rough." It was not.

Fuzzy-minded, I got out of bed for the second time and drifted into the kitchen.

Measuring the coffee into the grinder, I realized that I was no longer interested in drinking it, although it was diverting for a moment to spear the beans with my stiletto. What was the damn thing for, anyhow? I wasn't sure I wanted to find out. 

Putting the grinder aside, I poured a can of apricot nectar into a tulip glass. Shallow glasses were going to be a problem for Greg in the future, I thought. Not to mention solid food. 

My particular problem, however, if I could figure out what I was supposed to eat for breakfast, was getting to the office in time for my ten A.M. meeting. Maybe I'd just skip breakfast. I dressed quickly and dashed out the door before Greg was even out of bed.

Thirty minutes later, I was more or less awake and sitting in the small conference room with the new marketing manager, listening to him lay out his plan for the Model 2000 launch. In signing up for his bioengineering program, Harry had chosen specialized primate adaptation, B-E Option No. 4. He had evolved into a textbook example: small and long-limbed, with forward-facing eyes for judging distances and long, grasping fingers to keep him from falling out of his tree. 

He was dressed for success in a pin-striped three-piece suit that fit his simian proportions perfectly. I wondered what premium he paid for custom-made. Or did he patronize a ready-to-wear shop that catered especially to primates? 

I listened as he leaped agilely from one ridiculous marketing premise to the next. Trying to borrow credibility from mathematics and engineering, he used wildly metaphoric bizspeak, "factoring in the need for pipeline throughput," "fine-tuning the media mix," without even cracking a smile. 

Harry had been with the company only a few months, straight from business school. He saw himself as a much-needed infusion of talent. I didn't like him, but I envied his ability to root through his subconscious and toss out one half-formed idea after another. I know he felt it reflected badly on me that I didn't join in and spew forth a random selection of promotional suggestions. 

I didn't think much of his marketing plan. The advertising section was a textbook application of theory with no practical basis. I had two options: I could force him to accept a solution that would work, or I could yes him to death, making sure everybody understood it was his idea. I knew which path I'd take. 

"Yeah, we can do that for you," I told him. "No problem." We'd see which of us would survive and which was hurtling to an evolutionary dead end. 

Although Harry had won his point, he continued to belabor it. My attention wandered; I'd heard it all before. His voice was the hum of an air conditioner, a familiar, easily ignored background noise. I drowsed and new emotions stirred in me, yearnings to float through moist air currents, to land on bright surfaces, to engorge myself with warm, wet food.

Adrift in insect dreams, I became sharply aware of the bare skin of Harry's arm, between his gold-plated watchband and his rolled-up sleeve, as he manipulated papers on the conference room table. He smelled greasily delicious, like a pepperoni pizza or a charcoal-broiled hamburger. I realized he probably wouldn't taste as good as he smelled, but I was hungry. My stiletto-like tongue was there for a purpose, and it wasn't to skewer cubes of tofu. I leaned over his arm and braced myself against the back of his hand, probing with my stylets to find a capillary. 

Harry noticed what I was doing and swatted me sharply on the side of the head. I pulled away before he could hit me again. "We were discussing the Model 2000 launch. Or have you forgotten?" he said, rubbing his arm.

"Sorry. I skipped breakfast this morning." I was embarrassed.

"Well, get your hormones adjusted, for chrissake." He was annoyed, and I couldn't really blame him. "Let's get back to the media allocation issue, if you can keep your mind on it. I've got another meeting at eleven in Building Two."

Inappropriate feeding behavior was not unusual in the company, and corporate etiquette sometimes allowed minor lapses to pass without pursuit. Of course, I could no longer hope that he would support me on moving some money out of the direct-mail budget...

by Eileen Gunn, Norton Book of Science Fiction |  Read more (pdf):
[ed. A pioneer in science fiction.]

The Inevitable Rise of the Art TV

The Samsung Frame TV, first announced in 2017, doesn’t look all that great as an actual television. But switch it off and it sure is pretty—certainly much better to look at than an empty black void.

This is thanks to its matte-finish, anti-glare screen and the picture-frame-like bezels that together transform whatever fine art you choose to display on the TV when it's in standby mode (Samsung offers a variety of high-resolution digital slides) into something that resembles a framed painting. In the years since its debut and through a few updates, the Frame TV has become one of the more considered options for people who live in smaller spaces without dedicated rooms for watching TV.

It has taken a while for other brands to catch up, but we're now seeing a huge wave of Frame-like TVs hit the market. The trend is largely driven by aesthetes in cities where smaller living rooms are the norm, but it's getting a boost from advances in screen design.

Late last year, Hisense announced its CanvasTV, a Frame competitor that also has a matte screen and displays art. (We have a review unit coming shortly.) TCL has the similar NXTvision model that uses a Vincent van Gogh self-portrait in its marketing, and LG has announced the Gallery TV (also repping van Gogh) for later this year. Even Amazon has decided to throw its hat in the ring, with the Ember Artline TV. Announced this week at CES 2026, Amazon's $899 television can display one of 2,000 works of art (available for free to Ember Artline owners) and even has a tool that uses Alexa AI to help you decide which artworks are the best fit for your room.

So what's so great about Art TVs, and why do brands seem to be pivoting so hard into the category?

Part of it has to do with personal space. It's true that many younger buyers just don't have the same taste or sense of style as folks from previous generations. But also, young city-dwelling professionals are less likely to have the room to place a large screen in a dedicated area in their home, a pain point compounded by the fact that TV screen sizes have ballooned over the past decade.

The other reason TV makers are getting artsy has to do with the evolution of TV technology itself. Brands are choosing to step into this space now because they have finally developed the means to create matte screens that can accurately represent a painting or a fine art photograph. Though Samsung is a pioneer in the space, matte LED screens are enjoying something of a renaissance across all television brands.

A typical glossy TV display reflects light like a window, but a matte screen absorbs light like a canvas might. This effect enables any art pieces displayed on the screen to look extra realistic. Another advance is backlighting. Where previous generations of these Art TVs needed to be lit from the edges of the display to maintain their painting-like thinness and allow them to be mounted flush against a wall, brands have recently been able to employ more advanced lighting systems while keeping the TVs slim. Local dimming, better backlight processing, and the ability to adjust the screen brightness to match a room's ambient lighting when in "art mode" make these new displays look better than ever.

by Parker Hall, Wired |  Read more:
Image: Samsung/PCMag
[ed. See also: Ambient Intelligence in the Living Room (MDPI).]