Showing posts with label Education.

Monday, December 29, 2025

Woodshedding It

[ed. Persevering at something even though you suck at it.]

Generally speaking, we have lost respect for how much time something takes. In our impatient and thus increasingly plagiarized society, practice is daunting. It is seen as a prerequisite, a kind of pointless suffering you have to endure before Being Good At Something and Therefore an Artist, instead of the very marrow of what it means to do anything, inextricable from the human task of creation, no matter one’s level of skill.

Many words have been spilled about the inherent humanity evident in artistic merit and talent; far fewer words have been spilled on something even more human: not being very good at something, but wanting to do it anyway, and thus working to get better. To persevere in sucking at something is just as noble as winning the Man Booker. It is self-effacing, humbling, frustrating, but also pleasurable in its own right because, well, you are doing the thing you want to do. You want to make something, you want to be creative, you have a vision and have to try and get to the point where it can be feasibly executed. Sometimes this takes a few years and sometimes it takes an entire lifetime, which should be an exciting rather than a devastating thought because there is a redemptive truth in practice — it only moves in one direction, which is forward. There is no final skill, no true perfection.

Practice is in service not to some abstract arbiter of craft, the insular juries of the world, the little skills bar over a character’s head in The Sims, but to you. Sure, practice is never-ending. Even Yo-Yo Ma practices, probably more than most. That’s also what’s so great about it, that it never ends. You can do it forever in an age where nothing lasts. Nobody even has to know. It’s a great trick — you just show up more improved than you were before, because, for better or for worse, rarely is practice public.

by Kate Wagner, The Late Review |  Read more:

Sunday, December 28, 2025

How NIL is Failing College Sports

Editor’s Note (September 2025): This article was first published in May 2025. Since then, NIL controversies have only grown—lawsuits over transfers, new collective rules, and court rulings are fueling even more debate. The problems outlined below remain at the heart of the chaos.

When the NCAA implemented its interim policy on Name, Image, and Likeness (NIL) in July 2021, it was heralded as a long-overdue victory for student-athletes. Finally, college athletes could monetize their personal brands while maintaining eligibility. But nearly four years in, the reality of NIL has exposed deep, structural problems that threaten the very foundation of college sports.

Far from the fair, equitable system its proponents envisioned, NIL has morphed into a thinly veiled pay-for-play scheme dominated by wealthy donors, corporate interests, and an increasingly professionalized amateur sports landscape that’s leaving many athletes and institutions behind.

NIL Is Bad in Its Current Form, But the Concept Isn’t

Let’s be clear: this is not to say NIL is all bad. The core principle—that athletes deserve compensation for the use of their name, image, and likeness—remains valid and important. Student-athletes absolutely deserve to get paid. But this implementation ain’t it.

The problem is the execution. NIL went from zero to 200 MPH overnight with no guardrails. It’s like giving someone a supercar and letting them drive it through downtown at rush hour. Just because a car can go that fast doesn’t mean it should outside of a sanctioned and governed NASCAR race. Similarly, NIL needed careful implementation with proper rules and oversight—not the free-for-all we’re currently witnessing.

NIL Is Bad for Creating the Collective Problem: Pay-for-Play in Disguise

The most troubling development in the NIL era has been the rise of “collectives” – donor-organized groups that pool money to facilitate NIL deals for athletes at specific schools. These collectives have quickly evolved from their original purpose into recruitment vehicles that effectively function as booster-funded payrolls.

College football’s biggest donors have orchestrated business ventures distributing five-, six- and seven-figure payments to athletes under the guise of endorsement opportunities and appearance fees. While technically legal within vague NCAA guidelines, these arrangements clearly violate the spirit of what NIL was supposed to be.

Consider the case of quarterback Nico Iamaleava, whose story perfectly illustrates the chaos. After signing with Tennessee on a lucrative NIL deal, he later tried to renegotiate his contract during the 2025 offseason. When Tennessee refused, both because his performance didn’t warrant the increase and because the amount was too high, Iamaleava explored other options. After other schools balked at his demands, he eventually landed at UCLA for significantly less money than he was seeking. Meanwhile, Texas will spend an astounding $40 million on its football roster in 2025-26. But that’s not the issue—why wouldn’t they if they can? The problem is that if another team wants to compete, there’s only one way forward: pay up.

This isn’t about athletes receiving fair compensation for actual marketing value – it’s about wealthy boosters creating slush funds to buy talent. And as long as deals include some nominal “deliverable” from the athlete and are signed after their national letter of intent, there’s little the NCAA can do to stop it. (SportsEpreneur Update as of September 2025: read more about the NIL Clearinghouse and the first NIL deal report.)

NIL Is Bad for Boosting Egos Instead of Programs

A particularly troubling aspect that’s emerged is how NIL has become an ego-driven playground for wealthy boosters. For many donors, it’s no longer about supporting their alma mater—it’s about directly influencing outcomes and claiming credit for wins.

These boosters are essentially treating college teams like fantasy sports with real money. They get a dopamine hit from watching “their” players succeed, knowing their financial contribution made it possible. It’s an addiction—the thrill of buying talent and then basking in reflected glory when that talent performs well.

This creates a dangerous dynamic where the interests of boosters, rather than educational or developmental goals, drive decisions. Coaches find themselves answering not just to athletic directors, but to the whims of deep-pocketed collectives that can control the talent pipeline.

[ed. ...and much more:]

NIL Is Bad for Widening the Gap: Competitive Balance Destroyed

NIL Is Bad for Creating Transfer Portal Chaos: The Free Agency Problem

NIL Is Bad for Athletes Making Short-Term Decisions

NIL Is Bad for the Athlete-Fan Relationship

NIL Is Bad for Corruption and Exploitation: The Dark Side

NIL Is Bad for College Sports’ Identity Crisis

NIL Is Bad for International Student-Athletes

NIL Is Bad, But Reform Is Possible

by SportsEMedia |  Read more:
Image: Tyler Kaufman/Getty
[ed. Money is killing sports (and most everything else), and nobody pays even lip service to educational opportunities anymore. See also: Limbo Field (HCB); and,  The college football spending cap is brand new, and here’s how schools already are ignoring it (The Athletic).]

Friday, December 26, 2025

A sewing and tailoring book from Dublin, complete with samples (1833).

Monday, December 22, 2025

Touched for the Very First Time

Deep in California’s East Bay, on a mild fall night, a 32-year-old we’ll call Simon told me that minutes earlier, for the first time in his life, he had felt a woman’s breasts. The two of us were hunched over a firepit on a discreet wooden terrace while he recounted what had happened: The woman, with a charitable smile and some gentle encouragement, had invited his hand to her body. She let him linger there for a spell—sensing her contours, appreciating her shape—before he pulled away. Now Simon was staring into the embers, contemplating these intrepid steps out of the virginity that had shackled him for so long. He seemed in a bit of a daze.

“I haven’t been physically intimate with a woman before,” he said softly. “I tried to do it without causing her any discomfort.”

Simon is tall, broad-shouldered, and reasonably well dressed. On that evening, he wore a wrinkle-free button-down tucked into khakis, and a well-manicured mustache on his upper lip. A lanyard dangled around his neck with an empty space where he should have Sharpied his name. Instead, he’d left it blank. After traveling here from Europe—over an ocean, craggy mountaintops, and quilted farmlands—he was, I got the sense, a little embarrassed. Not everyone travels 5,000 miles to have their first kiss. Simon felt it was his only option.

Looking around at the top-secret compound we were sitting in, it was easy to deduce why he’d come. Everything about the place bore the carnal aura of a Bachelor set: daybeds lingered in darkened nooks and crannies. A clothing-optional hot tub burbled next to a fully stocked bar. Hammocks swayed in the autumn breeze. A fleet of beautiful women patrolled the grounds, demure and kind-eyed, ready to break bread with the men. Unlike most of the women Simon had come across within the checkered complexities of his stillborn sexual development—remote, inaccessible, alien—these women were eager to teach him something. They wanted him to grasp, in excruciating detail, how to turn them on.

Simon had purchased a ticket to Slutcon, the inaugural event of a radical new approach to sex education. In its most basic definition, Slutcon is an exclusive retreat for sexually and romantically inexperienced men to learn about intimacy. The women on site had a plan for them: Over the next three days, they would break these boys out of their inhibiting psychic barriers, rebuild their confidence, and refine the seizing glitches in their courtship techniques. By the end of the weekend, the men would understand how they too could become one with the sluts.

Of the 150 or so attendees of Slutcon, many of them, like Simon, were either virgins or something close to it. Tickets ranged from $1,000 to $9,000, and the retreat was pitched as a place to learn how to interact with women—as instructed by women themselves. Slutcon is staffed almost entirely by paid and volunteer female sex workers and intimacy experts, and together, they had made themselves available to be touched, seduced, or otherwise experimented on by the novices at any moment during the convention.

In the parlance of Slutcon, these professionals are referred to as its “flirt girls” or, more colloquially, its “flirtees.” Wearing plastic green wristbands that designated their consent, they darted between the men, sultry and warm, prepared to host anyone who endeavored an approach. Men brave enough to try would be rewarded with their most coveted desire: a chance to speak with, caress, or, hell, maybe even have sex with someone they were attracted to in a controlled environment, where fears of offense were nullified. After all, Slutcon is what its founders call “a place to experiment without getting canceled.”

Its organizers believe that America needs this sort of experimentation to repair its broken relationship to sex. Young people are hooking up at astonishingly low rates, and the problem is especially acute with young men: In 2013, 9 percent of men between the ages of 22 and 34 reported that they hadn’t had sex in the past year. A decade later, nearly 25 percent of that same demographic is reporting a prolonged period of celibacy. Fifty-seven percent of single adults report not being interested in dating, and nearly half of men between the ages of 18 and 25 have never approached a woman in a flirtatious manner. Experts have attributed the drop-off to a variety of causes: There’s the post-COVID loneliness crisis, men’s increasing aversion to romantic risk and rejection, and the political ideologies that continue to divide the genders. But regardless of the cause, in 2025—an age of both Lysistrata-tinged female separatist movements and the intoxicating misogyny of Andrew Tate—it is fair to wonder if men and women still like each other in the way they once did.

To soothe this discontent, Slutcon’s organizers treat femininity like a fount of knowledge. More controversially, they also argue that most men are good—if a bit misunderstood. The conventions of 2010s liberal feminism have no quarter here. Slutcon was not founded upon the idea that men must be leached of patriarchy to be properly socialized. And if I’m being honest, that position had left me with an icy feeling in my stomach from the moment I arrived. What if an attendee took undue advantage of Slutcon’s leeway? What if they flew over the guardrails and made the women here uncomfortable—or, worse, unsafe?

It’s a dangerous game that Slutcon plays. The organizers entertain the idea that to rehabilitate our decaying norms about intimacy, men need to shake off their fears about sex—with the help of women willing to grant leniency to their erotic forays. Almost a decade removed from #MeToo and the astonishing reckoning it unleashed, it was difficult for me to completely sign off on that. It wasn’t that Slutcon was a reactionary project or was concocting a backward tradwife fantasy. But the event did unambiguously assert that men alone are unable to fix our ailing sexual culture. At Slutcon, masculinity in itself was not toxic. Women too, people here argued, had a hand in this unraveling. And if these men and women could spend a weekend committed to radical empathy between the genders—blurring the line between sex education and sex work—maybe we’d relearn a skill that feels crucial to our survival. As the weekend wore on, I started to see their point.
***
On the first night of Slutcon, Aella—the pseudonymous blogger, escort, and internet eccentric who is one of the event’s primary organizers—took the stage at the main pavilion for something of a keynote address. “We are pro-men here,” she said, outlining what the audience could expect from the days ahead. The attendees were reminded that the “flirtees” had consensually opted in to the weekend’s affairs and all were adept at interfacing with clueless suitors. Aella implored the crowd to release inhibitions, to breathe freely, to dig deep within their souls and excavate their inner vixen. Yes, she reminded the room, the women would maintain their personal boundaries, which were always to be respected. (“Some of you will find out in brutal detail that you are giving a girl the ick,” Aella said.) But also, she said, the men here shouldn’t fear bumping against those boundaries—and ought to receive the feedback that resulted graciously, with an open heart. As she wrapped up her remarks, she left the men with a homework assignment: At some point in the next three days, they should ask a woman if they could touch her boobs.

That message resonated with Ari Zerner, a 28-year-old attendee dressed—somewhat inexplicably—in a purple cape. “There’s this feeling of safety here. I know that even if there’s pushback, there’s not going to be punishment,” he said of the weekend’s social contract. Zerner told me that his top goal for being at Slutcon was to learn how to “escalate” a conversation with a woman into something more flirtatiously charged.

Earlier in the day, organizers had distributed a schedule to all participants detailing the retreat’s panels, presentations, and workshops. Some of them centered on seduction: One lecture focused on how and when someone should lean in for a kiss; another offered advice on optimizing a dating profile. Elsewhere, experts gave insight on the taxonomy of sex toys and the finer points of cunnilingus. There was a rope-play demonstration, a seminar on how to properly receive blow jobs, and an assessment of what it takes to be a tactful orgy participant. (One pointer: Shower before arriving.) Once the evening rolled around, Slutcon’s educational atmosphere would morph into a bubbly social hour, when the skills honed in the workshops could be tested on the flirtees. On Saturday night, everyone would gather for Slutcon After Dark—the weekend’s marquee party, and something of a final exam.

All of this made Slutcon sound a little bit like a pickup-artist boot camp, reminiscent of the greasy symposiums of the mid-2000s. Led by vamping gurus like The Game’s Neil Strauss, these “men’s workshops” had dispensed questionable wisdom to help guys get laid quickly, efficiently, and transactionally. (Sample advice: Be slyly rude toward the women you want to sleep with and isolate them from their friends as quickly as possible.) Yet while Slutcon featured a much softer methodology than the Tao of Mystery’s, and was expressly led by women who gave far better advice, nobody at the event ran away from that comparison. In fact, some of the enlightened organizers here wondered if, given the total backsliding of our sexual norms—and the fanatical inceldom we’re facing now—there was something worth reclaiming about an earlier age when, at the very least, men were enthusiastic about approaching women.

“I’m pro–the idea of pickup artistry, in the sense that it goes against the dominant resentful male ideology where guys feel like they’re doomed in the romantic market because their jaw is angled incorrectly,” said Noelle Perdue, a self-described porn historian and one of Slutcon’s speakers. “The idea that you can do certain things that make you more appealing to women is not only true, but there is an optimism inherent in it that I think we’re missing right now.”

After Aella’s commencement, like a class adjourning for recess, the men were unleashed. The sun had firmly tucked behind the chaparral hills, and all at once, everything was possible—for better or worse.

Nobody quite knew what to do with themselves. Some men clustered together, white-knuckling Pacificos, hoping to get lubricated enough to make conversation with the flirtees from a chaste distance. (Alcohol, throughout the weekend, was strictly rationed for safety reasons.) Others, revved up by Aella’s pep talk, hit on everyone in sight, with blissful ego death, to varying degrees of success: I watched one gentleman, balding and heavyset, tell each and every woman in the building that he found her pretty. The campus was permeated with the energy of a middle school dance, more anxious than anticipatory. But still, I admired the attendees’ gameness. Here was a legion of dudes, all gawky, stiff, and tragically horny—imprisoned by long-ossified social and fashion blunders, who write code for a living—taking a leap of faith. At last, they were putting real intention behind the hunger that had burned in them for ages. Slutcon had implored them to flirt their way out of the mess they had found themselves in, and they were willing to give it a try.

The women, meanwhile, were already hard at work. Many of them were coiled on patio furniture, maintaining disciplined eye contact with whatever attendee was currently talking to them. Some of them offered feedback on the men’s techniques, and more often than not, the counseling was astoundingly rudimentary: “It’s like, ‘You are a full foot taller than me and you’re kind of looming over me, so maybe don’t loom’ or ‘You’re not smiling, you’re not really having a playful time’ or ‘You’re getting touchy-feely too fast,’ ” said one of the flirtees, perched on a picnic table in a skirt and crop top, chronicling her interactions thus far. “It didn’t feel like teaching so much as both of us exploring the space together.”

Another flirtee, a striking 27-year-old with jet-black hair named Paola Baca, felt the same way. She had taken it upon herself to slowly disarm the layers of neuroticism that might have previously prevented some of these dudes from engaging with her back in reality. And in that sense, Baca felt that she offered a form of exposure therapy. “A lot of young men don’t think women are humans,” she said. “Not as less-than-humans, but more-than-humans. Attractive women are basically gods to them. I want to show them that we are humans too.” (In her civilian life, Baca studied evolutionary psychology at the University of Texas at Austin.) (...)
***
The boys at Slutcon, it seemed, were at least trying to unwind the multitude of traumas that had brought on their sexual maladjustment. But I remained curious about how all of this was going to turn them into better flirts. The following morning, I filed into a seminar led by Tom, the pseudonymous partner of one of the organizers and one of the few men on staff. He had convened a last-minute flirting training session after witnessing some subpar attempted courtships the night before. “I was like, Oh, gosh, a lot of this is not up to my quality standards,” he told me. “I had the itch to step in and help.”

So, in a makeshift ballroom filled to the brim with contemplative men—many dutifully scratching down notes with ballpoint pen, eager to learn from the previous evening’s mistakes—Tom tried to adjust course. Spectators were summoned to the stage, one by one, and each of them was thrust into a simulated date with Jean Blue, a sex worker with a flop of auburn hair who had gamely volunteered to serve as a surrogate.

The problems were immediately apparent. The thrills of good flirting can be felt rather than thought—and that is a difficult principle to distill through language. How can anyone articulate the electricity of a good date, especially for those who may have never touched it before? “I basically stopped people when they made me flinch,” said Tom afterward. “And then I tried to name the flinch.”

There was, indeed, a lot of flinching. Some denizens of Slutcon offered Jean canned, dead-on-arrival opening statements (“What Harry Potter character are you like?”). Others attempted to ratchet up the intrigue in hopeless ways (“What’s your sexiest tattoo?”)...

“I was interested in being a part of a convention that was taught by women who are sexually successful and sexually open,” Jean said. “I have a mindset that isn’t You guys suck, and here are all of these ways you’re being weird. Instead, it’s like, I want to help you. I want so badly for you to hit on me better.”

by Luke Winkie, Slate |  Read more:
Image: Hua Ye

Sunday, December 21, 2025

What’s Not to Like?

Similes! I have hundreds of them on three-by-five notecards, highbrow and lowbrow, copied from newspapers, comic strips, sonnets, billboards, and fortune cookies. My desk overflows with them. They run down to the floor, trail across the room into the hallway. I have similes the way other houses have ants.

Why? To start, for the sheer laugh-out-loud pleasure of them. “His smile was as stiff as a frozen fish,” writes Raymond Chandler. “He vanished abruptly, like an eel going into the mud,” writes P. G. Wodehouse, the undoubted master of the form. Or Kingsley Amis’s probably first-hand description of a hangover: “He lay sprawled, too wicked to move, spewed up like a broken spider-crab on the tarry shingle of the morning.”

From time to time, I’ve tried to organize my collection, though mostly the task is, as the cliché nicely puts it, like herding cats. Still, a few categories come to mind. The Really Bad Simile, for instance. Examples of this pop up like blisters in contemporary “literary” fiction. Here is a woman eating a crème brûlée: “She crashed the spoon through the sugar like a boy falling through ice on a lake.” (Authors’ names omitted, per the Mercy Rule.) Or: “A slick of beer shaped like the Baltic Sea spilled on the table.” Sometimes they follow a verb like tin cans on a string: “The restraining pins tinkled to the floor like metal rain, hunks of hair tumbling across her face in feral waves.” Or sometimes they just make the page itself cringe and curl up at the corners: “Charlie’s heart rippled like a cloth spread across a wide table.”

Writing about sex can drive a writer to similes of unparalleled badness. Someone has borrowed my copy of Lady Chatterley’s Lover, but these more recent examples might do, from The Literary Review’s “Bad Sex in Fiction Award”: “Katsuro’s penis and testicles became one single mound that rolled around beneath the grip of her hand. Miyuki felt as though she was manipulating a small monkey that was curling up its paws.” Or this loving, if somewhat chiropractic moment: “her long neck, her swan’s neck … coiling like a serpent, like a serpent, coiling down on him.” Or finally (my eyes are closed as I type): “Her vaginal ratchet moved in concertina-like waves, slowly chugging my organ as a boa constrictor swallows its prey.” (...)

Donne’s simile belongs to another category as well, the epic or Homeric simile. Every reader of the Iliad knows something like this picture of an attacking army as a wildfire:

“As when the obliterating fire comes down on the timbered forest / and the roll of the wind carries it everywhere,” and so the Achaean host drives ahead for another five lines. Modern prose writers can also unscroll a simile at surprising length. John Updike dives right in: “The sea, slightly distended by my higher perspective, seems a misty old gentleman stretched at his ease in an immense armchair which has for arms the arms of this bay and for an antimacassar the freshly laundered sky. Sailboats float on his surface like idle and unrelated benevolent thoughts.” And one would not like to have been the beefy Duke of Bedford when Edmund Burke imagined how revolutionary mobs might regard him: “Like the print of the poor ox that we see in the shop windows at Charing Cross, alive as he is, and thinking no harm in the world, he is divided into rumps, and sirloins, and briskets, and into all sorts of pieces for roasting, boiling, and stewing.”

It takes a dramatic mind to carry a comparison through so logically and so far. The Homeric simile evokes a world far larger than a single flash of thought, however clever. Its length creates a scene in our minds, even a drama where contraries come alive: an army driving into battle, an ocean tamed into a harmless old gent, a bloody clash in the streets between aristocrats and rebels.

“Perceptive of resemblances,” writes Aristotle, is what the maker of similes must be. There is one more step. The maker of similes, long or short, must perceive resemblances and then, above all, obey the first, and maybe only, commandment for a writer: to make you see. Consider Wodehouse’s “He found Lord Emsworth, as usual, draped like a wet sock over the rail of the Empress’s G.H.Q.,” or Patricia Cornwell’s “My thoughts scattered like marbles.”

The dictionary definition of metaphor is simply an implied comparison, a comparison without the key words like or as. The most common schoolbook example is, “She has a heart of gold,” followed by, “The world is a stage.” Latching onto the verb is, the popular website Grammarly explains, “A metaphor states that one thing is another thing.”

Close, but not enough. There is great wisdom in the roots of our language, in the origin of words. Deep down, in its first Greek form, metaphor combines meta (over, across) and pherein (to carry), and thus the full word means to carry over, to transfer, to change or alter. A metaphor does more than state an identity. In our imagination, before our eyes, metaphor changes one thing into another: “I should have been a pair of ragged claws / Scuttling across the floors of silent seas.” Eliot’s metaphor is a metamorphosis. Magically, we see Prufrock the man metamorphosed into a creature with ragged claws, like a hapless minor god in Ovid.

Too much? Consider, then, what the presence of like or as does in a simile. It announces, self-consciously, that something good is coming. The simile is a rhetorical magic trick, like a pun pulled out of a hat. A metaphor, however, feels not clever but true. Take away the announcement of like, and we read and write on a much less sophisticated level, on a level that has been called primitive, because it recalls the staggering ancient power of words as curses, as spells to transform someone into a frog, a stag, a satanic serpent.

A better term might be childlike. Psychologists know that very young children understand the metamorphosing power of words. To a child of three or four, writes Howard Gardner, the properties of a new word “may be inextricably fused with the new object: at such a time the pencil may become a rocket ship.” Older children and adults know that this isn’t so. But for most of us, and certainly for most writers I know, the childhood core of magical language play is not lost. It exists at the center and is only surrounded by adult awareness, as the rings encircle the heart of the tree.

Still too much? Here is Updike, making me gasp: “But it is just two lovers, holding hands and in a hurry to reach their car, their locked hands a starfish leaping through the dark.” No labored comparison, no signal not to take it literally. Like the pencil and rocket, their hands have become a starfish. Or Shakespeare, metamorphosing himself into an autumnal tree and then an ancient abbey: “That time of year thou may’st in me behold, / When yellow leaves, or none, or few do hang / Upon those boughs which shake against the cold, / Bare ruin’d choirs where late the sweet birds sang.” Pure magic.

Yet why be a purist? At the high point of language, James Joyce blends simile, metaphor, and extended simile into one beautiful and unearthly scene, an image created by a sorcerer.

A girl stood before him in midstream, alone and still, gazing out to sea. She seemed like one whom magic had changed into the likeness of a strange and beautiful seabird. Her long slender bare legs were delicate as a crane’s. … Her thighs, fuller and soft-hued as ivory, were bared almost to the hips, where the white fringes of her drawers were like feathering of soft white down. Her slate-blue skirts were kilted boldly about her waist and dovetailed behind her. Her bosom was as a bird’s, soft and slight, slight and soft as the breast of some dark-plumaged dove. But her long fair hair was girlish: and girlish, and touched with the wonder of mortal beauty, her face.

The passage is like a palimpsest. A reader can see through the surface of the language. A reader can penetrate to the traces of the real person still visible beneath the living words that are, as they move down the page, quietly transforming her. It is as if we are looking through the transparent chrysalis to the caterpillar growing inside, watching its slow and perfect metamorphosis into the butterfly. Too much? No.

by Max Byrd, American Scholar |  Read more:
Image: locket479/Flickr

Thursday, December 18, 2025

Finding Peter Putnam

The forgotten janitor who discovered the logic of the mind

The neighborhood was quiet. There was a chill in the air. Spanish moss hung from the cypress trees. Plumes of white smoke rose from the burning cane fields and stretched across the skies of Terrebonne Parish. The man swung a long leg over a bicycle frame and pedaled off down the street.

It was 1987 in Houma, Louisiana, and he was headed to the Department of Transportation, where he was working the night shift, sweeping floors and cleaning toilets. He was just picking up speed when a car came barreling toward him with a drunken swerve.

A screech shot down the corridor of East Main Street, echoed through the vacant lots, and rang out over the Bayou.

Then silence.
 
The 60-year-old man lying on the street, as far as anyone knew, was just a janitor hit by a drunk driver. There was no mention of it on the local news, no obituary in the morning paper. His name might have been Anonymous. But it wasn’t.

His name was Peter Putnam. He was a physicist who’d hung out with Albert Einstein, John Archibald Wheeler, and Niels Bohr, and two blocks from the crash, in his run-down apartment, where his partner, Claude, was startled by a screech, were thousands of typed pages containing a groundbreaking new theory of the mind.

“Only two or three times in my life have I met thinkers with insights so far reaching, a breadth of vision so great, and a mind so keen as Putnam’s,” Wheeler said in 1991. And Wheeler, who coined the terms “black hole” and “wormhole,” had worked alongside some of the greatest minds in science.

Robert Works Fuller, a physicist and former president of Oberlin College, who worked closely with Putnam in the 1960s, told me in 2012, “Putnam really should be regarded as one of the great philosophers of the 20th century. Yet he’s completely unknown.”

That word—unknown—it came to haunt me as I spent the next 12 years trying to find out why.

The American Philosophical Society Library in Philadelphia, with its marbled floors and chandeliered ceilings, is home to millions of rare books and manuscripts, including John Wheeler’s notebooks. I was there in 2012, fresh off writing a physics book that had left me with nagging questions about the strange relationship between observer and observed. Physics seemed to suggest that observers play some role in the nature of reality, yet who or what an observer is remained a stubborn mystery.

Wheeler, who made key contributions to nuclear physics, general relativity, and quantum gravity, had thought more about the observer’s role in the universe than anyone—if there was a clue to that mystery anywhere, I was convinced it was somewhere in his papers. That’s when I turned over a mylar overhead, the kind people used to lay on projectors, with the titles of two talks, as if given back-to-back at the same unnamed event:

Wheeler: From Reality to Consciousness

Putnam: From Consciousness to Reality

Putnam, it seemed, had been one of Wheeler’s students, whose opinion Wheeler held in exceptionally high regard. That was odd, because Wheeler’s students were known for becoming physics superstars, earning fame, prestige, and in some cases Nobel Prizes: Richard Feynman, Hugh Everett, and Kip Thorne.

Back home, a Google search yielded images of a very muscly, very orange man wearing a very small speedo. This, it turned out, was the wrong Peter Putnam. Eventually, I stumbled on a 1991 article in the Princeton Alumni Weekly newsletter called “Brilliant Enigma.” “Except for the barest outline,” the article read, “Putnam’s life is ‘veiled,’ in the words of Putnam’s lifelong friend and mentor, John Archibald Wheeler.”

A quick search of old newspaper archives turned up an intriguing article from the Associated Press, published six years after Putnam’s death. “Peter Putnam lived in a remote bayou town in Louisiana, worked as a night watchman on a swing bridge [and] wrote philosophical essays,” the article said. “He also tripled the family fortune to about $40 million by investing successfully in risky stock ventures.”

The questions kept piling up. Forty million dollars?

I searched a while longer for any more information but came up empty-handed. But I couldn’t forget about Peter Putnam. His name played like a song stuck in my head. I decided to track down anyone who might have known him.

The only paper Putnam ever published was co-authored with Robert Fuller, so I flew from my home in Cambridge, Massachusetts, to Berkeley, California, to meet him. Fuller was nearing 80 years old but had an imposing presence and a booming voice. He sat across from me in his sun-drenched living room, seeming thrilled to talk about Putnam yet plagued by some palpable regret.

Putnam had developed a theory of the brain that “ranged over the whole of philosophy, from ethics to methodology to mathematical foundations to metaphysics,” Fuller told me. He compared Putnam’s work to Alan Turing’s and Kurt Gödel’s. “Turing, Gödel, and Putnam—they’re three peas in a pod,” Fuller said. “But one of them isn’t recognized.” (...)

Phillips Jones, a physicist who worked alongside Putnam in the early 1960s, told me over the phone, “We got the sense that what Einstein’s general theory was for physics, Peter’s model would be for the mind.”

Even Einstein himself was impressed with Putnam. At 19 years old, Putnam went to Einstein’s house to talk with him about Arthur Stanley Eddington, the British astrophysicist. (Eddington performed the key experiment that proved Einstein’s theory of gravity.) Putnam was obsessed with an allegory by Eddington about a fisherman and wanted to ask Einstein about it. Putnam also wanted Einstein to give a speech promoting world government to a political group he’d organized. Einstein—who was asked by plenty of people to do plenty of things—thought highly enough of Putnam to agree.

How could this genius, this Einstein of the mind, just vanish into obscurity? When I asked why, if Putnam was so important, no one has ever heard of him, everyone gave me the same answer: because he didn’t publish his work, and even if he had, no one would have understood it.

“He spoke and wrote in ‘Putnamese,’ ” Fuller said. “If you can find his papers, I think you’ll immediately see what I mean.” (...)

Skimming through the papers I saw that the people I’d spoken to hadn’t been kidding about the Putnamese. “To bring the felt under mathematical categories involves building a type of mathematical framework within which latent colliding heuristics can be exhibited as of a common goal function,” I read, before dropping the paper with a sigh. Each one went on like that for hundreds of pages at a time, on none of which did he apparently bother to stop and explain what the whole thing was really about...

Putnam spent most of his time alone, Fuller had told me. “Because of this isolation, he developed a way of expressing himself in which he uses words, phrases, concepts, in weird ways, peculiar to himself. The thing would be totally incomprehensible to anyone.” (...)


Imagine a fisherman who’s exploring the life of the ocean. He casts his net into the water, scoops up a bunch of fish, inspects his catch and shouts, “A-ha! I have made two great scientific discoveries. First, there are no fish smaller than two inches. Second, all fish have gills.”

The fisherman’s first “discovery” is clearly an error. It’s not that there are no fish smaller than two inches, it’s that the holes in his net are two inches in diameter. But the second discovery seems to be genuine—a fact about the fish, not the net.

This was the Eddington allegory that obsessed Putnam.

When physicists study the world, how can they tell which of their findings are features of the world and which are features of their net? How do we, as observers, disentangle the subjective aspects of our minds from the objective facts of the universe? Eddington suspected that one couldn’t know anything about the fish until one knew the structure of the net.

That’s what Putnam set out to do: come up with a description of the net, a model of “the structure of thought,” as he put it in a 1948 diary entry.

At the time, scientists were abuzz with a new way of thinking about thinking. Alan Turing had worked out an abstract model of computation, which quickly led not only to the invention of physical computers but also to the idea that perhaps the brain, too, was a kind of Turing machine.

Putnam disagreed. “Man is a species of computer of fundamentally different genus than those she builds,” he wrote. It was a radical claim (not only for the mixed genders): He wasn’t saying that the mind isn’t a computer, he was saying it was an entirely different kind of computer.

A universal Turing machine is a powerful thing, capable of computing anything that can be computed by an algorithm. But Putnam saw that it had its limitations. A Turing machine, by design, performs deductive logic—logic where the answers to a problem are contained in its premises, where the rules of inference are pregiven, and information is never created, only shuffled around. Induction, on the other hand, is the process by which we come up with the premises and rules in the first place. “Could there be some indirect way to model or orient the induction process, as we do deductions?” Putnam asked.

Putnam laid out the dynamics of what he called a universal “general purpose heuristic”—which we might call an “induction machine,” or more to the point, a mind—borrowing from the mathematics of game theory, which was thick in the air at Princeton. His induction “game” was simple enough. He imagined a system (immersed in an environment) that could make one mutually exclusive “move” at a time. The system is composed of a massive number of units, each of which can switch between one of two states. They all act in parallel, switching, say, “on” and “off” in response to one another. Putnam imagined that these binary units could condition one another’s behavior, so if one caused another to turn on (or off) in the past, it would become more likely to do so in the future. To play the game, the rule is this: The first chain of binary units, linked together by conditioned reflexes, to form a self-reinforcing loop emits a move on behalf of the system.

Every game needs a goal. In a Turing machine, goals are imposed from the outside. For true induction, the process itself should create its own goals. And there was a key constraint: Putnam realized that the dynamics he had in mind would only work mathematically if the system had just one goal governing all its behavior.

That’s when it hit him: The goal is to repeat. Repetition isn’t a goal that has to be programmed in from the outside; it’s baked into the very nature of things—to exist from one moment to the next is to repeat your existence. “This goal function,” Putnam wrote, “appears pre-encoded in the nature of being itself.”

So, here’s the game. The system starts out in a random mix of “on” and “off” states. Its goal is to repeat that state—to stay the same. But in each turn, a perturbation from the environment moves through the system, flipping states, and the system has to emit the right sequence of moves (by forming the right self-reinforcing loops) to alter the environment in such a way that it will perturb the system back to its original state.

Putnam’s remarkable claim was that simply by playing this game, the system will learn; its sequences of moves will become increasingly less random. It will create rules for how to behave in a given situation, then automatically root out logical contradictions among those rules, resolving them into better ones. And here’s the weird thing: It’s a game that can never be won. The system never exactly repeats. But in trying to, it does something better. It adapts. It innovates. It performs induction.
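
[ed. The dynamics of the repeat game are easier to feel in code than in Putnamese. Below is a minimal toy sketch of my own, not anything from Putnam's papers: it assumes a handful of named perturbations and moves, and stands in for his self-reinforcing loops with simple weighted sampling. Moves start as random thrashes; whichever one happens to restore the reference state gets reinforced.]

```python
import random

# Toy version of Putnam's "repeat game" (an illustrative reconstruction;
# the names and dynamics are assumptions, not Putnam's formalism).
# The system's single goal is to return to its reference state after each
# environmental perturbation. A move that happens to restore that state
# gets reinforced -- "wired in" as a conditioned rule.

STATE_SIZE = 8
PERTURBATIONS = {"A": {0, 1}, "B": {2, 5}, "C": {1, 6, 7}}  # units each flips
MOVES = dict(PERTURBATIONS)  # assume moves also flip units, so the move that
                             # mirrors a perturbation exactly undoes it

def flip(state, units):
    """Flip the binary units in `units` (flipping twice restores them)."""
    return tuple(s ^ 1 if i in units else s for i, s in enumerate(state))

goal = tuple(random.randint(0, 1) for _ in range(STATE_SIZE))  # state to repeat
weights = {(p, m): 1.0 for p in PERTURBATIONS for m in MOVES}  # conditioning

for _ in range(2000):
    p = random.choice(list(PERTURBATIONS))
    state = flip(goal, PERTURBATIONS[p])              # environment perturbs
    moves = list(MOVES)
    m = random.choices(moves, weights=[weights[(p, mv)] for mv in moves])[0]
    state = flip(state, MOVES[m])                     # system emits a move
    if state == goal:                                 # perturbation quieted:
        weights[(p, m)] += 1.0                        # the loop is reinforced

for p in PERTURBATIONS:
    best = max(MOVES, key=lambda mv: weights[(p, mv)])
    print(f"perturbation {p}: learned move {best}")  # thrashing becomes rule
```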

In paper after paper, Putnam attempted to show how his induction game plays out in the human brain, with motor behaviors serving as the mutually exclusive “moves” and neurons as the parallel binary units that link up into loops to move the body. The point wasn’t to give a realistic picture of how a messy, anatomical brain works any more than an abstract Turing machine describes the workings of an iMac. It was not a biochemical description, but a logical one—a “brain calculus,” Putnam called it.

As the game is played, perturbations from outside—photons hitting the retina, hunger signals rising from the gut—require the brain to emit the right sequence of movements to return to its prior state. At first it has no idea what to do—each disturbance is a neural impulse moving through the brain in search of a pathway out, and it will take the first loop it can find. That’s why a newborn’s movements start out as random thrashes. But when those movements don’t satisfy the goal, the disturbance builds and spreads through the brain, feeling for new pathways, trying loop after loop, thrash after thrash, until it hits on one that does the trick.

When a successful move, discovered by sheer accident, quiets a perturbation, it gets wired into the brain as a behavioral rule. Once formed, applying the rule is a matter of deduction: The brain outputs the right move without having to try all the wrong ones first.

But the real magic happens when a contradiction arises, when two previously successful rules, called up in parallel, compete to move the body in mutually exclusive ways. A hungry baby, needing to find its mother’s breast, simultaneously fires up two loops, conditioned in from its history: “when hungry, turn to the left” and “when hungry, turn to the right.” Deductive logic grinds to a halt; the facilitation of either loop, neurally speaking, inhibits the other. Their horns lock. The neural activity has no viable pathway out. The brain can’t follow through with a wired-in plan—it has to create a new one.

How? By bringing in new variables that reshape the original loops into a new pathway, one that doesn’t negate either of the original rules, but clarifies which to use when. As the baby grows hungrier, activity spreads through the brain, searching its history for anything that can break the tie. If it can’t find it in the brain, it will automatically search the environment, thrash by thrash. The mathematics of game theory, Putnam said, guarantee that, since the original rules were in service of one and the same goal, an answer, logically speaking, can always be found.

In this case, the baby’s brain finds a key variable: When “turn left” worked, the neural signal created by the warmth of the mother’s breast against the baby’s left cheek got wired in with the behavior. When “turn right” worked, the right cheek was warm. That extra bit of sensory signal is enough to tip the scales. The brain has forged a new loop, a more general rule: “When hungry, turn in the direction of the warmer cheek.”
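
[ed. The same toy machinery handles the tie-break, assuming we simply widen the key that conditioning is stored under; again a sketch of mine, not Putnam's notation.]

```python
import random

# Companion toy sketch (an illustration, not Putnam's formalism).
# Two equally reinforced rules, "hungry -> turn left" and "hungry ->
# turn right," deadlock on their own. Conditioning on an auxiliary
# sensory variable (which cheek is warm) breaks the tie and yields
# the more general rule.

weights = {}  # conditioning keyed on (drive, auxiliary variable, move)
MOVES = ["left", "right"]

for _ in range(500):
    warm = random.choice(MOVES)                   # mother is on this side
    w = [weights.get(("hungry", warm, m), 1.0) for m in MOVES]
    move = random.choices(MOVES, weights=w)[0]    # weighted thrashing
    if move == warm:                              # found the breast:
        key = ("hungry", warm, move)
        weights[key] = weights.get(key, 1.0) + 1.0

for warm in MOVES:
    best = max(MOVES, key=lambda m: weights.get(("hungry", warm, m), 1.0))
    print(f"hungry + warm {warm} cheek -> turn {best}")
```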

New universals lead to new motor sequences, which allow new interactions with the world, which dredge up new contradictions, which force new resolutions, and so on up the ladder of ever-more intelligent behavior. “This constitutes a theory of the induction process,” Putnam wrote.

In notebooks, in secret, using language only he would understand, Putnam mapped out the dynamics of a system that could perceive, learn, think, and create ideas through induction—a computer that could program itself, then find contradictions among its programs and wrangle them into better programs, building itself out of its history of interactions with the world. Just as Turing had worked out an abstract, universal model of the very possibility of computation, Putnam worked out an abstract, universal model of the very possibility of mind. It was a model, he wrote, that “presents a basic overall pattern [or] character of thought in causal terms for the first time.”

Putnam had said you can’t understand another person until you know what fight they’re in, what contradiction they’re working through. I saw before me two stories, equally true: Putnam was a genius who worked out a new logic of the mind. And Putnam was a janitor who died unknown. The only way to resolve a contradiction, he said, is to find the auxiliary variables that forge a pathway to a larger story, one that includes and clarifies both truths. The variables for this contradiction? Putnam’s mother and money.

by Amanda Gefter, Nautilus |  Read more:
Image: John Archibald Wheeler, courtesy of Alison Lahnston.
[ed. Fascinating. Sounds like part quantum physics and part AI. But it's beyond me.]

Monday, December 15, 2025

The Story of Art + Water

For fifteen years or so, I’d been kicking around the idea of resurrecting the artist-apprentice model that reigned in the art world for hundreds of years.

Again and again, I’d heard from young people who lamented the astronomical and ever-rising cost of art school. For many college-level art programs, the total cost to undergraduates is now over $100,000 a year. I hope we can all agree that charging students $400,000 for a four-year degree in visual art is objectively absurd. And this prohibitive cost has priced tens of thousands of potential students out of even considering undertaking such an education.

For years, I mentioned this issue to friends in and out of the art world, and everyone, without exception, agreed that the system was broken. Even friends I know who teach at art schools agreed that the cost was out of control, and these spiraling costs were contributing to the implosion of many undergraduate and postgraduate art programs.

Then I brought it up with JD Beltran, a longtime friend prominent in the San Francisco art scene, who herself was suffering under the weight of $150,000 in art-school debt, which she’d incurred in the late 1990s. She’d been carrying that debt for thirty years—for a degree in painting she got in 1998 from the San Francisco Art Institute—and together we started mapping out an alternative.

It’s important to note that the current model for art schools is very new. For about a thousand years, until the twentieth century, artists typically either apprenticed for a master artist, learning their trade by working in a studio, or attended loose ateliers where a group of artist-students studied under an established artist, and paid very little to do so. These students would help maintain the studio, they would hire models, they would practice their craft together, and the studio’s owner would instruct these students while still creating his own work—usually in the same building.

Somehow, though, we went from a model where students paid little to nothing, and learned techniques passed down through the centuries, to a system where students pay $100,000, and often learn very little beyond theory. A recent graduate of one of our country’s most respected MFA programs—not in the Bay Area—told me that in her third year as an MFA student, she paid over $100,000 in tuition and fees, and in exchange, she met with her advisor once every two weeks. That third year, there were no classes, no skills taught—there was only a twice-monthly meeting with this advisor. Each meeting lasted one hour. Over the course of that third year, she met with this advisor twenty times, meaning that each of these one-hour sessions cost the MFA student $5,000. And during these sessions, again, no hard skills were taught. It was only theory, only discussion. At the rate of $5,000 an hour (and of course her instructor was not the recipient of that $5,000/hr!), this seems to be an inequitable system in need of adjustment.

So JD Beltran and I started thinking of an alternative. For years, it was little more than idle chatter until one day in 2022, when I was biking around the Embarcadero, I happened to do a loop around Pier 29, and because one of its roll-top doors was open, I saw that it was enormous, and that it was empty.

JD and I started making inquiries with the Port of San Francisco, a government agency that oversees the waterfront. They’re the agency that helped the Giants’ ballpark get built, helped reopen the Ferry Building, and made it possible for the Exploratorium to relocate from the Palace of Fine Arts to its current location on the waterfront. In the forty years since the collapse of the wretched highway that used to cover the Embarcadero, the Port of SF has done great things to make that promenade a jewel of the city...

The core of our proposal was this: Ten established artists would get free studio space in the pier. At a time when all visual artists are struggling to find and keep studio space in this expensive city, this free studio space would help some of our best local artists stay local.

In exchange for this free studio space, these ten established artists would agree to teach a cohort of twenty emerging artists, who also would be given free studio space in the pier.

That was the core of the idea. Simple, we hoped. And it would bring thirty visual artists all to Pier 29, to learn from each other, and the emerging artists would get a world-class, graduate-level education. And because thirty artists would be occupying the pier, the staffing required to maintain the program would be minimal. The thirty resident artists would become caretakers of the space.

Thus began fourteen months of meetings, proposals, and permitting discussions. The Port’s staff were encouraging, because that part of the Embarcadero is a very quiet zone, with few restaurants or cafés—and those that are there struggle. (The famed Fog City Diner of Mrs. Doubtfire recently went under.) But finally, after fourteen months and thousands of hours put in by Art + Water and CAST, the Port and the City granted us a lease on Pier 29.

OUR NEW MODEL, WHICH IS A VARIATION ON THE OLD MODEL

For the educational component of the Art + Water program, I did some napkin math and discovered something so simple that I assumed it couldn’t work: If each of these ten established artists taught just three hours a week, together they would provide these twenty emerging artists with thirty hours of instruction per week. These three hours wouldn’t put too great a burden on any one of the established artists, but the accumulated knowledge imparted each week by these ten established—and varied, and successful—artists would be immeasurable. And they would be able to do it for free.

And because the thirty artists, established and emerging, would be sharing one pier, they’d be able to consult with each other regularly, even outside of class hours, and more mentorship and camaraderie would occur organically. (One of the strangest things about many advanced art-school programs is how distant the teachers’ and students’ studios are from each other. For hundreds of years, apprentices were able to see, and even participate in, the making of the established artists’ work. Now, that’s largely lost. Professors work across town, or in distant cities; the two practices are miles apart, and so much knowledge is never transferred. When BFA and MFA students are around only other students, they can’t see how successful working artists make their art, or indeed how they make a living.)

With Art + Water, the hope was that if these emerging artists had their studios right next to successful artists, they could see how the work was created, they could ask questions, and they could even assist (just as apprentices used to assist the master artists). Infinitely more knowledge would be transferred through this proximity than could ever be in a classroom-only program.

So when I did my 3 × 10 = 30 napkin math, JD Beltran, who had not only gotten an MFA from the San Francisco Art Institute but had also taught at SFAI, the California College of the Arts, SF State, and Stanford, shocked me by agreeing that my napkin math made sense to her, too. So we kept pressing on.

by Dave Eggers, McSweeney's |  Read more:
Image: McSweeney's
[ed. Great idea. Why did mentorships fall away?]

Home-Schooled Kids Are Not All Right

By my third year of home-schooling — in 1994, when I was 12 — Mom’s project of turning me back into an infant was nearly complete.

Ever since she’d pulled me out of school, she had been applying lighteners and hydrogen peroxide to restore my brownish hair to the bright blonde of its baby color. After reading that a crawling phase might help an infant develop fine motor control, she determined that, even at age 12, it might not be too late for me to crawl my way to better handwriting.

She had me crawl whenever I was at home, which was most of the time. Mom home-schooled me between fourth and eighth grades, and even today, as a parent who has come to see plainly how damaging those years were, I know that she believed that her choice was in my best interest.

It was the lack of state oversight or standards that allowed our situation. It was the laws that failed me. Today, as home-schooling numbers continue to surge, similar laws fail to protect millions of kids.

Mom called what we did “unschooling,” a concept championed by the home-schooling pioneer John Holt. She agreed with his assertion that “schools are bad places for kids,” or at least for a certain kind of kid; my brother Aaron, she decided, was better suited for public school and was sent off on the bus each morning.

I, on the other hand, was a “creative global learner,” and Mom said that she was going to give me a “free-form education” in order to “pursue passions.” Other than math, which I began to do by correspondence course, I mostly spent my days with her visiting shops, libraries, and restaurants of our rapidly growing suburb, or else having “project time”—drawing superheroes, rereading my David Macaulay and Roald Dahl books, or writing short stories by the pool as Mom reapplied my hair bleach.

Mom had been going through a hard time — ever since we’d moved to Plano, Texas, her social life was dim, her career as a children’s magazine editor had been put on hiatus, and her own mother had begun a long decline into dementia — but my presence by her side seemed to lift her spirits. “You are better than any grown-up, Stef. You are more than all I need,” she told me.

I felt proud to help her, but silently I worried. The longer I spent at home with her (Dad was at work five days a week), the more impossible it seemed that I might ever go back into the world. I knew how badly my return to school would hurt her, and increasingly school seemed to me a terrifying place. I’d mostly lost my friendships from my old school, and my few attempts to re-enter the land of other kids had been failures; after just a day or two at a Boy Scout camp, I’d actively tried to contract conjunctivitis so that I could be sent home early.

Sometimes, flipping through one of my brother’s old textbooks, I’d see how far behind I’d already fallen. But who could I speak with about any of that?

As the years passed, my isolation deepened. My mom needed to take on part-time work, so now I largely spent my afternoons alone in my room, where there was no one to witness the long AOL Instant Messenger romance I carried on with a supposed teenage girl, who in fact turned out to be an older sexual predator. No one noticed the track of scars I’d been making on my hip with the tip of a compass. No one saw how I’d spend countless hours alone in my room with a portable TV inches from my face, wanting to disappear into the worlds onscreen.

Not once, in the four and a half years I spent at home, did anyone from the state come to assess what sort of education I was receiving, or even just to check on me.

I didn’t know it at the time, but our home-school had fallen into a newly legislated invisible space, where a child could easily vanish from public view. For much of the 20th century, the law was essentially silent with regard to home-schooling. In the 1972 case Wisconsin v. Yoder, the Supreme Court granted Amish parents the right to withdraw their children from school after eighth grade due to their unique religious beliefs and practices, but it has never specifically addressed a constitutional right to home-schooling in general.

In the absence of any federal law or Supreme Court decision, home-schooling regulation was left to the states. In large part driven by fundamentalist Christian lobby groups like the Home School Legal Defense Association (H.S.L.D.A.), home-schooling had become formally legalized in nearly every state by the time Mom pulled me out of school in the 1990s. Over the following three decades, H.S.L.D.A. and an associated network of smaller organizations have been staggeringly successful in furthering their anti-regulation agenda and quashing dissent. Current home-school laws still differ by state, but in nearly all states the lack of oversight beggars belief.

In 48 states, registered sex offenders and adults convicted of crimes against minors can still home-school a child, effectively removing the child from the observation of other adults and peers, even if that child’s safety is under active investigation by child protective services. In 12 states (including Texas), parents aren’t required to submit any documentation to home-school. They can simply remove a child from school, and then they will no longer be subject to any mandatory state assessments or contact with officials. In another 17 states, families are required only to provide notice to the state of their intention to home-school, but they too face no state-mandated assessments.

In 19 of the 21 remaining states that do have laws requiring assessments of home-schooled children, the laws are not enforced in all home-schooling situations. In 49 states, home-schooling parents are not required to have their children screened for medical issues or ensure that they receive care. In 40 states, home-schooling parents are not required to have a high school diploma.

As the number of home-schoolers has surged — from around half a million when I was a kid to around 3.5 million since the pandemic — the country has passively endorsed a nationwide system of blind spots, where the fate of home-schooled children has been left almost entirely to their parents. States continue to allow parents to operate with little or no oversight, consigning the fates of millions of kids to the assumption that parents know best, even though evidence abounds that this is not always the case...

Legal definitions of abuse vary, but the choice to isolate a child from peers and outsiders seems to me plainly abusive. I would also characterize as abuse a parent’s decision to limit a child’s access to learning materials, or to indoctrinate a child into one mind-set or ideology without the possibility of other perspectives, or to willingly limit a child’s ability to function in a larger society.

Each home-school is different, and of course most home-schooling parents do not abuse or neglect their children. Indeed, for many parents, the choice to home-school is about prioritizing a child’s safety and better meeting a child’s special learning needs.

But what home-schooling experiences — good or bad — have in common is that they remove what schools provide: a place where children learn and are with one another, a place where adults outside the home interact with children and can intervene on a child’s behalf, and also a transparent, public minimum standard of education. (...)

In my research, I met a 45-year-old woman who was home-schooled in Mississippi in the late 1980s and 1990s, and her experience follows a woefully familiar pattern. When she was in second grade, she says, her parents found a simple way of avoiding the questions that her teachers might ask if they saw the bruises on her body: They simply removed her and her siblings from that school and so from the gaze of concerned adults. She says that once she and her siblings were behind the legal veil of home-schooling, their parents continued to beat them, locked away any educational materials in the house, and forced the children to spend their days doing chores on the property.

Those who oppose regulation claim that such cases are rare, and they rightfully argue that educational neglect and abuse happen at school as well. But we’ve created a system in which it’s impossible to know how common home-schooling abuse might actually be. Because home-schoolers in many states are not even required to officially register, proper data collection can be nearly impossible, and children who exist under the sovereign power of a home-schooling parent face enormous risks by speaking out.

Those in favor of home-schooling also point to a multitude of home-school successes under current laws, and certainly there are a great many. I agree with the pro-home-school lobby that the close attention of parent-educators and the student-interest-led learning model can teach children how to learn in creative and uncommon ways.

All of which makes me wonder why this same lobby fights so fiercely to keep these children from even minimal oversight. Indeed, one would expect these parents to invite outside assessments to demonstrate the success of their approach. At the very least, it’s hard to understand why home-schooling parents would not warmly welcome routine checks of health and wellness, in order to protect the children whose parents misuse home-schooling to abuse and isolate them.

My father and brother, reading my account of those years, tell me that they recognize the episodes I describe, but that even they — the closest observers of that time — did not know how alone and often lost I felt while they were at work and school, or how desperately I wished someone would put an end to the situation. “I just assumed your mom knew what was best,” Dad says. 

by Stefan Merrill Block, NY Times | Read more:
Image: Olivia Arthur/Magnum Photos
[ed. Boggles the mind. We have truancy laws for school attendance but no oversight or standardized testing for home-schooling?]

Thursday, December 11, 2025

Populism Fast and Slow

It is natural that a person who is both concerned by the rise of right-wing populism and possessed of a bookish disposition might turn to the academic political science literature in search of a better understanding of the phenomenon. Such a person is likely to be disappointed. It does not take much reading to discover that political scientists are quite conflicted. (One might take this review article to provide a decent snapshot of the relatively large academic literature on the subject.) There is a modest level of agreement about what populism is, but the most widely accepted definition is both superficial and misleading. That is inauspicious, as far as combating the forces of populism is concerned.

Most importantly, academics have not done a great job confronting the most confounding aspect of populism, which is that the more it gets criticized by intellectuals, the more powerful it becomes. As a result, most of us are still playing the same old game, with the same old strategies, without realizing that the metagame has changed.

It is not difficult to see where the academic discussion went wrong. An unfortunately large number of writers on populism were wrongfooted by the decision, made early on, to treat populism as a type of political ideology, along the lines of socialism or liberalism. This gave rise to an immediate puzzle, because populism seems to be compatible with a large number of other conventional political ideologies. In particular, it comes in both left-wing (e.g. Chavez) and right-wing (e.g. Bolsonaro) variants. So if populism is a political ideology, it’s a strange sort of ideology, because it doesn’t seem to exclude other views in the way that a conventional ideology does.

The most obvious alternative is to treat it as a strategy, used to gain specific advantage in a democratic electoral system. This is a more promising approach, but it also generates its own puzzles. If populism is merely a strategy, not an ideology, then why are certain ideas seemingly present in all populist movements (such as the hostility to foreigners, or the distrust of central banking)? And if it’s just an electoral strategy, why do populists rule the way they do? For example, why are they so keen on undermining the rule of law (leading to conflict with the courts, attempts to limit judicial independence, etc.)?

The solution that many people have settled on is to accept a watered-down version of the first view, treating populism as an ideology, but only a “thin” one. The most commonly cited definition is from Cas Mudde:
I define populism as an ideology that considers society to be ultimately separated into two homogeneous and antagonistic groups, “the pure people” versus “the corrupt elite,” and which argues that politics should be an expression of the volonté générale (general will) of the people.
The major problem with this definition is that it must be kept minimal in order to accommodate both the left-wing and right-wing flavours of populism, and as a result it is simply too minimal to explain many of the specific features of populist movements. For example, why are “the people” always conceptualized as a culturally homogeneous mass, even in the context of societies that are quite pluralistic (which forces the introduction of additional constructs, such as la France profonde, or “real Americans”)? Furthermore, reading the definition, it would seem as though the left should be able to get significant mileage out of populism, and yet throughout Europe the rise of populism has almost uniformly benefited the right.

A clue to the solution can be found in a further specification that is often made, with respect to this definition, which is that the “general will” of the people is not for any old thing, but takes the specific form of what is called “common sense.” The crucial feature of common sense, as Frank Luntz helpfully observed, is that it “doesn’t require any fancy theories; it is self-evidently correct.” (One can think of this as the primary point of demarcation between the people and the elites – the people have “common sense,” whereas elites subscribe to “fancy theories.”) This distinction, in turn, does not arise from the ideological content of a belief system, but rather from the form of cognition employed in its production. More specifically, it is a consequence of the distinction between what Daniel Kahneman referred to as “fast and slow” thinking. (...)

Analytical reasoning is sometimes a poor substitute for intuitive cognition. There is a vast literature detailing the hubris of modern rationalism. Elites are perfectly capable of succumbing to faddish theories (and as we have seen in recent years, they are susceptible to moral panics). But in such cases, it is not all that difficult to find other elites willing to take up the cause and oppose those intellectual fads. In specific domains, however, a very durable elite consensus has developed. This is strongest in areas where common sense is simply wrong, and so anyone who studies the evidence, or is willing to engage in analytical reasoning, winds up sharing the elite view. In these areas, the people find it practically impossible to find allies among the cognitive elite. This generates anger and resentment, which grows over time.

This reservoir of discontent creates the opportunity that is exploited by populist politicians. Democratic political systems are fairly responsive to public opinion, but they are still systems of elite rule, and so there are specific issues on which the people genuinely have not been listened to, no matter how angry or upset they got. This creates an incentive to do an end-run around elites, and around institutions dominated by elites (e.g. traditional political parties), in order to tap into this fund of resentment, positioning oneself as the champion of the people. What is noteworthy about populists is that they do not champion all of the interests of the people, but instead focus on the specific issues where there is the greatest divergence between common sense and elite opinion, in order to champion the views of the people on these issues.

Seen from this perspective, it is not difficult to see why populism can be an effective political strategy, and why it has become dramatically more effective in the age of social media. As one can tell from the title of Kahneman’s book, a central feature of intuitive cognition is that it is “fast,” while analytical reasoning is “slow.” This means that an acceleration in the pace of communication favours intuitive over analytical thinking. Populists will always have the best 30-second TV commercials. Social media further amplifies the problem by removing all gatekeepers, making it so that elites are no longer able to exercise any control over public communication. This makes it easy to circumvent them and appeal directly to the aggrieved segment of the population. The result is the creation of a communications environment that is dramatically more hostile to the analytical thinking style.

Working through the consequences of this, it is not difficult to see why the left has been unable to get much traction out of these changes, especially in developed countries. People are not rebelling against economic elites, but rather against cognitive elites. Narrowly construed, it is a rebellion against executive function. More generally, it is a rebellion against modern society, which requires the ceaseless exercise of cognitive inhibition and control, in order to evade exploitation, marginalization, addiction, and stigma. Elites have basically rigged all of society so that, increasingly, one must deploy the cognitive skills possessed by elites to successfully navigate the social world. (Try opening a bank account, renting an apartment, or obtaining a tax refund, without engaging in analytical processing.) The left, to the extent that it favours progress, is essentially committed to intensifying the features of the modern world that impose the greatest burdens of self-inhibition on individuals.

Seeing things in this way makes it easier to understand why people get so worked up over seemingly minor issues, like language policing. The problem with demanding political correctness in speech, and punishing or ostracizing those who fail, is that it turns every conversation into a Stroop test, allowing elites the opportunity to exhibit conspicuous self-control. It requires the typical person, while speaking, to actively suppress the familiar word that is primed (e.g. “homeless”), and to substitute through explicit cognition the recently-minted word that is now favoured (e.g. “unhoused”). Elites are not just insensitive, but positively dismissive of the burdens that this imposes on many people. As a result, by performing the cognitive operation with such fluidity, they are not only demonstrating their superiority, they are rubbing other people’s faces in it. (From this perspective, it is not surprising that the demand for “they/them” pronouns upset some people even more, because the introduction of a plural pronoun forces a verb change, which requires an even more demanding cognitive performance.)

This analysis explains why populism, despite being a mere strategy, also winds up having a characteristic ideological tone and content. The key is to see it as a political strategy that privileges a particular style of cognition. (...)

This privileging of intuitive (or System 1) cognition generates a set of diverse features that can be found in most populist movements. What follows is a non-exhaustive list:

1. Frustration with elites on specific issues. Crime is an ongoing source of frustration, in part because elites – even those who declare themselves “tough on crime” – believe that punishment should be imposed within a legal framework. This creates an opening for populist politicians like Rodrigo Duterte in the Philippines, who empowered the police to carry out summary executions, and Donald Trump in the U.S., who explicitly authorized a return to “street justice” by urban police forces, and has used the U.S. military to carry out summary executions (so far only in international waters). (...)

2. Collective action problems. Populists have never met a collective action problem that they did not feel inclined to make worse (e.g. climate change). That’s because, whenever something bad happens, there is an impulse to blame some other person, but in a collective action problem, the bad effects that you suffer genuinely are the fault of the other person! The catch is that the situation is symmetric — the bad effects they are suffering are your fault. Getting out of the situation therefore requires the cognitive insight that you must both stop, and that you must refrain from free-riding despite the incentives. Intuition, on the other hand, suggests that the correct response is to punish the other person, and since the best way to do this is typically by defecting, the intuitive response is just a formula for transforming a collective action problem into a race to the bottom. This is why civilizations collapse into barbarism and not the other way around.

3. Communication style. A very prominent feature of populist politicians is their speaking style, which has an unscripted, stream-of-consciousness quality (e.g. see Hugo Chavez’s Aló Presidente TV show, which one could also totally imagine Trump doing). This is important precisely because it is the opposite of the self-controlled, calculated speaking style favoured by mainstream politicians (which the French have the perfect term for: langue de bois). This is why populist politicians are perceived, by a large segment of the population, as being more “honest,” even when everything that comes out of their mouth is a lie. Elites typically focus on the content of what is said and ignore the manner in which it is said. Often this is because they themselves employ the controlled speaking style, and so are not bothered by others using it. And yet it is perfectly clear, when listening to Donald Trump, that what he is saying is exactly what he is thinking. Indeed, he obviously lacks the verbal self-inhibition required to speak in any other way. This is what leads people to trust him – especially if they are relying on intuitive cues, rather than analytic evaluation, to determine trustworthiness. (The use of vulgarity is another common tactic of populist politicians, to demonstrate their lack of verbal inhibition. Traditional politicians sometimes try to imitate this, without success, because they fail to realize that it is not the vulgarity, but rather the disinhibition, that achieves the important communicative effect.)

4. Illiberalism. Populists have great difficulty respecting the rule of law. If one listens to the explanations that they offer for their actions, a great deal of this reflects a bias toward concreteness in their thinking. They think the purpose of the rules is to stop bad people from doing bad things, but since they themselves are good people trying to do good things, they cannot see why they should be constrained by the rules. They have enormous difficulty treating themselves and the other political parties symmetrically. (Americans are currently being subjected to a non-stop display of this.) Unfortunately, as those of us who teach liberal political philosophy know, there is an essential feat of abstraction at the foundation of all liberal principles. John Stuart Mill described it as a rejection of the “logic of persecutors”: “that we may persecute others because we are right... but they must not persecute us because they are wrong.” (...)

5. Conspiracy theory. Many people have wondered why populists are so drawn to conspiracy theories, or “conspiracist” thinking. Again, this is a straightforward consequence of the privileging of intuitive thought. The natural bias of the human mind is toward belief in conspiracy theories, through a combination of apophenia, hyperactive agency-detection, and confirmation bias. Rational suspicion is achieved through the subsequent imposition of explicit test procedures, designed to eliminate false positives. In other words, it requires active suppression of conspiracist thoughts. To the extent that populists reject the style of cognition involved in that override, they open themselves up to a variety of irrational thought-patterns. When criticized by elites, many are inclined to double down on the conspiracism, because the cognitive style being pressed upon them is precisely what they hate most about elites.

by Joseph Heath, In Due Course |  Read more:
Image: Philip-Lorca diCorcia
[ed. See also: The prospects for left-wing populism (IDC):]
***
The crucial thing to understand about populism, and populist anger, is that it is a revolt directed against cognitive elites, not economic elites. Its centerpiece is the affirmation of “common sense” against the sort of “fancy theories” defended by intellectuals and their lackeys. (...)

From this analysis, one can also see why the Bernie/AOC “billionaires are bad” pitch is not genuine populism. The problem with criticizing inequality is that inequality is another abstraction, one that only intellectuals care about per se. There’s lots of research showing that most people have no idea what the distribution of income and wealth is in their society, in part because they don’t really care. What they do care about, first and foremost, is their own financial situation. To the extent that they are bothered by what others have, their attitudes are based on comparison to a specific reference group. They pick out an individual or group who is thought to be comparably situated to themselves (e.g. neighbours, high-school classmates, siblings, etc.), who then serve as a source of primary representations. They judge their own level of success and material comfort based on how well their situation compares to that of these people. (Hence the kernel of truth at the heart of H. L. Mencken’s observation that a truly wealthy man is one who earns more than his wife’s sister’s husband.)

The problem with complaining about Jeff Bezos’s yacht, or Elon Musk’s effective tax rate, as a political strategy, is that these people are completely outside the reference class of all but a small handful of Americans. As a result, their financial situation is completely incommensurable with that of the average person. It is very difficult to cultivate resentment, or any other strong feeling, by inviting people to contemplate an abstraction.

Wednesday, December 10, 2025

Are We Getting Stupider?

Stupidity is surprising: this is the main idea in “A Short History of Stupidity,” by the accomplished British critic Stuart Jeffries. It’s easy to be stupid about stupidity, Jeffries argues—to assume that we know what counts as stupid and who is acting stupidly. Stupidity is, more than anything else, familiar. (Jeffries quotes Arthur Schopenhauer, who wrote that “the wise in all ages have always said the same thing, and the fools, who at all times form the immense majority, have in their way, too, acted alike, and done just the opposite; and so it will continue.”) But it’s also the case, in Jeffries’s view, that “stupidity evolves, that it mutates and thereby eludes extinction.” It’s possible to write a history of stupidity only because new kinds are always being invented.

Jeffries begins in antiquity, with the ancient Greek philosophers, who distinguished between being ignorant—which was perfectly normal, and not all that shameful—and being stupid, which involved an unwillingness to acknowledge and attempt to overcome one’s (ultimately insurmountable) cognitive and empirical limitations. A non-stupid person, from this perspective, is someone who’s open to walking a “path of self-humiliation” from unknowing ignorance to self-conscious ignorance. He might even welcome that experience, seeing it as the start of a longer journey of learning. (To maintain this good attitude, it’s helpful to remember that stupidity is often “domain-specific”: even if we’re stupid in some areas of life, Jeffries notes, we’re capable in others.)...

For nineteenth-century writers like Gustave Flaubert, the concept of stupidity came to encompass the lazy drivel of cliché and received opinion; one of Flaubert’s characters says that, in mass society, “the germs of stupidity . . . spread from person to person,” and we end up becoming lemming-like followers of leaders, trends, and fads. (This “modern stupidity,” Jeffries explains, “is hastened by urbanization: the more dense a population is in one sense, the more dense it is in another.”) And the twentieth and twenty-first centuries have seen further innovations. We’re now conscious of the kinds of stupidity that might reveal themselves through intelligence tests or bone-headed bureaucracies; we know about “bullshit jobs” and “the banality of evil” and digital inundation. Jeffries considers a light fixture in his bedroom; it has a recessed design that’s hard to figure out, so he goes to YouTube in search of videos that might show him how to change the bulb. Modern, high-tech life is complicated. And so, yes, in a broad sense, we may very well be getting stupider—not necessarily because we’re dumber but because the ways in which we can be stupid keep multiplying.

“A Short History of Stupidity” doesn’t always engage with the question of whether the multiplication of stupidities is substantive or rhetorical. When Flaubert writes that people today are drowning in cliché and received opinion, is he right? Is it actually true that, before newspapers, individuals held more diverse and original views? That seems unlikely. The general trend, over the past few hundred years, has been toward more education for more people. Flaubert may very well have been exposed to more stupid thoughts, but this could have reflected the fact that more thoughts were being shared...

And yet, it seems undeniable that something is out of joint in our collective intellectual life. The current political situation makes this “a good time to write about stupidity,” Jeffries writes. When he notes that a central trait of stupidity is that it “can be relied upon to do the one thing expressly designed not to achieve the desired result”—or “to laughably mismatch means and ends”—he makes “stupid” seem like the perfect way to characterize our era, in which many people think that the key to making America healthy again is ending vaccination. Meanwhile, in a recent issue of New York magazine—“The Stupid Issue”—the journalist Andrew Rice describes troubling and widespread declines in the abilities of high-school students to perform basic tasks, such as calculating a tip on a restaurant check. These declines are happening even in well-funded school districts, and they’re part of a larger academic pattern, in which literacy is fading and standards are slipping.

Maybe we are getting stupider. Still, one of the problems with the discourse of stupidity is that it can feel reductive, aggressive, even abusive. Self-humiliation is still humiliating; when we call one another stupid, we spread humiliation around, whether our accusation is just or unjust. In a recent post on Substack, the philosopher Joseph Heath suggested that populism might be best understood as a revolt against “the cognitive elite”—that is, against the people who demand that we check our intuitions and think more deliberately about pretty much everything. According to this theory, the world constructed by the cognitive élite is one in which you have to listen to experts, and keep up with technology, and click through six pages of online forms to buy a movie ticket; it sometimes “requires the typical person, while speaking, to actively suppress the familiar word that is primed (e.g. ‘homeless’), and to substitute through explicit cognition the recently-minted word that is now favoured (e.g. ‘unhoused’).” The cognitive élites are right to say that people who think about things intuitively are often wrong; on issues including crime and immigration, the truth is counterintuitive. (Legal procedures are better than rough justice; immigrants increase both the supply and the demand for labor.) But the result of this has been that unreasonable people have hooked up to form an opposition party. What’s the way out of this death spiral? No one knows.

In 1970, a dead sperm whale washed up on the beach in Florence, Oregon. It was huge, and no one knew how to dispose of it. Eventually, the state’s Highway Division, which was in charge of the operation, hit upon the idea of blowing the carcass up with dynamite. They planted half a ton of explosives—that’s a lot!—on the leeward side of the whale, figuring that what wasn’t blown out to sea would disintegrate into bits small enough to be consumed by crabs and seagulls. Onlookers gathered to watch the explosion. It failed to destroy the whale, and instead created a dangerous hailstorm of putrid whale fragments. “I realized blubber was hitting around us,” Paul Linnman, a reporter on the scene, told Popular Mechanics magazine. “Blubber is so dense, a piece the size of your fingertip can go through your head. As we started to run down [the] trail, we heard a second explosion in our direction, and we saw blubber the size of a coffee table flatten a car.” (The video of the incident—which was first popularized by Dave Barry, after he wrote about it in 1990—is a treasure of the internet, and benefits from Linnman’s deadpan TV-news narration.)

There can be joy and humor in stupidity—think fail videos, reality television, and “Dumb and Dumber.” It doesn’t have to be mean-spirited, either. The town of Florence now boasts an outdoor space called Exploding Whale Memorial Park; last year, after a weeklong celebration leading up to Exploding Whale Day, people gathered there in costume. Watching the original video, I find myself empathizing with the engineer who conceived the dynamite plan. I’ve been there. To err is human. Intelligent people sometimes do stupid things. We all blow up a whale from time to time; the important point is not to do it again.

by Joshua Rothman, New Yorker |  Read more:
Image: markk
[ed. Stupider? Not so sure, but maybe in some cases. It could be just as likely that we've offshored our cognitive abilities and attention spans to social media, smartphones, streaming TV, and other forms of distraction (including AI), with no help from news media that dumb down nuance and detail in favor of engagement and clickbait algorithms. See also: The New Anxiety of Our Time Is Now on TV (HB).]