
Friday, January 30, 2026

Hawaiʻi Could See Nation’s Highest Drop In High School Graduates (CB)

Hawaiʻi is expected to see the greatest decline in high school graduates in the nation over the next several years, raising concerns from lawmakers and Department of Education officials about the future of small schools in shrinking communities.

Between 2023 and 2041, Hawaiʻi could see a 33% drop in the number of students graduating from high school, according to the Western Interstate Commission for Higher Education. The nation as a whole is projected to see a 10% drop in graduates, according to the commission’s most recent report, published at the end of 2024.

Image: Chart: Megan Tagami/Civil Beat; Source: Western Interstate Commission for Higher Education

Thursday, January 29, 2026

What is College For in the Age of AI?

When I left for college in the fall of 1991, the internet era was just beginning. By sophomore year, I received my first email address. By junior year, the first commercial web browser was released. The summer after graduation, I worked as a reporter at the Arizona Republic covering the internet’s rise in our everyday lives, writing about the opening of internet cafés and businesses launching their first websites. I was part of an in-between class of graduates who went off to college just before a new technology transformed what would define our careers.

So when Alina McMahon, a recent University of Pittsburgh graduate, described her job search to me, I immediately recognized her predicament. McMahon began college before AI was a thing. Three and a half years later, she graduated into a world where it was suddenly everywhere. McMahon majored in marketing, with a minor in film and media studies. “I was trying to do the stable option,” she said of her business degree. She followed the standard advice given to all undergraduates hoping for a job after college: Network and intern. Her first “coffee chat” with a Pitt alumnus came freshman year; she landed three internships, including one in Los Angeles at Paramount in media planning. There she compiled competitor updates and helped calculate metrics for which billboard advertisements the company would buy.

But when she started to apply for full-time jobs, all she heard back — on the rare occasions she heard anything — was that roles were being cut, either because of AI or outsourcing. Before pausing her job search recently, McMahon had applied to roughly 150 jobs. “I know those are kind of rookie numbers in this environment,” she said jokingly. “It’s very discouraging.”

McMahon’s frustrations are pretty typical among job seekers freshly out of college. There were 15 percent fewer entry-level and internship job postings in 2025 than the year before, according to Handshake, a job-search platform popular with college students; meanwhile, applications per posting rose 26 percent. The unemployment rate for new college graduates was 5.7 percent in December, more than a full percentage point above the national average and higher even than what high-school graduates face.

How much AI is to blame for the fragile entry-level job market is unclear. Several research studies show AI is hitting young college-educated workers disproportionately, but broader economic forces are part of the story, too. As Christine Cruzvergara, Handshake’s chief education-strategy officer, told me, AI isn’t “taking” jobs so much as employers are “choosing” to replace parts of jobs with automation rather than redesign roles around workers. “They’re replacing people instead of enabling their workforce,” she said.

The fact that Gen-Z college interns and recent graduates are the first workers being affected by AI is surprising. Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools. But a study from Stanford’s Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren’t feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, “learn tricks of the trade that maybe never get written down,” which allow them to better compete with AI than those new to a field who lack such “tacit knowledge.” For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.

For employers, AI also complicates an already delicate calculus around hiring new talent. College interns and recent college graduates require — as they always have — time and resources to train. “It’s real easy to say ‘college students are expensive,’” Simon Kho told me in an interview. “Not from a salary standpoint, but from the investment we have to make.” Until recently, Kho ran early career programs at Raymond James Financial, where it took roughly 18 months for new college hires to pay off in terms of productivity. And then? “They get fidgety,” he added, and look for other jobs. “So you can see the challenges from an HR standpoint: ‘Where are we getting value? Will AI solve this for us?’”

Weeks after Stanford’s study was released, another by two researchers at Harvard University also found that less experienced employees were more affected by AI. And it revealed that where junior employees went to college influenced whether they stayed employed. Graduates from elite and lower-tier institutions fared better than those from mid-tier colleges, who experienced the steepest drop in employment. The study didn’t spell out why, but when I asked one of the authors, Seyed Mahdi Hosseini Maasoum, he offered a theory: Elite graduates may have stronger skills; lower-tier graduates may be cheaper. “Mid-tier graduates end up somewhat in between — they’re relatively costly to hire but not as skilled as graduates of the very prestigious universities — so they are hit the hardest,” Maasoum wrote to me.

Just three years after ChatGPT’s release, the speed of AI’s disruption of the early career job market is even catching the attention of observers at the highest level of the economy. In September, Fed chair Jerome Powell flagged the “particular focus on young people coming out of college” when asked about AI’s effects on the labor market. Brynjolfsson told me that if current trends hold, the impact of AI will be “quite a bit more noticeable” by the time the next graduating class hits the job market this spring. Employers already see it coming: In a recent survey by the National Association of Colleges and Employers, nearly half of 200 employers rated the outlook for the class of 2026 as poor or fair, the most pessimistic outlook since the first year of the pandemic.

The upheaval in the early career job market has caught higher education flat-footed. Colleges have long had an uneasy relationship with their unofficial role as vocational pipelines. When generative AI burst onto campuses in 2022, many administrators and faculty saw it primarily as a threat to learning — the world’s greatest cheating tool. Professors resurrected blue books for in-classroom exams and demanded that AI tools added to software be blocked in their classes.

Only now are colleges realizing that the implications of AI are much greater and are already outrunning their institutional ability to respond. As schools struggle to update their curricula and classroom policies, they also confront a deeper problem: the suddenly enormous gap between what they say a degree is for and what the labor market now demands. In that mismatch, students are left to absorb the risk. Alina McMahon and millions of other Gen-Zers like her are caught in a muddled in-between moment: colleges only just beginning to think about how to adapt and redefine their mission in the post-AI world, and a job market that’s changing much, much faster.

What feels like a sudden, unexpected dilemma for Gen-Z graduates has only been made worse by several structural changes across higher education over the past decade.

by Jeffrey Selingo, Intelligencer | Read more:
Image: Intelligencer; Photos: Getty

Friday, January 9, 2026

Why I Fell For Transcendental Meditation

We might consider yogic flying the crowning oddity of transcendental meditation (TM), a practice that promises higher states of consciousness as well as a happier, calmer, more productive daily life. The basics of TM are not particularly out there – a 15- to 20-minute meditation, twice a day, in which you silently repeat a mantra to yourself. But for those who want to take things to the next level, the “TM-Sidhi program” taught by the Maharishi Foundation (which runs the Peace Palace) allows meditators to go even deeper – culminating in what I witness in the men’s flying hall. And this is only the first of three stages of yogic flying (though it is the only one anyone has verifiably managed to achieve). In the second stage, you briefly hover above the ground; in the third, you actually… move through the air.

It is a most curious ending to my three-night retreat at the Peace Palace, which I am undertaking having started to practise TM two months before.
 
I turn up to my first session at the Foundation’s London headquarters with a collection of items I have been asked to bring along – two pieces of sweet fruit, some freshly cut flowers, a new white handkerchief – and press the buzzer on which I find a little label: “TM – a simple effortless effective meditation for everyone.”

A bald Russian man opens the door, looking more finance bro than guru in smart jeans, a pink shirt and a black gilet. His name is Pavel Khokhlachev and he will be my teacher. An interpreter, he is also “the voice of Putin on Sky News”, he tells me. He brings me down into the basement, past a little shrine to Maharishi Mahesh Yogi, the man who brought TM to the west in the late 1950s (both the meditation technique itself and the yogic flying are ancient Vedic practices), and into a room containing a couple of chairs and an altar covered in a gold-trimmed white cloth. Above us looms a large picture of the Hindu monk Brahmananda Saraswati, more commonly referred to as Guru Dev, who was Maharishi’s teacher.

Khokhlachev begins by performing a little ceremony, which I am told to keep confidential, and I am given my mantra, which I am also told I must never share with anyone. The mantra is a Sanskrit sound that does not convey any meaning. It is allocated to me using a system that is kept secret but which also comes from India’s ancient Vedic religion. The idea is that repeating it will allow some reprieve from one’s mental chatter – Khokhlachev likens it to giving a puppy something to chew on so that it doesn’t chew up your furniture. We sit down on the chairs and I do my first meditation. Unlike in some other meditation practices, in TM you don’t need to sit up poker straight or in lotus position to practise; you just need to be comfortable. If you have an itch, you can scratch it. If you want to cross your legs around the other way, you can. Even if you find yourself thinking, that’s also fine; thoughts aren’t the enemy. Just “innocently return to the mantra”, Khokhlachev tells me. The idea is that it should all feel easy, simple, effortless. If it doesn’t, you’re doing something wrong.
 
Like many people, I was drawn to TM by David Lynch, the filmmaker and artist who would have turned 80 on 20 January (the one-year anniversary of his death is five days before that). Lynch practised TM for more than 50 years and devoted much of the last two decades of his life to promoting it, setting up his own foundation in 2005 to fund its teaching in schools and to at-risk populations around the world.

Lynch’s passion notwithstanding, I have always suspected TM to be a bit of a cult. Even the fact that it’s abbreviated to TM has always felt a bit off to me, somehow. I was quite ready for this piece to be an exposé of what a scam the whole thing is.
 
But while I can’t say I immediately feel the same level of bliss that some describe during my first meditation, something does happen that takes me by surprise. Suddenly, it’s like I’ve fallen down a hole – a very nice, quiet, relaxing hole. And the strangest thing is that it feels somehow… familiar. It’s as if I have fallen asleep, and yet I am wide awake. Some people have described it as “falling awake”. I describe my experience to Khokhlachev, and he tells me it sounds like I transcended. I leave the centre feeling most pleased with myself.
 
Over the four days of consecutive sessions – the introductory course is priced between £295 and £725 depending on one’s earnings – we continue to discuss and refine my TM technique. After my first successful session, I find it harder to access the transcendent for the next few days but I’m told not to worry. “We should come to the meditation with no anticipation and no expectation,” Khokhlachev advises. “Don’t chase the transcendence, because then it’s not innocent.”

How is this form of meditation really different from any other? Bob Roth, CEO of the David Lynch Foundation, who has taught TM to Oprah Winfrey, Tom Hanks, Jerry Seinfeld and Sting, as well as many thousands of others, tells me that there are three different meditation techniques that all have measurably different effects on the brain. There’s focused attention, such as when you concentrate on your breath, which produces gamma waves such as you might see if you were solving a complex maths problem. Open monitoring, in which you observe your thoughts coming and going in a non-judgmental way, which generates calming theta brain waves, such as we experience just before we dream. And then there’s this one, “automatic self-transcending”, which produces “alpha coherence” – increased and synchronised activity across the brain. Scientists call this “restful alertness”; some TM practitioners call it “pure consciousness”. The idea is that it has a twofold effect: the lovely feeling of transcendence while you are in it, and then the extra energy, clarity and creativity you are left with. When you have a really good meditation, the time really flies.
 
Research has demonstrated that transcendental meditation specifically has strong positive effects on a whole range of conditions. In 2013, the American Heart Association formally recognised TM as a complementary technique for reducing blood pressure and cardiovascular risk, and noted its association with a reduced risk of heart attack, stroke and death in patients with heart disease. Other studies have shown TM significantly reduces anxiety and stress more effectively than other relaxation or meditation techniques, while long-term practitioners have been found to have increased cognitive clarity, memory and emotional resilience. 

After about a month of practising TM, I start finding it easier to “transcend” – I begin to reach that place most times that I do it (although not every time). I’m struck by how much more focused I am for several hours after meditating, and how much energy it gives me – meditating in the morning sets me up for the day; meditating in the afternoon feels a bit like having a nap, but more powerful and without the grogginess. It isn’t just a vague feeling, either: according to my Fitbit, during meditation my heart rate tends to drop a beat below its lowest rate during my nightly sleep.
 
I was not expecting any of this to happen. I have meditated before and found it helpful for reducing anxiety and putting things into perspective. But I haven’t ever found it transformational in this way. I have also always found doing it a bit of an effort – something I should be doing – whereas now, most of the time, I relish the chance to do it. Lynch said that he never missed a single one of his twice-daily sessions and, inspired by him, I have so far kept a clean record, though admittedly not always for the full 20 minutes. I would suggest, tentatively, that TM might be a gamechanger.

by Jemima Kelly, Financial Times/AT | Read more:
Image: Getty
[ed. I took up TM in the early 70s (but just an occasional practitioner now). Everything described here is exactly how the TM experience feels. Highly recommended.]

Thursday, January 8, 2026

Fossil Words and the Road to Damascus


Caravaggio, The Conversion of Saint Paul
via:
[ed. Fossil word(s). When a word is broadly obsolete but remains in use due to its presence in an idiom or phrase. 

For example, I've always understood the phrase Road to Damascus to be a sort of epiphany or form of enlightenment (without knowing what it actually meant). Another example would be Crossing the Rubicon (a point of no return; or decision with no turning back). Of course, these aren't outdated words/phrases as much as shorthand for mental laziness (or trite writing habits). Wikipedia provides a number of examples of actual fossil words, including "much ado about nothing" or "without further ado" (who uses ado in any other context these days?); or "in point", as in "a case in point", or "in point of fact". So, to help promote a little more clarity around here -- Road to Damascus:]
***
The conversion of Paul the Apostle was, according to the New Testament, an event in the life of Saul/Paul the Apostle that led him to cease persecuting early Christians and to become a follower of Jesus. Paul, who also went by Saul, was "a Pharisee of Pharisees" who "intensely persecuted" the followers of Jesus. Paul describes his life before conversion in his Epistle to the Galatians:
For you have heard of my previous way of life in Judaism, how intensely I persecuted the church of God and tried to destroy it. I was advancing in Judaism beyond many of my own age among my people and was extremely zealous for the traditions of my fathers...
As he neared Damascus on his journey, suddenly a light from heaven flashed around him. He fell to the ground and heard a voice say to him, "Saul, Saul, why do you persecute me?"

"Who are you, Lord?" Saul asked.

"I am Jesus, whom you are persecuting," he replied. "Now get up and go into the city, and you will be told what you must do."

The men traveling with Saul stood there speechless; they heard the sound but did not see anyone. Paul got up from the ground, but when he opened his eyes he could see nothing. So they led him by the hand into Damascus. For three days he was blind, and did not eat or drink anything.

— Acts 9:3–9

If You Give a Mouse a Cookie

Illustrations: Felicia Bond
[ed. For future reference. Wish I'd known about this book (and series) when my granddaughter was a bit younger, but maybe it's not too late (still seven, but she's growing up fast).]

Wikipedia Style Guide

Many people edit Wikipedia because they enjoy writing; however, that passion can result in overlong composition. This reflects a lack of time or commitment to refine an effort through successively more concise drafts. With some application, natural redundancies and digressions can often be eliminated. Recall the venerable paraphrase of Pascal: "I made this so long because I did not have time to make it shorter." [Wikipedia: tl;dr]

Inverted pyramid

Some articles follow the inverted pyramid structure of journalism, which can be seen in news articles that get directly to the point. The main feature of the inverted pyramid is placement of important information first, with a decreasing importance as the article advances. Originally developed so that the editors could cut from the bottom to fit an item into the available layout space, this style encourages brevity and prioritizes information, because many people expect to find important material early, and less important information later, where interest decreases. (...)

What Wikipedia is not

Wikipedia is not a manual, guidebook, textbook, or scientific journal. Articles and other encyclopedic content should be written in a formal tone. Standards for formal tone vary depending upon the subject matter but should usually match the style used in Featured- and Good-class articles in the same category. Encyclopedic writing has a fairly academic approach, while remaining clear and understandable. Formal tone means that the article should not be written using argot, slang, colloquialisms, doublespeak, legalese, or jargon that is unintelligible to an average reader; it means that the English language should be used in a businesslike manner (e.g. use "feel" or "atmosphere" instead of "vibes").

News style or persuasive writing

A Wikipedia article should not sound like a news article. Especially avoid bombastic wording, attempts at humor or cleverness, over-reliance on primary sources, editorializing, recentism, pull quotes, journalese, and headlinese.

Similarly, avoid persuasive writing, which has many of those faults and more of its own, most often various kinds of appeals to emotion and related fallacies. This style is used in press releases, advertising, editorial writing, activism, propaganda, proposals, formal debate, reviews, and much tabloid and sometimes investigative journalism. It is not Wikipedia's role to try to convince the reader of anything, only to provide the salient facts as best they can be determined, and the reliable sources for them.


via: Wikipedia: Writing better articles
Image: Benjamin Busch/Import Projects - Wikimedia commons 
[ed. In celebration of Wikipedia Day (roughly Jan. 15). It's easy to forget how awesome this product really is: a massive, free, indispensable resource tended to by hundreds (thousands?) of volunteers simply for altruistic reasons. The best of the internet (and reminder of what could have been). See also: Wikipedia:What Wikipedia is not]

Tuesday, January 6, 2026

1912 Eighth Grade Exam


8th grade graduation exam from 1912 (ages 13-14). Rural Kentucky.
via: Bullitt County History Museum
[ed. I'd just settle for stronger civics, history, reading, and home/personal economics lessons. And, most importantly, instilling a love of learning over rote memorization. See also: No, You’re Probably Not Smarter Than a 1912-Era 8th Grader (Smithsonian).]

Take the Messy Job

I am often approached by students and other young people for advice about their careers. In the past, my answers were often based on a piece of advice I myself got from Bengt Holmstrom: “when in doubt, choose the job where you will learn more.” In the last few years, there is now a new variable to consider: the likelihood that artificial intelligence will automate all or large pieces of the job you do. Given that, what should a student choose today? The answers below are motivated by a book on artificial intelligence and the organization of work on which I am currently working with Jin Li and Yanhui Wu.

One way of thinking about this is that all knowledge work varies along one important spectrum: messiness. On one end, there is one defined task to execute, say helping clients file their taxes. You get the expenses and payslips by email, you use some rules to put them on a form, you obtain a response. Over time, you become better at this task, and get a higher salary. On the other end of the spectrum, there is a wide bundle of complex tasks. Running a factory, or a family, involves many different tasks that are very hard to specify in advance.

The risk of the single-task job is that artificial intelligence excels at single tasks. Humans are still often in the loop, since the rate of errors in many fields is still too high to allow for unsupervised artificial intelligence. But the rate of errors is rapidly decreasing. (...)

The result is that workers with simple tasks will become continuously more productive (and richer), until their work is worth nothing. A junior customer support agent gets more and more effective while the AI provides her with the accumulated knowledge of senior customer support agents, as in the recent Brynjolfsson, Li, and Raymond (2025) paper, until the AI is good enough that she can be replaced. (...)

The end of work? Not so fast

The other option is to go for a messy job, where the output is the product of many different tasks, many of which affect each other.

The head of engineering at a manufacturing plant I know well must decide whom to hire, which machines to buy, and how to lay them out in the plant, then negotiate the proposed solutions with the workers and the higher-ups and mobilise the resources to implement them. That job is extraordinarily hard to automate. Artificial intelligence commoditizes codified knowledge: textbooks, proofs, syntax. But it does not interface in a meaningful way with local knowledge, where a much larger share of the value of messy jobs is created. Even if artificial intelligence excelled at most of the single tasks that make up her job, it could not walk the factory floor to cajole a manager into redesigning a production process.

A management consultant whose job consists entirely of producing slide decks is exposed. A consultant who spends half of her time reading the room, building client relationships, and navigating organizational politics has a bundle AI cannot replicate.

In 2016, star AI researcher Geoffrey Hinton leaped from the automation of reading scans to the automation of the full radiologist’s job, and advised that we stop training radiologists. But even fields that can look simple from the outside, like radiology, can be quite messy. A small study from 2013 (cited in this Works in Progress article) found that radiologists only spend 36 percent of their time looking at scans. The rest is spent talking to patients, training others, and talking with the nurses and doctors treating the patient.

A radiologist’s job is a bundle. You can automate reading scans and still need a radiologist. The question is not whether AI can do one part of your job. It is whether the remaining parts cohere in a manner that justifies a role.

To me, a key characteristic of these “messy jobs” is execution. Execution is hard because it faces the friction of the real world. Consider a general contractor on a building site. Artificial intelligence can sketch a blueprint and calculate load-bearing requirements in seconds. That is codified knowledge. But the contractor must handle the delivery of lumber that arrived late, the ground that is too muddy to pour concrete, or the bickering between the electrician and the plumber.

Or consider the manager in charge of post-merger integration at a corporation. Again, the algorithm will map financial synergies and redraw org charts, but it will not have the “tribal” knowledge required to merge two distinct cultures and have the tact to prevent an exodus.

Corporate law is increasingly vulnerable to automation because contracts are essentially code, but I would expect trial attorneys to subsist.

AI implementation itself could be the ultimate messy job. Improvements will require drastically changing existing workflows, a process that will be resisted by internal politics, fear, and legacy business models. For instance, law firms have always relied on “billable hours” to charge clients, a concept that will be useless in an AI world. But this organizational inertia is a gift: the transformation will be messier and more delayed than the charts suggest and it will require a lot of consultants, managers and workers, well versed in what AI can do, but with sufficient domain knowledge to know how to use it and how to redefine the process.

In the extreme instances, the feared AI transformation may not take place. Jobs defined by empathy, care, and real-time judgment will become the economy’s ‘luxury goods.’ In these fields, artificial intelligence is not your competitor; it generates the wealth (and lowers the costs of goods and services) that will fund your higher wages.

by Luis Garicano, Silicon Continent |  Read more:
Image: uncredited via

Friday, January 2, 2026

A Tale of Two College Towns

I began life in a Michigan college town, and I may spend the rest of it in another one. It surprises me to put the matter this way, because the two places do not seem similar: Alma, a small town far too vulnerable to globalization and deindustrialization, and Ann Arbor, a rich city that seems, at first glance, far too insulated from everything. One of Michigan’s lovable qualities, of course, is its tendency to transform across relatively small distances: the beach towns to the west seem to belong to another order of things than the picturesque or dingy farm towns only so many miles to the interior, the Upper Peninsula constitutes its own multiple worlds, and so on. Still, the two towns feel particularly dissimilar. You could reduce them to battling stock personages in any number of morality plays: red vs. blue America, insular past vs. centerless future, one awful phase of capitalism vs. some later awful phase of it. At least, you could do that until very recently—less than a year ago, as I write this. Now, as we’ll see, they face the same axe.

“College town” is one of those terms that is useful because it’s somewhat empty. Or, more generously, it’s a handle for many sorts of cargo. Historian Blake Gumprecht, setting out to survey The American College Town in his 2008 study by that name, suggests that the name properly applies to any town where “the number of four-year college students equals at least 20 percent of a town’s population.” Gumprecht admits that this cutoff is “arbitrary.” The next scholarly book that I was able to find on the subject uses a somewhat more expansive definition:
Traditionally, Americans have viewed college towns as one of three principal kinds or a combination of the three. The first is a campus closely connected to a city or town and within its boundaries. In the second, the campus “is located next to a city or town but remains somewhat separate from it.” In the view of architect William Rawn, Yale would be an example of the first type, and the University of Virginia, on the edge of Charlottesville, of the second. Finally, perhaps the most common type of college town is one in which the college or university may be near a locality yet essentially unconnected to it. Duke and Rice Universities are offered by Rawn as examples of this model.
To which I say: Rice? Rice in Houston? That Rice? If the biggest city in Texas is a “college town,” then everywhere is. Better to be a little arbitrary.

The Pervading Life

Between the too-arbitrary and the too-expansive, there is the conveniently vague. For Wikipedia, the college town is one where an institution of higher learning “pervades” the life of the place. Good enough. I like this verb, “pervade.” In cities or towns that have enough other things going on—places we wouldn’t, or shouldn’t, call “college towns”—it’s rather the place that pervades the school. (...)

What is it like to be pervaded by a college? Alma College is a prototypical small liberal-arts college, or SLAC: founded in the late nineteenth century, a vestigially Protestant institution still somewhat attached to a mainline denomination (the Presbyterian Church, USA). It has a pretty campus with a decent amount of green space, human-scale class sizes, and a handful of reasonably famous alums. The only SLAC-standard quality it misses is a rumored former Underground Railroad stop, such as you would find at Knox College or Oberlin—both the town and the college came along too late for that.

My impression is that it’s an excellent school, slightly overpriced for the location. The only parts of Alma College that I can really vouch for are the library, where I first read about the films of Akira Kurosawa, and the bookstore, where I bought a tape of the self-titled third Velvet Underground album, far too young in both cases, and therefore at the perfect time. In the summers, its weight room was so easy for us local high schoolers to sneak into that I suspect the ease was intentional on someone’s part—another small act of gown-to-town benevolence. I never paid tuition to the place, but for these reasons, I will die in a minor and unpayable sort of debt to it. At its best, the small college in a small college town functions this way for the nonstudent residents, as a slightly mysterious world within the world that, while pursuing its own ends, expands everyone’s sense of what is possible. The college calendar makes a pleasant polyrhythm against the calendar of the seasons, the schedule of the high-school football team, and the motorik pulse of daily nine-to-five town life.

Someone Else’s Utopia

For this to happen at all, the college has to be its own distinct place, present and familiar but in some ways opaque. The small liberal-arts college, whatever else it is, is always the hopelessly scrambled remains of someone else’s Utopia. It’s a carved-out community where a group of students and teachers try to figure out what it would mean to give some transcendent idea—Plato’s forms, Calvin’s God, Newton’s law-abiding universe, the revivalist blessed community of the early-nineteenth-century abolitionists—its proper place in daily life. (...)

As a kid, I learned about town-gown tension from the movie Breaking Away (1979), in which Indiana University frat boys have nothing better to do than start riots with the town boys and everyone is inexplicably devoted to bicycle racing. As a sports movie, a romantic comedy, and a bildungsroman, and as a testament to the odd, flat beauty of the Midwest, Breaking Away holds up fabulously and always will. Nobody should mistake it for a sociological treatise. I read the college boys in the movie as almost exact stand-ins for the meanest of my middle-school classmates and never noted the contradiction. The kids who most plagued me were not necessarily college bound—although, at that age, I didn’t think that I was, either.

There must have been town-gown tension between the place where I grew up and the liberal-arts college I didn’t go to, but it was off my radar. The one incident I remember sharply is far more ambiguous in its implications than “the townies were uncivilized” or “the students were snobby.” Like many of the most pleasant memories I have of my adolescence, it involves a gas station more or less right in the middle of town, where, I know not how, one of the smart, underachieving stoners of my acquaintance found a job as a cashier. He promptly secured a job for another smart, underachieving stoner, whereupon the place became, for months, until management cracked down, an intellectual and cultural salon for my town’s smart, underachieving stoners and also their goody-goody churchgoing friends who did not smoke. You would drink fountain soda at employee-discount rates while listening to David Bowie and Phish on the tape player: What, if you had no girlfriend, could be more urgent than this?

One night, I was having a heart-to-heart with yet another of these fellows, a talented visual artist who looked like Let It Be–era John Lennon after a good shave, when a group of college-age women we didn’t know—therefore, students—walked past us. They were loud, probably drunk. One of them turned and looked at us, flashed us her rear, then kept on walking, without addressing a word to us.

What did this gesture mean? Contempt was encoded in it, obviously. (Only in male fantasy and pop culture—but I repeat myself—could mooning qualify as flirtation.) Two teenagers with nowhere more interesting to sit on a weekend evening than the stoop outside a gas station: Let us remind them of what they will never have access to. We looked, to them, like people who at best would study accounting at Davenport University, or “business” at Lansing Community College, or who would answer one of those once-ubiquitous TV ads imploring us to enjoy the freedom of the independent trucker. These young women, hemmed in on all sides by the threat of male sexual violence, wanted a safe way to test the boundaries of that hemming-in and correctly judged the two of us as no threat to the four of them: That is a somewhat more sympathetic, Dworkinite reading of the situation, and probably true. But either way, the gesture was baldly classist, an exercise of power. There is no reading of it that is not an insult; you can make it somewhat better only by thinking of it as misdirected revenge on the many guys who had probably insulted them.

On this score, I’m not sure our flasher was successful. My friend’s response to her briefly visible, panty-clad buttocks was one of the most emotional displays I have ever seen, so total as to make one question the idea that even the rawest physical desire is necessarily simple or shallow. For a moment, he was wonder-struck and said nothing, merely looked at me as though we had both just seen a UFO and he needed me to confirm it. Then, long after the women had walked away, he began to apostrophize them, in a voice as full of longing as Hank Williams’s: “Please come back. I’ll pay you. I have a bag of weed in my pocket,” and so on. There are many ways to expand a person’s sense of what’s possible.

In this moment, I knew myself, really for the first time, as a townie. Within a few years, I had already shaken off that identity. So, I think, did my friend. It takes all the sting out of being a townie when it is an option rather than a fate. We, like untold millions of others, were both able to move back and forth between town and gown because Americans effected a fundamental change in our sense of who college is for. What is most striking about the threefold typology of American college students offered in Helen Horowitz’s much-cited Campus Life (1987) is that, today, most college students are—her word—“outsiders”:
The term college life has conventionally been used to denote the undergraduate subculture presumably shared by all students. My study clarifies that college life, in fact, is and has been the world of only a minority of students.
by Phil Christman, Hedgehog Review | Read more:
Image: markk

Monday, December 29, 2025

Woodshedding It

[ed. Persevering at something even though you suck at it.]

Generally speaking, we have lost respect for how much time something takes. In our impatient and thus increasingly plagiarized society, practice is daunting. It is seen as prerequisite, a kind of pointless suffering you have to endure before Being Good At Something and Therefore an Artist instead of the very marrow of what it means to do anything, inextricable from the human task of creation, no matter one’s level of skill.

Many words have been spilled about the inherent humanity evident in artistic merit and talent; far fewer words have been spilled on something even more human: not being very good at something, but wanting to do it anyway, and thus working to get better. To persevere in sucking at something is just as noble as winning the Man Booker. It is self-effacing, humbling, frustrating, but also pleasurable in its own right because, well, you are doing the thing you want to do. You want to make something, you want to be creative, you have a vision and have to try and get to the point where it can be feasibly executed. Sometimes this takes a few years and sometimes it takes an entire lifetime, which should be an exciting rather than a devastating thought because there is a redemptive truth in practice — it only moves in one direction, which is forward. There is no final skill, no true perfection.

Practice is in service not to some abstract arbiter of craft, the insular juries of the world, the little skills bar over a character’s head in The Sims, but to you. Sure, practice is never-ending. Even Yo-Yo Ma practices, probably more than most. That’s also what’s so great about it, that it never ends. You can do it forever in an age where nothing lasts. Nobody even has to know. It’s a great trick — you just show up more improved than you were before, because, for better or for worse, rarely is practice public.

by Kate Wagner, The Late Review |  Read more:

Sunday, December 28, 2025

How NIL is Failing College Sports

Editor’s Note (September 2025): This article was first published in May 2025. Since then, NIL controversies have only grown—lawsuits over transfers, new collective rules, and court rulings are fueling even more debate. The problems outlined below remain at the heart of the chaos.

When the NCAA implemented its interim policy on Name, Image, and Likeness (NIL) in July 2021, it was heralded as a long-overdue victory for student-athletes. Finally, college athletes could monetize their personal brands while maintaining eligibility. But three years in, the reality of NIL has exposed deep, structural problems that threaten the very foundation of college sports.

Far from the fair, equitable system its proponents envisioned, NIL has morphed into a thinly veiled pay-for-play scheme dominated by wealthy donors, corporate interests, and an increasingly professionalized amateur sports landscape that’s leaving many athletes and institutions behind.

NIL Is Bad in Its Current Form, But the Concept Isn’t

Let’s be clear: this is not to say NIL is all bad. The core principle—that athletes deserve compensation for the use of their name, image, and likeness—remains valid and important. Student-athletes absolutely deserve to get paid. But this implementation ain’t it.

The problem is the execution. NIL went from zero to 200 MPH overnight with no guardrails. It’s like giving someone a supercar capable of high speeds and letting them drive it through downtown at rush hour. Just because a car can go that fast doesn’t mean it should outside of a sanctioned and governed NASCAR race. Similarly, NIL needed careful implementation with proper rules and oversight—not the free-for-all we’re currently witnessing.

NIL Is Bad for Creating the Collective Problem: Pay-for-Play in Disguise

The most troubling development in the NIL era has been the rise of “collectives” – donor-organized groups that pool money to facilitate NIL deals for athletes at specific schools. These collectives have quickly evolved from their original purpose into recruitment vehicles that effectively function as booster-funded payrolls.

College football’s biggest donors have orchestrated business ventures distributing five-, six- and seven-figure payments to athletes under the guise of endorsement opportunities and appearance fees. While technically legal within vague NCAA guidelines, these arrangements clearly violate the spirit of what NIL was supposed to be.

Consider the case of quarterback Nico Iamaleava, whose story perfectly illustrates the chaos. After signing with Tennessee on a lucrative NIL deal, he later tried to renegotiate his contract during the 2025 offseason. When Tennessee refused, both because his performance didn’t warrant the increase and because the amount was too high, Iamaleava explored other options. After other schools balked at his demands, he eventually landed at UCLA for significantly less money than he was seeking. Meanwhile, Texas will spend an astounding $40 million on its football roster in 2025-26. But that’s not the issue—why wouldn’t they if they can? The problem is that if another team wants to compete, there’s only one way forward: pay up.

This isn’t about athletes receiving fair compensation for actual marketing value – it’s about wealthy boosters creating slush funds to buy talent. And as long as deals include some nominal “deliverable” from the athlete and are signed after their national letter of intent, there’s little the NCAA can do to stop it. (SportsEpreneur Update as of September 2025: read more about the NIL Clearinghouse and the first NIL deal report.)

NIL Is Bad for Boosting Egos Instead of Programs

A particularly troubling aspect that’s emerged is how NIL has become an ego-driven playground for wealthy boosters. For many donors, it’s no longer about supporting their alma mater—it’s about directly influencing outcomes and claiming credit for wins.

These boosters are essentially treating college teams like fantasy sports with real money. They get a dopamine hit from watching “their” players succeed, knowing their financial contribution made it possible. It’s an addiction—the thrill of buying talent and then basking in reflected glory when that talent performs well.

This creates a dangerous dynamic where the interests of boosters, rather than educational or developmental goals, drive decisions. Coaches find themselves answering not just to athletic directors, but to the whims of deep-pocketed collectives who can control the talent pipeline.

[ed. ...and much more:]

NIL Is Bad for Widening the Gap: Competitive Balance Destroyed

NIL Is Bad for Creating Transfer Portal Chaos: The Free Agency Problem

NIL Is Bad for Athletes Making Short-Term Decisions

NIL Is Bad for the Athlete-Fan Relationship

NIL Is Bad for Corruption and Exploitation: The Dark Side

NIL Is Bad for College Sports’ Identity Crisis

NIL Is Bad for International Student-Athletes

NIL Is Bad, But Reform Is Possible

by SportsEMedia |  Read more:
Image: Tyler Kaufman/Getty
[ed. Money is killing sports (and most everything else), and nobody is even paying lip service to educational opportunities anymore. Should athletes get compensated for their contributions? Sure. But the current system is insane and needs more thought. See also: Limbo Field (HCB); and The college football spending cap is brand new, and here’s how schools already are ignoring it (The Athletic).]

Friday, December 26, 2025

A sewing and tailoring book from Dublin, complete with samples (1833).

Monday, December 22, 2025

Touched for the Very First Time

Deep in California’s East Bay, on a mild fall night, a 32-year-old we’ll call Simon told me that minutes earlier, for the first time in his life, he had felt a woman’s breasts. The two of us were hunched over a firepit on a discreet wooden terrace while he recounted what had happened: The woman, with a charitable smile and some gentle encouragement, had invited his hand to her body. She let him linger there for a spell—sensing her contours, appreciating her shape—before he pulled away. Now Simon was staring into the embers, contemplating these intrepid steps out of the virginity that had shackled him for so long. He seemed in a bit of a daze.

“I haven’t been physically intimate with a woman before,” he said softly. “I tried to do it without causing her any discomfort.”

Simon is tall, broad-shouldered, and reasonably well dressed. On that evening, he wore a wrinkle-free button-down tucked into khakis, and a well-manicured mustache on his upper lip. A lanyard dangled around his neck with an empty space where he should have Sharpied his name. Instead, he’d left it blank. After traveling here from Europe—over an ocean, craggy mountaintops, and quilted farmlands—he was, I got the sense, a little embarrassed. Not everyone travels 5,000 miles to have their first kiss. Simon felt it was his only option.

Looking around at the top-secret compound we were sitting in, it was easy to deduce why he’d come. Everything about the place bore the carnal aura of a Bachelor set: daybeds lingered in darkened nooks and crannies. A clothing-optional hot tub burbled next to a fully stocked bar. Hammocks swayed in the autumn breeze. A fleet of beautiful women patrolled the grounds, demure and kind-eyed, ready to break bread with the men. Unlike most of the women Simon had come across within the checkered complexities of his stillborn sexual development—remote, inaccessible, alien—these women were eager to teach him something. They wanted him to grasp, in excruciating detail, how to turn them on.

Simon had purchased a ticket to Slutcon, the inaugural event of a radical new approach to sex education. In its most basic definition, Slutcon is an exclusive retreat for sexually and romantically inexperienced men to learn about intimacy. The women on site had a plan for them: Over the next three days, they would break these boys out of their inhibiting psychic barriers, rebuild their confidence, and refine the seizing glitches in their courtship techniques. By the end of the weekend, the men would understand how they too could become one with the sluts.

Of the 150 or so attendees of Slutcon, many of them, like Simon, were either virgins or something close to it. Tickets ranged from $1,000 to $9,000, and the retreat was pitched as a place to learn how to interact with women—as instructed by women themselves. Slutcon is staffed almost entirely by paid and volunteer female sex workers and intimacy experts, and together, they had made themselves available to be touched, seduced, or otherwise experimented on by the novices at any moment during the convention.

In the parlance of Slutcon, these professionals are referred to as its “flirt girls” or, more colloquially, its “flirtees.” Wearing plastic green wristbands that designated their consent, they darted between the men, sultry and warm, prepared to host anyone who endeavored an approach. Men brave enough to try would be rewarded with their most coveted desire: a chance to speak with, caress, or, hell, maybe even have sex with someone they were attracted to in a controlled environment, where fears of offense were nullified. After all, Slutcon is what its founders call “a place to experiment without getting canceled.”

Its organizers believe that America needs this sort of experimentation to repair its broken relationship to sex. Young people are hooking up at astonishingly low rates, and the problem is especially acute with young men: In 2013, 9 percent of men between the ages of 22 and 34 reported that they hadn’t had sex in the past year. A decade later, nearly 25 percent of that same demographic is reporting a prolonged period of celibacy. Fifty-seven percent of single adults report not being interested in dating, and nearly half of men between the ages of 18 and 25 have never approached a woman in a flirtatious manner. Experts have attributed the drop-off to a variety of causes: There’s the post-COVID loneliness crisis, men’s increasing aversion to romantic risk and rejection, and the political ideologies that continue to divide the genders. But regardless of the cause, in 2025—an age of both Lysistrata-tinged female separatist movements and the intoxicating misogyny of Andrew Tate—it is fair to wonder if men and women still like each other in the way they once did.

To soothe this discontent, Slutcon’s organizers treat femininity like a fount of knowledge. More controversially, they also argue that most men are good—if a bit misunderstood. The conventions of 2010s liberal feminism have no quarter here. Slutcon was not founded upon the idea that men must be leached of patriarchy to be properly socialized. And if I’m being honest, that position had left me with an icy feeling in my stomach from the moment I arrived. What if an attendee took undue advantage of Slutcon’s leeway? What if they flew over the guardrails and made the women here uncomfortable—or, worse, unsafe?

It’s a dangerous game that Slutcon plays. The organizers entertain the idea that to rehabilitate our decaying norms about intimacy, men need to shake off their fears about sex—with the help of women willing to grant leniency to their erotic forays. Almost a decade removed from #MeToo and the astonishing reckoning it unleashed, it was difficult for me to completely sign off on that. It wasn’t that Slutcon was a reactionary project or was concocting a backward tradwife fantasy. But the event did unambiguously assert that men alone are unable to fix our ailing sexual culture. At Slutcon, masculinity in itself was not toxic. Women too, people here argued, had a hand in this unraveling. And if these men and women could spend a weekend committed to radical empathy between the genders—blurring the line between sex education and sex work—maybe we’d relearn a skill that feels crucial to our survival. As the weekend wore on, I started to see their point.
***
On the first night of Slutcon, Aella—the pseudonymous blogger, escort, and internet eccentric who is one of the event’s primary organizers—took the stage at the main pavilion for something of a keynote address. “We are pro-men here,” she said, outlining what the audience could expect from the days ahead. The attendees were reminded that the “flirtees” had consensually opted in to the weekend’s affairs and all were adept at interfacing with clueless suitors. Aella implored the crowd to release inhibitions, to breathe freely, to dig deep within their souls and excavate their inner vixen. Yes, she reminded the room, the women would maintain their personal boundaries, which were always to be respected. (“Some of you will find out in brutal detail that you are giving a girl the ick,” Aella said.) But also, she said, the men here shouldn’t fear bumping against those boundaries—and ought to receive the feedback that resulted graciously, with an open heart. As she wrapped up her remarks, she left the men with a homework assignment: At some point in the next three days, they should ask a woman if they could touch her boobs.

That message resonated with Ari Zerner, a 28-year-old attendee dressed—somewhat inexplicably—in a purple cape. “There’s this feeling of safety here. I know that even if there’s pushback, there’s not going to be punishment,” he said of the weekend’s social contract. Zerner told me that his top goal for being at Slutcon was to learn how to “escalate” a conversation with a woman into something more flirtatiously charged.

Earlier in the day, organizers had distributed a schedule to all participants detailing the retreat’s panels, presentations, and workshops. Some of them centered on seduction: One lecture focused on how and when someone should lean in for a kiss; another offered advice on optimizing a dating profile. Elsewhere, experts gave insight on the taxonomy of sex toys and the finer points of cunnilingus. There was a rope-play demonstration, a seminar on how to properly receive blow jobs, and an assessment of what it takes to be a tactful orgy participant. (One pointer: Shower before arriving.) Once the evening rolled around, Slutcon’s educational atmosphere would morph into a bubbly social hour, when the skills honed in the workshops could be tested on the flirtees. On Saturday night, everyone would gather for Slutcon After Dark—the weekend’s marquee party, and something of a final exam.

All of this made Slutcon sound a little bit like a pickup-artist boot camp, reminiscent of the greasy symposiums of the mid-2000s. Led by vamping gurus like The Game’s Neil Strauss, these “men’s workshops” had dispensed questionable wisdom to help guys get laid quickly, efficiently, and transactionally. (Sample advice: Be slyly rude toward the women you want to sleep with and isolate them from their friends as quickly as possible.) Yet while Slutcon featured a much softer methodology than the Tao of Mystery’s, and was expressly led by women who gave far better advice, nobody at the event ran away from that comparison. In fact, some of the enlightened organizers here wondered if, given the total backsliding of our sexual norms—and the fanatical inceldom we’re facing now—there was something worth reclaiming about an earlier age when, at the very least, men were enthusiastic about approaching women.

“I’m pro–the idea of pickup artistry, in the sense that it goes against the dominant resentful male ideology where guys feel like they’re doomed in the romantic market because their jaw is angled incorrectly,” said Noelle Perdue, a self-described porn historian and one of Slutcon’s speakers. “The idea that you can do certain things that make you more appealing to women is not only true, but there is an optimism inherent in it that I think we’re missing right now.”

After Aella’s commencement, like a class adjourning for recess, the men were unleashed. The sun had firmly tucked behind the chaparral hills, and all at once, everything was possible—for better or worse.

Nobody quite knew what to do with themselves. Some men clustered together, white-knuckling Pacificos, hoping to get lubricated enough to make conversation with the flirtees from a chaste distance. (Alcohol, throughout the weekend, was strictly rationed for safety reasons.) Others, revved up by Aella’s pep talk, hit on everyone in sight, with blissful ego death, to varying degrees of success: I watched one gentleman, balding and heavyset, tell each and every woman in the building that he found her pretty. The campus was permeated with the energy of a middle school dance, more anxious than anticipatory. But still, I admired the attendees’ gameness. Here was a legion of dudes, all gawky, stiff, and tragically horny—imprisoned by long-ossified social and fashion blunders, who write code for a living—taking a leap of faith. At last, they were putting real intention behind the hunger that had burned in them for ages. Slutcon had implored them to flirt their way out of the mess they had found themselves in, and they were willing to give it a try.

The women, meanwhile, were already hard at work. Many of them were coiled on patio furniture, maintaining disciplined eye contact with whatever attendee was currently talking to them. Some of them offered feedback on the men’s techniques, and more often than not, the counseling was astoundingly rudimentary: “It’s like, ‘You are a full foot taller than me and you’re kind of looming over me, so maybe don’t loom’ or ‘You’re not smiling, you’re not really having a playful time’ or ‘You’re getting touchy-feely too fast,’ ” said one of the flirtees, perched on a picnic table in a skirt and crop top, chronicling her interactions thus far. “It didn’t feel like teaching so much as both of us exploring the space together.”

Another flirtee, a striking 27-year-old with jet-black hair named Paola Baca, felt the same way. She had taken it upon herself to slowly disarm the layers of neuroticism that might have previously prevented some of these dudes from engaging with her back in reality. And in that sense, Baca felt that she offered a form of exposure therapy. “A lot of young men don’t think women are humans,” she said. “Not as less-than-humans, but more-than-humans. Attractive women are basically gods to them. I want to show them that we are humans too.” (In her civilian life, Baca studied evolutionary psychology at the University of Texas at Austin.) (...)
***
The boys at Slutcon, it seemed, were at least trying to unwind the multitude of traumas that had brought on their sexual maladjustment. But I remained curious about how all of this was going to turn them into better flirts. The following morning, I filed into a seminar led by Tom, the pseudonymous partner of one of the organizers and one of the few men on staff. He had convened a last-minute flirting training session after witnessing some subpar attempted courtships the night before. “I was like, Oh, gosh, a lot of this is not up to my quality standards,” he told me. “I had the itch to step in and help.”

So, in a makeshift ballroom filled to the brim with contemplative men—many dutifully scratching down notes with ballpoint pen, eager to learn from the previous evening’s mistakes—Tom tried to adjust course. Spectators were summoned to the stage, one by one, and each of them was thrust into a simulated date with Jean Blue, a sex worker with a flop of auburn hair who had gamely volunteered to serve as a surrogate.

The problems were immediately apparent. The thrills of good flirting can be felt rather than thought—and that is a difficult principle to distill through language. How can anyone articulate the electricity of a good date, especially for those who may have never touched it before? “I basically stopped people when they made me flinch,” said Tom afterward. “And then I tried to name the flinch.”

There was, indeed, a lot of flinching. Some denizens of Slutcon offered Jean canned, dead-on-arrival opening statements (“What Harry Potter character are you like?”). Others attempted to ratchet up the intrigue in hopeless ways (“What’s your sexiest tattoo?”)...

“I was interested in being a part of a convention that was taught by women who are sexually successful and sexually open,” Jean said. “I have a mindset that isn’t You guys suck, and here are all of these ways you’re being weird. Instead, it’s like, I want to help you. I want so badly for you to hit on me better.”

by Luke Winkie, Slate |  Read more:
Image: Hua Ye

Sunday, December 21, 2025

What’s Not to Like?

Similes! I have hundreds of them on three-by-five notecards, highbrow and lowbrow, copied from newspapers, comic strips, sonnets, billboards, and fortune cookies. My desk overflows with them. They run down to the floor, trail across the room into the hallway. I have similes the way other houses have ants.

Why? To start, for the sheer laugh-out-loud pleasure of them. “His smile was as stiff as a frozen fish,” writes Raymond Chandler. “He vanished abruptly, like an eel going into the mud,” writes P. G. Wodehouse, the undoubted master of the form. Or Kingsley Amis’s probably first-hand description of a hangover: “He lay sprawled, too wicked to move, spewed up like a broken spider-crab on the tarry shingle of the morning.”

From time to time, I’ve tried to organize my collection, though mostly the task is, as the cliché nicely puts it, like herding cats. Still, a few categories come to mind. The Really Bad Simile, for instance. Examples of this pop up like blisters in contemporary “literary” fiction. Here is a woman eating a crème brûlée: “She crashed the spoon through the sugar like a boy falling through ice on a lake.” (Authors’ names omitted, per the Mercy Rule.) Or: “A slick of beer shaped like the Baltic Sea spilled on the table.” Sometimes they follow a verb like tin cans on a string: “The restraining pins tinkled to the floor like metal rain, hunks of hair tumbling across her face in feral waves.” Or sometimes they just make the page itself cringe and curl up at the corners: “Charlie’s heart rippled like a cloth spread across a wide table.”

Writing about sex can drive a writer to similes of unparalleled badness. Someone has borrowed my copy of Lady Chatterley’s Lover, but these more recent examples might do, from The Literary Review’s “Bad Sex in Fiction Award”: “Katsuro’s penis and testicles became one single mound that rolled around beneath the grip of her hand. Miyuki felt as though she was manipulating a small monkey that was curling up its paws.” Or this loving, if somewhat chiropractic moment: “her long neck, her swan’s neck … coiling like a serpent, like a serpent, coiling down on him.” Or finally (my eyes are closed as I type): “Her vaginal ratchet moved in concertina-like waves, slowly chugging my organ as a boa constrictor swallows its prey.” (...)

Donne’s simile belongs to another category as well, the epic or Homeric simile. Every reader of the Iliad knows something like this picture of an attacking army as a wildfire:

“As when the obliterating fire comes down on the timbered forest / and the roll of the wind carries it everywhere,” and so the Achaean host drives ahead for another five lines. Modern prose writers can also unscroll a simile at surprising length. John Updike dives right in: “The sea, slightly distended by my higher perspective, seems a misty old gentleman stretched at his ease in an immense armchair which has for arms the arms of this bay and for an antimacassar the freshly laundered sky. Sailboats float on his surface like idle and unrelated benevolent thoughts.” And one would not like to have been the beefy Duke of Bedford when Edmund Burke imagined how revolutionary mobs might regard him: “Like the print of the poor ox that we see in the shop windows at Charing Cross, alive as he is, and thinking no harm in the world, he is divided into rumps, and sirloins, and briskets, and into all sorts of pieces for roasting, boiling, and stewing.”

It takes a dramatic mind to carry a comparison through so logically and so far. The Homeric simile evokes a world far larger than a single flash of thought, however clever. Its length creates a scene in our minds, even a drama where contraries come alive: an army driving into battle, an ocean tamed into a harmless old gent, a bloody clash in the streets between aristocrats and rebels.

“Perceptive of resemblances,” writes Aristotle, is what the maker of similes must be. There is one more step. The maker of similes, long or short, must perceive resemblances and then, above all, obey the first, and maybe only, commandment for a writer: to make you see. Consider Wodehouse’s “He found Lord Emsworth, as usual, draped like a wet sock over the rail of the Empress’s G.H.Q.,” or Patricia Cornwell’s “My thoughts scattered like marbles.”

The dictionary definition of metaphor is simply an implied comparison, a comparison without the key words like or as. The most common schoolbook example is, “She has a heart of gold,” followed by, “The world is a stage.” Latching onto the verb is, the popular website Grammarly explains, “A metaphor states that one thing is another thing.”

Close, but not enough. There is great wisdom in the roots of our language, in the origin of words. Deep down, in its first Greek form, metaphor combines meta (over, across) and pherein (to carry), and thus the full word means to carry over, to transfer, to change or alter. A metaphor does more than state an identity. In our imagination, before our eyes, metaphor changes one thing into another: “I should have been a pair of ragged claws / Scuttling across the floors of silent seas.” Eliot’s metaphor is a metamorphosis. Magically, we see Prufrock the man metamorphosed into a creature with ragged claws, like a hapless minor god in Ovid.

Too much? Consider, then, what the presence of like or as does in a simile. It announces, self-consciously, that something good is coming. The simile is a rhetorical magic trick, like a pun pulled out of a hat. A metaphor, however, feels not clever but true. Take away the announcement of like, and we read and write on a much less sophisticated level, on a level that has been called primitive, because it recalls the staggering ancient power of words as curses, as spells to transform someone into a frog, a stag, a satanic serpent.

A better term might be childlike. Psychologists know that very young children understand the metamorphosing power of words. To a child of three or four, writes Howard Gardner, the properties of a new word “may be inextricably fused with the new object: at such a time the pencil may become a rocket ship.” Older children and adults know that this isn’t so. But for most of us, and certainly for most writers I know, the childhood core of magical language play is not lost. It exists at the center and is only surrounded by adult awareness, as the rings encircle the heart of the tree.

Still too much? Here is Updike, making me gasp: “But it is just two lovers, holding hands and in a hurry to reach their car, their locked hands a starfish leaping through the dark.” No labored comparison, no signal not to take it literally. Like the pencil and rocket, their hands have become a starfish. Or Shakespeare, metamorphosing himself into an autumnal tree and then an ancient abbey: “That time of year thou may’st in me behold, / When yellow leaves, or none, or few do hang / Upon those boughs which shake against the cold, / Bare ruin’d choirs where late the sweet birds sang.” Pure magic.

Yet why be a purist? At the high point of language, James Joyce blends simile, metaphor, and extended simile into one beautiful and unearthly scene, an image created by a sorcerer.

A girl stood before him in midstream, alone and still, gazing out to sea. She seemed like one whom magic had changed into the likeness of a strange and beautiful seabird. Her long slender bare legs were delicate as a crane’s. … Her thighs, fuller and soft-hued as ivory, were bared almost to the hips, where the white fringes of her drawers were like feathering of soft white down. Her slate-blue skirts were kilted boldly about her waist and dovetailed behind her. Her bosom was as a bird’s, soft and slight, slight and soft as the breast of some dark-plumaged dove. But her long fair hair was girlish: and girlish, and touched with the wonder of mortal beauty, her face.

The passage is like a palimpsest. A reader can see through the surface of the language. A reader can penetrate to the traces of the real person still visible beneath the living words that are, as they move down the page, quietly transforming her. It is as if we are looking through the transparent chrysalis to the caterpillar growing inside, watching its slow and perfect metamorphosis into the butterfly. Too much? No.

by Max Byrd, American Scholar |  Read more:
Image: locket479/Flickr

Thursday, December 18, 2025

Finding Peter Putnam

The forgotten janitor who discovered the logic of the mind

The neighborhood was quiet. There was a chill in the air. The scent of Spanish moss drifted from the cypress trees. Plumes of white smoke rose from the burning cane fields and stretched across the skies of Terrebonne Parish. The man swung a long leg over a bicycle frame and pedaled off down the street.

It was 1987 in Houma, Louisiana, and he was headed to the Department of Transportation, where he was working the night shift, sweeping floors and cleaning toilets. He was just picking up speed when a car came barreling toward him with a drunken swerve.

A screech shot down the corridor of East Main Street, echoed through the vacant lots, and rang out over the Bayou.

Then silence.
 
The 60-year-old man lying on the street, as far as anyone knew, was just a janitor hit by a drunk driver. There was no mention of it on the local news, no obituary in the morning paper. His name might have been Anonymous. But it wasn’t.

His name was Peter Putnam. He was a physicist who’d hung out with Albert Einstein, John Archibald Wheeler, and Niels Bohr, and two blocks from the crash, in his run-down apartment, where his partner, Claude, was startled by a screech, were thousands of typed pages containing a groundbreaking new theory of the mind.

“Only two or three times in my life have I met thinkers with insights so far reaching, a breadth of vision so great, and a mind so keen as Putnam’s,” Wheeler said in 1991. And Wheeler, who coined the terms “black hole” and “wormhole,” had worked alongside some of the greatest minds in science.

Robert Works Fuller, a physicist and former president of Oberlin College, who worked closely with Putnam in the 1960s, told me in 2012, “Putnam really should be regarded as one of the great philosophers of the 20th century. Yet he’s completely unknown.”

That word—unknown—it came to haunt me as I spent the next 12 years trying to find out why.

The American Philosophical Society Library in Philadelphia, with its marbled floors and chandeliered ceilings, is home to millions of rare books and manuscripts, including John Wheeler’s notebooks. I was there in 2012, fresh off writing a physics book that had left me with nagging questions about the strange relationship between observer and observed. Physics seemed to suggest that observers play some role in the nature of reality, yet who or what an observer is remained a stubborn mystery.

Wheeler, who made key contributions to nuclear physics, general relativity, and quantum gravity, had thought more about the observer’s role in the universe than anyone—if there was a clue to that mystery anywhere, I was convinced it was somewhere in his papers. That’s when I turned over a mylar overhead, the kind people used to lay on projectors, with the titles of two talks, as if given back-to-back at the same unnamed event:

Wheeler: From Reality to Consciousness

Putnam: From Consciousness to Reality

Putnam, it seemed, had been one of Wheeler’s students, whose opinion Wheeler held in exceptionally high regard. That was odd, because Wheeler’s students were known for becoming physics superstars, earning fame, prestige, and, in some cases, Nobel Prizes: Richard Feynman, Hugh Everett, and Kip Thorne.

Back home, a Google search yielded images of a very muscly, very orange man wearing a very small speedo. This, it turned out, was the wrong Peter Putnam. Eventually, I stumbled on a 1991 article in the Princeton Alumni Weekly newsletter called “Brilliant Enigma.” “Except for the barest outline,” the article read, “Putnam’s life is ‘veiled,’ in the words of Putnam’s lifelong friend and mentor, John Archibald Wheeler.”

A quick search of old newspaper archives turned up an intriguing article from the Associated Press, published six years after Putnam’s death. “Peter Putnam lived in a remote bayou town in Louisiana, worked as a night watchman on a swing bridge [and] wrote philosophical essays,” the article said. “He also tripled the family fortune to about $40 million by investing successfully in risky stock ventures.”

The questions kept piling up. Forty million dollars?

I searched a while longer for more information but came up empty-handed. Still, I couldn’t forget about Peter Putnam. His name played like a song stuck in my head. I decided to track down anyone who might have known him.

The only paper Putnam ever published was co-authored with Robert Fuller, so I flew from my home in Cambridge, Massachusetts, to Berkeley, California, to meet him. Fuller was nearing 80 years old but had an imposing presence and a booming voice. He sat across from me in his sun-drenched living room, seeming thrilled to talk about Putnam yet plagued by some palpable regret.

Putnam had developed a theory of the brain that “ranged over the whole of philosophy, from ethics to methodology to mathematical foundations to metaphysics,” Fuller told me. He compared Putnam’s work to Alan Turing’s and Kurt Gödel’s. “Turing, Gödel, and Putnam—they’re three peas in a pod,” Fuller said. “But one of them isn’t recognized.” (...)

Phillips Jones, a physicist who worked alongside Putnam in the early 1960s, told me over the phone, “We got the sense that what Einstein’s general theory was for physics, Peter’s model would be for the mind.”

Even Einstein himself was impressed with Putnam. At 19 years old, Putnam went to Einstein’s house to talk with him about Arthur Stanley Eddington, the British astrophysicist. (Eddington performed the key experiment that confirmed Einstein’s theory of gravity.) Putnam was obsessed with an allegory by Eddington about a fisherman and wanted to ask Einstein about it. Putnam also wanted Einstein to give a speech promoting world government to a political group he’d organized. Einstein—who was asked by plenty of people to do plenty of things—thought highly enough of Putnam to agree.

How could this genius, this Einstein of the mind, just vanish into obscurity? When I asked why, if Putnam was so important, no one had ever heard of him, everyone gave me the same answer: because he didn’t publish his work, and even if he had, no one would have understood it.

“He spoke and wrote in ‘Putnamese,’ ” Fuller said. “If you can find his papers, I think you’ll immediately see what I mean.” (...)

Skimming through the papers I saw that the people I’d spoken to hadn’t been kidding about the Putnamese. “To bring the felt under mathematical categories involves building a type of mathematical framework within which latent colliding heuristics can be exhibited as of a common goal function,” I read, before dropping the paper with a sigh. Each one went on like that for hundreds of pages at a time, on none of which did he apparently bother to stop and explain what the whole thing was really about...

Putnam spent most of his time alone, Fuller had told me. “Because of this isolation, he developed a way of expressing himself in which he uses words, phrases, concepts, in weird ways, peculiar to himself. The thing would be totally incomprehensible to anyone.” (...)


Imagine a fisherman who’s exploring the life of the ocean. He casts his net into the water, scoops up a bunch of fish, inspects his catch and shouts, “A-ha! I have made two great scientific discoveries. First, there are no fish smaller than two inches. Second, all fish have gills.”

The fisherman’s first “discovery” is clearly an error. It’s not that there are no fish smaller than two inches, it’s that the holes in his net are two inches in diameter. But the second discovery seems to be genuine—a fact about the fish, not the net.

This was the Eddington allegory that obsessed Putnam.

When physicists study the world, how can they tell which of their findings are features of the world and which are features of their net? How do we, as observers, disentangle the subjective aspects of our minds from the objective facts of the universe? Eddington suspected that one couldn’t know anything about the fish until one knew the structure of the net.

That’s what Putnam set out to do: come up with a description of the net, a model of “the structure of thought,” as he put it in a 1948 diary entry.

At the time, scientists were abuzz with a new way of thinking about thinking. Alan Turing had worked out an abstract model of computation, which quickly led not only to the invention of physical computers but also to the idea that perhaps the brain, too, was a kind of Turing machine.

Putnam disagreed. “Man is a species of computer of fundamentally different genus than those she builds,” he wrote. It was a radical claim (not only for the mixed genders): He wasn’t saying that the mind isn’t a computer, he was saying it was an entirely different kind of computer.

A universal Turing machine is a powerful thing, capable of computing anything that can be computed by an algorithm. But Putnam saw that it had its limitations. A Turing machine, by design, performs deductive logic—logic where the answers to a problem are contained in its premises, where the rules of inference are pregiven, and information is never created, only shuffled around. Induction, on the other hand, is the process by which we come up with the premises and rules in the first place. “Could there be some indirect way to model or orient the induction process, as we do deductions?” Putnam asked.

Putnam laid out the dynamics of what he called a universal “general purpose heuristic”—which we might call an “induction machine,” or more to the point, a mind—borrowing from the mathematics of game theory, which was thick in the air at Princeton. His induction “game” was simple enough. He imagined a system (immersed in an environment) that could make one mutually exclusive “move” at a time. The system is composed of a massive number of units, each of which can switch between two states. They all act in parallel, switching, say, “on” and “off” in response to one another. Putnam imagined that these binary units could condition one another’s behavior, so if one caused another to turn on (or off) in the past, it would become more likely to do so in the future. To play the game, the rule is this: the first chain of binary units (linked together by conditioned reflexes) to form a self-reinforcing loop emits a move on behalf of the system.

Every game needs a goal. In a Turing machine, goals are imposed from the outside. For true induction, the process itself should create its own goals. And there was a key constraint: Putnam realized that the dynamics he had in mind would only work mathematically if the system had just one goal governing all its behavior.

That’s when it hit him: The goal is to repeat. Repetition isn’t a goal that has to be programmed in from the outside; it’s baked into the very nature of things—to exist from one moment to the next is to repeat your existence. “This goal function,” Putnam wrote, “appears pre-encoded in the nature of being itself.”

So, here’s the game. The system starts out in a random mix of “on” and “off” states. Its goal is to repeat that state—to stay the same. But in each turn, a perturbation from the environment moves through the system, flipping states, and the system has to emit the right sequence of moves (by forming the right self-reinforcing loops) to alter the environment in such a way that it will perturb the system back to its original state.

Putnam’s remarkable claim was that simply by playing this game, the system will learn; its sequences of moves will become increasingly less random. It will create rules for how to behave in a given situation, then automatically root out logical contradictions among those rules, resolving them into better ones. And here’s the weird thing: It’s a game that can never be won. The system never exactly repeats. But in trying to, it does something better. It adapts. It innovates. It performs induction.
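
[ed. For the computationally inclined, here is a minimal toy sketch of that repetition game as I read the description above. It is an assumption-laden illustration, not a transcription of Putnam's unpublished model: the unit count, the one-flip perturbations, the move set, and the reinforcement-style update are all placeholders of my own.]

import random

# A toy reading of the "repetition game" described above (an assumption-laden
# sketch, not Putnam's own formalism): a system of binary units whose only goal
# is to return to its prior state after the environment disturbs it.

N_UNITS = 8                    # binary units making up the system's state
MOVES = list(range(N_UNITS))   # hypothetical move i flips unit i (a crude "motor act")

def perturb(goal, rng):
    """The environment flips one randomly chosen unit of the goal state."""
    i = rng.randrange(N_UNITS)
    state = list(goal)
    state[i] = 1 - state[i]
    return state, i

def run(trials=2000, seed=0):
    rng = random.Random(seed)
    goal = [rng.randint(0, 1) for _ in range(N_UNITS)]   # the state to be repeated
    # "Conditioned reflexes": how strongly each disturbance is wired to each move.
    # Flat weights mean behavior starts out as random thrashing.
    weights = [[1.0] * len(MOVES) for _ in range(N_UNITS)]
    first_half = second_half = 0

    for t in range(trials):
        state, which = perturb(goal, rng)
        # The move whose "loop" wins the race: sampled in proportion to how
        # strongly it has been conditioned to this particular disturbance.
        total = sum(weights[which])
        r, move = rng.uniform(0, total), 0
        for m, w in enumerate(weights[which]):
            r -= w
            if r <= 0:
                move = m
                break
        state[move] = 1 - state[move]      # emit the move
        if state == goal:                  # success: the prior state repeats
            weights[which][move] += 1.0    # wire the successful reflex in
            if t < trials // 2:
                first_half += 1
            else:
                second_half += 1

    print("goal state restored, first half of trials: ", first_half)
    print("goal state restored, second half of trials:", second_half)

if __name__ == "__main__":
    run()

[ed. Run it and the second half of the trials restores the goal state far more often than the first; the random thrashing gives way to conditioned moves, which is the learning the article describes.]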

In paper after paper, Putnam attempted to show how his induction game plays out in the human brain, with motor behaviors serving as the mutually exclusive “moves” and neurons as the parallel binary units that link up into loops to move the body. The point wasn’t to give a realistic picture of how a messy, anatomical brain works any more than an abstract Turing machine describes the workings of an iMac. It was not a biochemical description, but a logical one—a “brain calculus,” Putnam called it.

As the game is played, perturbations from outside—photons hitting the retina, hunger signals rising from the gut—require the brain to emit the right sequence of movements to return to its prior state. At first it has no idea what to do—each disturbance is a neural impulse moving through the brain in search of a pathway out, and it will take the first loop it can find. That’s why a newborn’s movements start out as random thrashes. But when those movements don’t satisfy the goal, the disturbance builds and spreads through the brain, feeling for new pathways, trying loop after loop, thrash after thrash, until it hits on one that does the trick.

When a successful move, discovered by sheer accident, quiets a perturbation, it gets wired into the brain as a behavioral rule. Once formed, applying the rule is a matter of deduction: The brain outputs the right move without having to try all the wrong ones first.

But the real magic happens when a contradiction arises, when two previously successful rules, called up in parallel, compete to move the body in mutually exclusive ways. A hungry baby, needing to find its mother’s breast, simultaneously fires up two loops, conditioned in from its history: “when hungry, turn to the left” and “when hungry, turn to the right.” Deductive logic grinds to a halt; the facilitation of either loop, neurally speaking, inhibits the other. Their horns lock. The neural activity has no viable pathway out. The brain can’t follow through with a wired-in plan—it has to create a new one.

How? By bringing in new variables that reshape the original loops into a new pathway, one that doesn’t negate either of the original rules, but clarifies which to use when. As the baby grows hungrier, activity spreads through the brain, searching its history for anything that can break the tie. If it can’t find it in the brain, it will automatically search the environment, thrash by thrash. The mathematics of game theory, Putnam said, guarantee that, since the original rules were in service of one and the same goal, an answer, logically speaking, can always be found.

In this case, the baby’s brain finds a key variable: When “turn left” worked, the neural signal created by the warmth of the mother’s breast against the baby’s left cheek got wired in with the behavior. When “turn right” worked, the right cheek was warm. That extra bit of sensory signal is enough to tip the scales. The brain has forged a new loop, a more general rule: “When hungry, turn in the direction of the warmer cheek.”
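
[ed. And here, again only as a hedged illustration in the same toy spirit, is the tie-breaking step just described; the cue names, the rule format, and the subsumption move are my own, not Putnam's notation.]

# Two reflexes wired in by past successes deadlock on the same cue; an auxiliary
# variable recruited from their histories turns them into one more general rule.
# (Illustrative assumption, not Putnam's notation.)

def respond(cues, rules):
    """Return every move whose conditions are all present among the current cues."""
    return [move for condition, move in rules if condition <= cues]

# Both rules are conditioned on "hungry" alone.
rules = [
    ({"hungry"}, "turn_left"),
    ({"hungry"}, "turn_right"),
]

cues = {"hungry", "warm_left_cheek"}
print("before resolution:", respond(cues, rules))   # both moves fire: deadlock

# Fold the distinguishing variable into more specific rules. Neither old rule is
# negated; each is clarified about when it applies.
rules = [
    ({"hungry", "warm_left_cheek"}, "turn_left"),
    ({"hungry", "warm_right_cheek"}, "turn_right"),
]
print("after resolution: ", respond(cues, rules))    # only "turn_left" fires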

New universals lead to new motor sequences, which allow new interactions with the world, which dredge up new contradictions, which force new resolutions, and so on up the ladder of ever-more intelligent behavior. “This constitutes a theory of the induction process,” Putnam wrote.

In notebooks, in secret, using language only he would understand, Putnam mapped out the dynamics of a system that could perceive, learn, think, and create ideas through induction—a computer that could program itself, then find contradictions among its programs and wrangle them into better programs, building itself out of its history of interactions with the world. Just as Turing had worked out an abstract, universal model of the very possibility of computation, Putnam worked out an abstract, universal model of the very possibility of mind. It was a model, he wrote, that “presents a basic overall pattern [or] character of thought in causal terms for the first time.”

Putnam had said you can’t understand another person until you know what fight they’re in, what contradiction they’re working through. I saw before me two stories, equally true: Putnam was a genius who worked out a new logic of the mind. And Putnam was a janitor who died unknown. The only way to resolve a contradiction, he said, is to find the auxiliary variables that forge a pathway to a larger story, one that includes and clarifies both truths. The variables for this contradiction? Putnam’s mother and money.

by Amanda Gefter, Nautilus |  Read more:
Image: John Archibald Wheeler, courtesy of Alison Lahnston.
[ed. Fascinating. Sounds like part quantum physics and part AI. But it's beyond me.]