
Thursday, January 1, 2026

Leonardo Described Wood Charring a Century Before Yakisugi Was Codified

Yakisugi is a Japanese architectural technique for charring the surface of wood. It has become quite popular in bioarchitecture because the carbonized layer protects the wood from water, fire, insects, and fungi, thereby prolonging its lifespan. Yakisugi techniques were first codified in written form in the 17th and 18th centuries. But it seems Italian Renaissance polymath Leonardo da Vinci wrote about the protective benefits of charring wood surfaces more than 100 years earlier, according to a paper published in Zenodo, an open repository for EU-funded research.

Check the notes

As previously reported, Leonardo produced more than 13,000 pages in his notebooks (later gathered into codices), less than a third of which have survived. The notebooks contain all manner of inventions that foreshadow future technologies: flying machines, bicycles, cranes, missiles, machine guns, an “unsinkable” double-hulled ship, dredges for clearing harbors and canals, and floating footwear akin to snowshoes to enable a person to walk on water. Leonardo foresaw the possibility of constructing a telescope in his Codex Atlanticus (1490)—he wrote of “making glasses to see the moon enlarged” a century before the instrument’s invention.

In 2003, Alessandro Vezzosi, director of Italy’s Museo Ideale, came across some recipes for mysterious mixtures while flipping through Leonardo’s notes. Vezzosi experimented with the recipes, resulting in a mixture that would harden into a material eerily akin to Bakelite, a synthetic plastic widely used in the early 1900s. So Leonardo may well have invented the first manmade plastic.

The notebooks also contain Leonardo’s detailed notes on his extensive anatomical studies. Most notably, his drawings and descriptions of the human heart captured how heart valves can control blood flow 150 years before William Harvey worked out the basics of the human circulatory system. (In 2005, a British heart surgeon named Francis Wells pioneered a new procedure to repair damaged hearts based on Leonardo’s heart valve sketches and subsequently wrote the book The Heart of Leonardo.)

In 2023, Caltech researchers made another discovery: lurking in the margins of Leonardo’s Codex Arundel were several small sketches of triangles, their geometry seemingly determined by grains of sand poured out from a jar. The little triangles were his attempt to draw a link between gravity and acceleration—well before Isaac Newton came up with his laws of motion. By modern calculations, Leonardo’s model produced a value for the gravitational constant (G) to around 97 percent accuracy. And Leonardo did all this without a means of accurate timekeeping and without the benefit of calculus. The Caltech team was even able to re-create a modern version of the experiment.

“Burnt Japanese cedar”


Annalisa Di Maria, a Leonardo expert with the UNESCO Club of Florence, collaborated with molecular biologist and sculptor Andrea da Montefeltro and art historian Lucica Bianchi on this latest study, which concerns the Codex Madrid II. They had noticed one nearly imperceptible phrase in particular on folio 87r concerning wood preservation: “They will be better preserved if stripped of bark and burned on the surface than in any other way,” Leonardo wrote.

“This is not folklore,” the authors noted. “It is a technical intuition that precedes cultural codification.” Leonardo was interested in the structural properties of materials like wood, stone, and metal, as both an artist and an engineer, and would have noticed from firsthand experience that raw wood with its bark intact retained moisture and decayed more quickly. Furthermore, Leonardo’s observation coincides with what the authors describe as a “crucial moment for European material culture,” when “woodworking was receiving renewed attention in artistic workshops and civil engineering studies.”

Leonardo did not confine his woody observations to just that one line. The Codex includes discussions of how different species of wood conferred different useful properties: oak and chestnut for strength, ash and linden for flexibility, and alder and willow for underwater construction. Leonardo also noted that chestnut and beech were ideal as structural reinforcements, while maple and linden worked well for constructing musical instruments given their good acoustic properties. He even noted a natural method for seasoning logs: leaving them “above the roots” for better sap drainage.

The Codex Madrid II dates to 1503-1505, over a century before the earliest known written codifications of yakisugi, though the method was probably practiced in Japan before it was written down. Per Di Maria et al., there is no evidence of any direct contact between Renaissance European culture and Japanese architectural practices, so this seems to be a case of “convergent invention.”

The benefits of this method of wood preservation have since been well documented by science, although the effectiveness is dependent on a variety of factors, including wood species and environmental conditions. The fire’s heat seals the pores of the wood so it absorbs less water—a natural means of waterproofing. The charred surface serves as natural insulation for fire resistance. And stripping the bark removes nutrients that attract insects and fungi, a natural form of biological protection.

by Jennifer Ouellette, Ars Technica |  Read more:
Images: A. Di Maria et al., 2025; Unimoi/CC BY-SA 4.0; and Lorna Satchell/CC BY 4.0

Wednesday, December 31, 2025

Tom Petty and the Heartbreakers


The lake was Lake Alice, on the campus of the University of Florida, in Gainesville. My parents moved there for work at the university in 1970, just before I was born, and we stayed until I was eight years old, living in a ranch house with a carport, a big backyard, and bright pink azalea bushes springing up in front of my bedroom window.

I’ve been thinking about those years a lot lately thanks to my discovery of Tom Petty’s “Gainesville.” The song was recorded in 1998 but not released until 2018, one year after Petty’s death from a drug overdose at age 66. Petty was born in Gainesville in 1950, twenty years and one day before I was, and lived there until 1974, when he left for Los Angeles with his first band, Mudcrutch. The song’s music video is full of shots of parts of the city he was known to have frequented. There are one-story ranch houses like the one I grew up in; red-brick university buildings; Ben Hill Griffin Stadium (“the Swamp”), where the Gators play; trees decorated with Spanish moss. And there’s Lake Alice and its alligators. As I watched the video, childhood memories surged from the back of my brain to the front, and I felt a sadness for my old town I hadn’t felt in years. Gainesville was a big town, Petty sings. It wasn’t really, but for a while it was the only one we both knew.

The video also has a shot of the mailbox at one of Petty’s childhood homes. It shows the address: 1715 NW 6th Terrace. I grew up on 16th Terrace, a 38-minute walk away (according to Google Maps). In 2019, after the video came out, someone stole the mailbox. (...)

Petty and I overlapped in Gainesville for just four years and obviously led very different lives. (I wasn’t playing in Mudcrutch; I was going to pre-kindergarten.) But it turns out we both transgressed at Lake Alice. Watching the “Gainesville” video sent me down a rabbit hole of research into Petty’s early life; I savored the chance to connect with my own story through his. I found a Gainesville Sun article about how, in 1966, when Petty was 16 and had just earned his driver’s license, he accidentally drove his mother’s old Chevy Impala into the lake. He was supposed to be at a dance, and his mom had to come pick him up in their other family car. (...)

Reading that Gainesville Sun article, I found myself wondering about Tom Petty’s mom. What was she thinking as she drove her son home from Lake Alice that night, unaware of the fame that would find him just a few years later? Did she try to teach him some kind of lesson? Or was she thinking, instead, of her own transgressions, perhaps invisible to her son? Did he—sitting, embarrassed in the passenger seat—still believe she was larger than life? Or was he already past that?

You’re all right anywhere you land, he would write 22 years later. You’re okay anywhere you fall. For both of us, that was Gainesville, for a while. And then Gainesville shrank, becoming something else: somewhere we used to live, somewhere we no longer know, somewhere we were all so young. Long ago and far away, another time, another day.  ~ Tracks on Tracks

[ed. Thought I'd heard most TP songs, but not this one.]

Tuesday, December 30, 2025

Tatiana Schlossberg Dies at 35

Tatiana Schlossberg, an environmental journalist and a daughter of Caroline Kennedy — and granddaughter of President John F. Kennedy — whose harrowing essay about her rare and aggressive blood cancer, published in The New Yorker magazine in November, drew worldwide sympathy and praise for Ms. Schlossberg’s courage and raw honesty, died on Tuesday. She was 35.

Her death was announced in an Instagram post by the John F. Kennedy Library Foundation, signed by her family. It did not say where she died.

Titled “A Battle With My Blood,” the essay appeared online on Nov. 22, the 62nd anniversary of her grandfather’s assassination. (It appeared in print in the Dec. 8 issue of the magazine with a different headline, “A Further Shore.”) In it, Ms. Schlossberg wrote of how she learned of her cancer after the birth of her daughter in May 2024. There was something off about her blood count, her doctor noticed, telling her, “It could just be something related to pregnancy and delivery, or it could be leukemia.”

It was leukemia, with a rare mutation. Ms. Schlossberg had a new baby, and a 2-year-old son.

“I did not — could not — believe that they were talking about me,” she wrote. “I had swum a mile in the pool the day before, nine months pregnant. I wasn’t sick. I didn’t feel sick. I was actually one of the healthiest people I knew. I regularly ran five to ten miles in Central Park. I once swam three miles across the Hudson River — eerily, to raise money for the Leukemia and Lymphoma Society.”

She added, “This could not possibly be my life.”

She wrote of months of chemotherapy and a postpartum hemorrhage, from which she almost bled to death, followed by more chemo and then a stem cell transplant — a Hail Mary pass that might cure her. Her older sister, Rose Schlossberg, was a match and would donate her cells. Her brother, Jack Schlossberg, now running for Congress in New York’s 12th district, was a half-match; nonetheless he pressed the doctors, asking if a half-match might be good enough. Could he donate, too? (He could not.)

After the transplant, when Ms. Schlossberg’s hair fell out, Jack shaved his head in solidarity. She wore scarves to cover her bare scalp; when her son came to visit her in the hospital, he did, too.

She was never able to fully care for her daughter — to feed, diaper or bathe her — because of the risk of infection, and her treatments had kept her away from home for nearly half of her daughter’s first year of life.

“I don’t know who, really, she thinks I am,” Ms. Schlossberg wrote, “and whether she will feel or remember, when I am gone, that I am her mother.”

She went into remission, had more chemo, relapsed and joined a clinical trial. There were blood transfusions, another stem cell transplant, from an unrelated donor, more chemo, more setbacks. She went into remission again, relapsed, joined another clinical trial and contracted a form of the Epstein-Barr virus. The donated cells attacked her own, a condition called graft-versus-host disease. When she came home after a stint in the hospital in October, she was too weak to pick up her children.

Her oncologist told her that he thought he could, maybe, keep her alive for another year.

“For my whole life, I have tried to be good,” she wrote, “to be a good student and a good sister and a good daughter, and to protect my mother and never make her upset or angry. Now I have added a new tragedy to her life, to our family’s life, and there’s nothing I can do to stop it.”

Tragedy, of course, has trailed the Kennedy family for decades. Caroline Kennedy, a former ambassador to Australia and Japan, was just 5 when her father was assassinated on Nov. 22, 1963; she was 10 when her uncle Robert F. Kennedy, a presidential candidate in the Democratic primary of 1968, was murdered. Her brother, John F. Kennedy Jr., died in 1999, when the plane he was piloting crashed off Martha’s Vineyard, killing him, his wife, Carolyn Bessette Kennedy, and her sister, Lauren Bessette. He was 38 years old, and Tatiana had been a flower girl at his wedding three years earlier.

Having grown up in the glare of her parents’ glamour, and her family’s tragedies, Ms. Kennedy largely succeeded in giving her own children a life out of the spotlight — a relatively normal, if privileged, upbringing, along with a call to public service that was the Kennedy legacy.

by Penelope Green, NY Times |  Read more:
Image: Sonia Moskowitz/Globe Photos/ZUMA
[ed. A strong, intelligent woman. And another Kennedy tragedy. See also: A Battle With My Blood (New Yorker).]

Saturday, December 27, 2025

The Last Good Thing

On a late-winter Chicago day that was more gray than cold, I retrieved a binder from a neighbor’s front porch. The binder was fat and unexpectedly heavy, and I had the deranged thought that it might be filled with sand, but it wasn’t filled with sand. It was filled with 92 DVDs. DVDs can seem heavy if you haven’t held them in a while.

I had not been on the lookout for DVDs, and until I became aware of this binder, I had no special attachment to DVDs of any sort. There was no box of Criterion Collection masterpieces lugged from apartment to apartment since my college days. I certainly did not long for the color-coded cables that always had to be untangled and reconnected to the DVD player my husband weirdly couldn’t bring himself to throw away, nor did I miss hunting for the special remote that only ever made an appearance when I was looking for the regular one. Society had moved past DVDs, and frankly, so had I.

Still, the second I saw the binder—containing “practically every major kid’s cartoon movie from the last 20 years on DVD”—appear on my local Free Box Facebook group (where my neighbors give away everything from original artwork to half-empty bottles of shampoo), I wanted it deeply, covetously, like when you see someone wearing a wool sweater that is so entirely your style, you can’t believe it isn’t already yours. Ninety-two disks! Without a moment’s hesitation, I typed, “Interested!” and pressed return. And the next day, I stood awkwardly on my neighbor’s porch to collect my prize.

At this point, I still assumed my excitement about the DVDs had primarily to do with thriftiness, or perhaps a kind of rugged self-reliance. I still assumed their appeal came not from what they could offer me but from what they could free me of, namely going along with the ever-more-expensive whims of Disney+ executives.

In other words, I considered a binder containing 92 DVDs to be the children’s media equivalent of F*** You Money—Take that streaming bill and shove it!—and not, say, something to build my identity as a parent around.

Obviously.

That evening, while my husband sautéed asparagus on the stovetop and my children squabbled over whether to watch Peppa Pig on Amazon Prime or All Engines Go on Netflix, I announced to my family that we were quitting our streaming services and going analog.

“Well, more analog,” I said, suddenly unsure. “Digital analog. Is that a thing?” I sensed that it might not be, but also that this wasn’t particularly important. What was important was that our viewing habits were moving back in time to an era when watching television didn’t require keeping a credit card on file with five different companies.

Then I inhaled sharply, cringing the way one does while uncorking a particularly volatile bottle of champagne. Ditching streaming would be no great struggle for me, someone who watches about as much television as your typical giant Pacific octopus. But the rest of them?

To my surprise, the anticipated shrieks of displeasure never came. My children, whose ears shut down at six p.m. though their bodies keep kicking until eight, wouldn’t even register the change until the end of the month, when our Netflix account finally ran out of gas. At that point they would look at me as though I’d shredded a sacred contract formed between them and the universe. I would, in turn, cheerfully remind them about the DVDs.

“That’s right,” I would say. “They are very shiny. No, stop—you can’t touch them! They scratch.”

Even my husband merely nodded and flipped the asparagus. I could only assume that he was deep in thought, considering the transformative possibilities of spending less time watching television. The two of us have always shared some private dismay about not being altogether more impressive people—Times obit–worthy, ideally, but at the very least, people who exercise more often. Besides, it went without saying that I would not be canceling YouTube Premium, which is where my husband watches sports highlights. In my quest to become a thriftier parent, I had no desire to become a single parent.

An honest account of the binder’s out-of-nowhere appeal should also include observing how neatly DVDs’ technological primacy aligns with my own “reminiscence bump.” This is what psychologists call the increased salience of the autobiographical memories we form between the ages of approximately 10 and 30. For the rest of our lives, although what came before and after will predictably recede, the events of those 20 years will maintain their privileged place in our minds. Researchers aren’t entirely sure why this is. Some suspect novelty: New things are inherently more memorable, and this is a time of new things. Others chalk it up to the sheer number of culturally significant milestones that happen during our teens and 20s, from first kisses and summer jobs and driver’s licenses to weddings and college graduations and—well, more common until recently—first homes. Another theory focuses on storytelling: As we come of age, the places we go and the music we listen to and the people we bond with become the settings and soundtracks and characters for the stories we tell ourselves about the people we are becoming, stories that we’ll carry all our lives.

If these theories sound similar, it’s because they’re all trying to explain the same phenomenon: why our formative years are so very formative. They are all trying to explain why some part of a reasonably well-adjusted, middle-aged woman with a husband and two kids will always be a teenager with spiky hair, trying desperately to convince herself that she likes watching low-budget horror movies.

Low-budget horror movies on DVD, that is. In 1997, when the disks first hit American shelves, I was just 13; by the time revenue from streaming eventually eclipsed that from DVDs (and their higher-definition Blu-ray cousins), I had already left my 20s behind. Which means that for me, the pinnacle of home entertainment is and will always be synonymous with a fat binder of DVDs.

For a few weeks, quitting our streaming services and embracing DVDs indeed seemed like a sacrifice. Quickly, though, the experiment morphed into something quite different. I found myself proselytizing about the Way of the DVD. They’re so cheap, I’d say to another parent at pre-K pickup. People are literally giving them away. Go to a garage sale of any size and there you go: more DVDs for the collection.

It’s nice to really own a thing, I’d say to a colleague with children of her own. It’s nice not to worry something will go poof in the night.

It’s great for the kids to have choices but not too many choices, I’d say to anyone still listening. It’s great when what they want to watch is in the binder, and it’s great when it isn’t and they have to decide whether they want to purchase How to Train Your Dragon: The Hidden World with their tooth-fairy money (both of my kids were in highly productive tooth-losing phases) or wait for a free disk to arrive at the library. Because when everything can be yours just like that, is anything even real?

It’s good for movies to be real, I’d say. Treat them badly—roll them down the stairs or throw them like frisbees or wear them because it’s fun to pretend to have large, glassy robot eyes—and they will scratch. Natural consequences! It’s good for there to be natural consequences. (...)

Unlike VHS tapes, DVDs encode data digitally, allowing for higher video resolution and superior audio quality. DVDs also store more data, and they store the information more efficiently. This is what frees up space for the bells and whistles: dubbed audio tracks and subtitles, director’s cuts and deleted scenes. DVDs are read by laser; so long as they aren’t used as coasters or hockey pucks, they shouldn’t wear or tear at all. On a commercial DVD, even the most determined fool cannot accidentally tape over a favorite movie. And remember the days before opening menus, when you stood by the television and pressed “REW” on the VCR until the members of your family screamed that you’d gone too far, in which case you’d press “FF” until they screamed again? DVDs have menus, and when they arrived, America let out a collective, “Hell yeah.”

But VHS, the technology that DVDs supplanted, was the truly transformative one. VHS was what let us all own movies in the first place, to watch whenever we wanted to. Or was it color television that transformed home entertainment? The rise of network programming? That very first public broadcast? It hardly matters. By the time DVDs came along, the latest crest among so many waves of progress, it seemed inevitable that they would be good, and that the technology that eventually replaced them would be even better.

A lot of things seemed inevitable then.

I grew up, after all, when the growing up was good. The Berlin Wall was coming down, and the world was opening up. The economy was strong and college attendance was on the rise and Americans were more optimistic that children would live better lives than their parents. There were problems, sure, but they were problems that would resolve themselves in time, as a new, more enlightened generation took the helm. I grew up when time itself seemed on my side.

I watched social media connect us, and then I watched it detonate us into a billion tiny factions. I watched smartphones liberate us, and then I watched them capture us all over again. Now I see artificial intelligence on the horizon, and even as I am awestruck by its potential, I shudder.

“When you invent the ship, you also invent the shipwreck,” said the philosopher Paul Virilio. Here’s the thing: I grew up when it still felt possible that we could invent the ship and then put our heads together to avoid the shipwreck. In the world bequeathed to my children, it can seem like there is no avoiding the wreck. And in this world, in this widening gyre of uncertain outcomes and frictionless gratification, DVDs are shiny and real and the same shape as life preservers. DVDs are the last unambiguously good thing: the last technology that arrived and only made things better and would never ever let us down.

by Jess Love, The American Scholar |  Read more:
Image: Gracia Lamb

Friday, December 26, 2025

How Willie Nelson Sees America

When Willie Nelson performs in and around New York, he parks his bus in Weehawken, New Jersey. While the band sleeps at a hotel in midtown Manhattan, he stays on board, playing dominoes, napping. Nelson keeps musician’s hours. For exercise, he does sit-ups, arm rolls, and leg lifts. He jogs in place. “I’m in pretty good shape, physically, for ninety-two,” he told me recently. “Woke up again this morning, so that’s good.”

On September 12th, Nelson drove down to the Freedom Mortgage Pavilion, in Camden. His band, a four-piece, was dressed all in black; Nelson wore black boots, black jeans, and a Bobby Bare T-shirt. His hair, which is thicker and darker than it appears under stage lights, hung in two braids to his waist. A scrim masked the front of the stage, and he walked out unseen, holding a straw cowboy hat. Annie, his wife of thirty-four years, rubbed his back and shoulders. A few friends watched from the wings: members of Sheryl Crow’s band, which had opened for him, and John Doe, the old punk musician, who had flown in from Austin. (At the next show, in Holmdel, Bruce Springsteen showed up.) Out front, big screens played the video for Nelson’s 1986 single “Living in the Promiseland.”

“Promiseland” joined Nelson’s preshow in the spring, after ICE ramped up its raids on immigrants. The lyrics speak on behalf of newcomers: “Give us your tired and weak / And we will make them strong / Bring us your foreign songs / And we will sing along.” The video cuts between footage of Holocaust survivors arriving on Liberty ships and of Haitian migrants on wooden boats. In Camden—two nights after the assassination of Charlie Kirk, one night after the State Department warned immigrants against “praising” his murder, hours after bomb threats forced the temporary closure of seven historically Black colleges—the images hit hard. When the video ended, three things happened at once: stagehands yanked the scrim away, Nelson sang the first notes of “Whiskey River,” and a giant American flag unfurled behind him.

“Whiskey River” has been Nelson’s opener for decades. He tends to start it with a loud, ringing G chord, struck nine times, like a bell. On this night, he sat out the beginning and took the first solo instead, strumming forcefully, pushing the tempo. “I don’t know what I’m going to do when I pick up a guitar,” Nelson said. He plays to find out, discovering new ways into songs he’s been singing, in some cases, since he was a child. “Willie loves to play music more than anyone I’ve ever met,” the musician Norah Jones told me. “He can’t stop, and he shouldn’t.” For Nelson, music is medicine—he won’t do the lung exercises his doctors prescribe, but “singing for an hour is good for you,” he says. His daughter Amy put it more bluntly: “I think it’s literally keeping him alive.”

Last year, Nelson didn’t make it to every performance. On those nights, his older son, Lukas, filled in. At the end of the tour, no one knew if Nelson would go out again; five months later, he did. I started following him in February, in Florida. In Key West, Lukas and Annie flanked Nelson as he sat and rested before going on. Annie had her hand on the small of his back and Lukas on his shoulder; they looked like two cornermen coaxing a boxer back into the ring. Nelson suffers from emphysema. He barely survived COVID-19. (He got so sick he wanted to die; Annie told him if he did she would kill him.) His voice is still inky, he struggles for air, but he stays in charge, or lets go, as the moment requires.

“I’m definitely following Willie,” Nelson’s harmonica player, Mickey Raphael, told me. “He sets the tempo. He picks the songs.” Raphael is tall, with dark, curly hair and the easy swagger of a man who has spent his life onstage. When he started with Nelson, in 1973, there was no set list. Every night was “stream of consciousness,” catch-as-catch-can. Now, even with set lists taped to the carpet, Nelson might switch songs or skip ahead, lose his way, or drop verses—things he did as a younger man, too. At the end of a number that’s really careened, he’ll look over his shoulder and cross his arms in an umpire’s safe sign. “We made it,” he’s telling Raphael on these occasions. “We’re home.” (...)

“Willie means more to me than the Liberty Bell,” Jeff Tweedy told me. Tweedy and his band, Wilco, played a few dates with Nelson this year, as part of the annual Outlaw Music Festival, which Nelson headlined along with Bob Dylan. (Other performers included Billy Strings and Lucinda Williams.) Tweedy said he admires Nelson’s vision of America—“a big tent, and it should be”—and the way Nelson says what he thinks without rancor, always punching up. “He doesn’t aim at his fellow-citizens. He aims at corporations. He aims at injustice.”

Nelson has a knack for leaning left without losing the room. He stumped for Jimmy Carter, who was a friend, and for the former congressman and Presidential candidate Dennis Kucinich; he co-chairs the advisory board of the National Organization for the Reform of Marijuana Laws; he has pushed for the use of biofuels, running his tour buses on vegetable oil and soybeans; he opposed the war in Iraq. In 2006, he recorded a Ned Sublette song called “Cowboys Are Frequently, Secretly Fond of Each Other.” “I’ve known straight and gay people all my life,” he told Texas Monthly. “I can’t tell the difference. People are people where I came from.” (“Beer for My Horses,” a hang-’em-high duet with Toby Keith, has aged less well.)

In 2018, when the government began separating families at the southern border, Nelson said, “Christians everywhere should be up in arms.” That fall, he played a new song, “Vote ’Em Out,” at a rally for Beto O’Rourke, who was running for Senate. O’Rourke told me the point wasn’t only the stand Nelson took; it was the idea of Texas he represented. There was a temptation, O’Rourke said, to accept the caricature of Texas as “extreme, conservative, macho, tough-guy,” though for people like him, who’d lived there all their lives, “true Texas is kindness, hospitality, open hearts.” Nelson, he said, embodied “the best of Texas: you can be a freak, a weirdo, a cowboy, a rancher, a cello player, whatever. He’s the patron saint of that—growing his hair, rejecting corporate music, and just being a good fucking human being.”

At Nelson’s concerts, all of those types gather. They always have. In the seventies, when Nelson was still playing dance halls, ranch hands and refinery workers shared the floor with hippies who’d heard his songs on FM radio. It was a volatile mix. At the Half-Dollar, outside Houston, groups of long-haired kids sat in front of the stage as cowboys two-stepped behind them. The cowboys “would start dancing, do a little spin, and kick somebody in the back,” Steve Earle recalled. “Willie caught it out of the corner of his eye.” Nelson stopped the band in the middle of a song. “There’s room for some to sit and for some to dance,” he said, and, as soon as he did so, there was.

“People out there get to clap their hands and sing for a couple hours, and then they go home feeling better,” Nelson said. “I get the same enjoyment that they do—it’s an equal exchange of energy.” As a young man in Texas, Nelson taught Sunday school and considered the ministry. On the bus in Weehawken, I asked if he saw his work as akin to a preacher’s. “Oh, I don’t know about that,” Nelson said. “I don’t try to preach to nobody.” Annie disagreed: “I think he’s a shaman.” Musicians like him draw strangers together, she said. “Let’s face it, we’re being divided intentionally. That’s part of the playbook—divide and conquer. It’s been around a long time. When somebody’s saying hello to somebody without knowing their political ideology, and they’re just enjoying music together, that’s church. That’s healing. That’s really important right now. Really, really important.” (...)

Nelson doesn’t mind doing two or three takes of a number. He bristles at four. Don Was, who produced Nelson’s album “Across the Borderline,” in 1992, told me about recording the title track in Dublin, where Nelson had a night off from touring. They spent an hour working out the arrangement—talking, not playing—then went for the first take. Halfway through the second verse, Was thought, Oh, man, this is unbelievable. Please, nobody fuck up. “He plays this incredible solo in the middle. Third verse, I’m really freaking out—please, nobody. And nobody did.” Kris Kristofferson added harmonies; that was the only overdub. Then Nelson rolled a joint and marked it with a Sharpie, about three-quarters of the way down. He told the house engineer, “I’m going to smoke this joint. When it gets burned down to the blue dot, your mix is done.” Forty-five minutes later, it was. “That’s the mix on the album,” Was said.

These days, Cannon cuts backing tracks with musicians who “get Willie and don’t look at the clock.” Nelson comes in later, as he was doing now, to play and sing. “He has no pitch issues,” Cannon says. “He’s allergic to out-of-tune-ness.” But Nelson plays odd tricks with rhythm—phrasing behind the beat while his guitar rushes forward. “Willie’s timing is so weird,” Raphael told me. “It’s like a snake slithering across the ground.” Nelson is one of the most imitated guitarists in the world, Cannon says, but, without his feel, imitators “sound silly.” When Nelson plays, “even the crazy shit sounds beautiful.” Cannon tries not to sand down the edges: “I love his music too much to screw it up.” (...)

“You never know exactly what he’s going to do,” Micah Nelson told me, describing the concerts he’s played with his dad. He went on, “You’re always present. Nobody’s phoning it in, because you never know where the spirit’s going to take him.” Nelson may sing a verse way ahead of everyone, when they’re “still on the first chord,” and the instinct is to speed up, to catch him, Micah said. “It’s, like, No, no, he’s waiting for us over there, three blocks away.” Nelson lets the band close the gap, then keep going. “He’s singing so outside of the pocket, there is no pocket. He’s obliterating any sort of timing,” Micah continued. Somehow, it works. Any number of times, Micah has thought, Oh, shit, he’s lost the plot. He always finds it again. Playing with Nelson is like performing with the Flying Wallendas, Micah said, or with Neil Young’s band. It’s the opposite of perfectly choreographed shows with backing tracks that all but play themselves. There’s never a safety net. “Obviously, it helps to have great songs,” he added. “Now that I say it, the songs are the safety net. You really can’t go wrong when you have good songs.” (...)

Amy recalled a time when she and her sister were trampled by fans trying to get to their father: “My mom said, ‘He’s not going to really know what that’s like, because they stop when they get to him. They will plow through you to get to him.’ ” Any hard feelings fell away when she thought about the alternative—years her father had spent going nowhere, the life he might have led had he not broken through. “Whatever resentment I had for his fans disappeared when I started looking at it from that perspective.”

by Alex Abramovich, New Yorker | Read more:
Image: Danny Clinch
[ed. What more is there to say about Willie at this point? Well, this profile of a recent tour is one example. Then there's this, by Bob Dylan:]

I asked Dylan about Nelson, and he wrote back with a warning: “It’s hard to talk about Willie without saying something stupid or irrelevant, he is so much of everything.” He went on:
How can you make sense of him? How would you define the indefinable or the unfathomable? What is there to say? Ancient Viking Soul? Master Builder of the Impossible? Patron poet of people who never quite fit in and don’t much care to? Moonshine Philosopher? Tumbleweed singer with a PhD? Red Bandana troubadour, braids like twin ropes lassoing eternity? What do you say about a guy who plays an old, battered guitar that he treats like it’s the last loyal dog in the universe? Cowboy apparition, writes songs with holes that you can crawl through to escape from something. Voice like a warm porchlight left on for wanderers who kissed goodbye too soon or stayed too long. I guess you can say all that. But it really doesn’t tell you a lot or explain anything about Willie. Personally speaking I’ve always known him to be kind, generous, tolerant and understanding of human feebleness, a benefactor, a father and a friend. He’s like the invisible air. He’s high and low. He’s in harmony with nature. And that’s what makes him Willie.

The Precipice

May 1991. Mumbai. Night.

While politicians slept, trucks were loading gold—67 tonnes of it—at the Reserve Bank of India’s vaults in South Bombay. Essentially all of India’s gold reserves. The trucks drove 35 kilometers to the airport under armed guard. There, the gold was loaded onto chartered cargo planes.

Commercial airlines had refused the job. Too risky. Too desperate.

Between May 21 and 31, four flights carried India’s treasure out of the country: 20 tonnes to UBS in Switzerland, 47 tonnes to the Bank of England in London. The RBI had to charter something called “Heavy Lift Cargo Airlines” because nobody else would touch this operation.

The gold was collateral. India was pawning its jewelry.

If you want to understand what this meant culturally, consider: In India, gold isn’t just an asset. It’s sacred. The goddess Lakshmi is depicted sitting on gold coins. Indian weddings feature kilograms of gold because “Does she have gold?” is the first question asked about brides. Women remove their gold only at death or divorce.

And here was the nation shipping its treasure to its former colonizer. At night. In secret. Like a family selling heirlooms to pay the landlord.

When the news leaked, there was public outrage: “We have pawned our mother’s jewelry!”

The operation raised $600 million.

It bought India about three weeks.

Foreign exchange reserves had fallen to $1.2 billion—enough for roughly fifteen days of imports. Fifteen days until the food shipments stopped. Fifteen days until the oil stopped. Fifteen days until a nuclear-armed nation of 900 million people defaulted on its debts.

What happens when a country that size defaults? What happens when the imports stop?

We know what happened to the Soviet Union. It collapsed. India was heading there—fast.

The Most Important People You’ve Never Heard Of

Three men you’ve probably never heard of—P.V. Narasimha Rao, Manmohan Singh, Montek Singh Ahluwalia—may be the three most important people of the late 20th century.

Bold claim. Audacious, even. Let me defend it.

Here are the numbers. In 1991, over 45% of Indians lived below the poverty line—roughly 400 million people. By 2024, extreme poverty in India had fallen to under 3%.

That’s 400 to 500 million people lifted out of poverty.

The largest democratic poverty alleviation in human history. (...)

Nothing else comes close to democratic poverty alleviation at this scale.

And here’s the thing about crises: they don’t automatically produce reform. Crisis alone doesn’t fix anything.

Argentina has had crisis after crisis—and keeps defaulting, keeps returning to the same failed policies. Greece in 2010 accepted bailouts, changed almost nothing structural, and remains economically fragile. Venezuela’s oil crises led not to reform but to doubling down on socialism, and now people eat from garbage trucks.

The Soviet Union faced a crisis and collapsed. It didn’t reform. It disintegrated.

India could have gone any of those directions. What makes these three men remarkable isn’t that they faced a crisis—it’s that they converted crisis into transformation. That almost never happens.

And because it worked—because the catastrophe was prevented—nobody remembers.

You can’t feel gratitude for the plane that didn’t crash. You can’t celebrate the engineer who prevented the disaster you never experienced. The counterfactual isn’t real to anyone.

This is why India forgot them. But that’s for Part 3. First, let’s understand what they were saving us from.

by Samir Varna |  Read more:
Image: uncredited

Sunday, December 21, 2025

The Day the Dinosaurs Died

A young paleontologist may have discovered a record of the most significant event in the history of life on Earth. “It’s like finding the Holy Grail clutched in the bony fingers of Jimmy Hoffa, sitting on top of the Lost Ark.”

If, on a certain evening about sixty-six million years ago, you had stood somewhere in North America and looked up at the sky, you would have soon made out what appeared to be a star. If you watched for an hour or two, the star would have seemed to grow in brightness, although it barely moved. That’s because it was not a star but an asteroid, and it was headed directly for Earth at about forty-five thousand miles an hour. Sixty hours later, the asteroid hit. The air in front was compressed and violently heated, and it blasted a hole through the atmosphere, generating a supersonic shock wave. The asteroid struck a shallow sea where the Yucatán peninsula is today. In that moment, the Cretaceous period ended and the Paleogene period began.

A few years ago, scientists at Los Alamos National Laboratory used what was then one of the world’s most powerful computers, the so-called Q Machine, to model the effects of the impact. The result was a slow-motion, second-by-second false-color video of the event. Within two minutes of slamming into Earth, the asteroid, which was at least six miles wide, had gouged a crater about eighteen miles deep and lofted twenty-five trillion metric tons of debris into the atmosphere. Picture the splash of a pebble falling into pond water, but on a planetary scale. When Earth’s crust rebounded, a peak higher than Mt. Everest briefly rose up. The energy released was more than that of a billion Hiroshima bombs, but the blast looked nothing like a nuclear explosion, with its signature mushroom cloud. Instead, the initial blowout formed a “rooster tail,” a gigantic jet of molten material, which exited the atmosphere, some of it fanning out over North America. Much of the material was several times hotter than the surface of the sun, and it set fire to everything within a thousand miles. In addition, an inverted cone of liquefied, superheated rock rose, spread outward as countless red-hot blobs of glass, called tektites, and blanketed the Western Hemisphere.

Some of the ejecta escaped Earth’s gravitational pull and went into irregular orbits around the sun. Over millions of years, bits of it found their way to other planets and moons in the solar system. Mars was eventually strewn with the debris—just as pieces of Mars, knocked aloft by ancient asteroid impacts, have been found on Earth. A 2013 study in the journal Astrobiology estimated that tens of thousands of pounds of impact rubble may have landed on Titan, a moon of Saturn, and on Europa and Callisto, which orbit Jupiter—three satellites that scientists believe may have promising habitats for life. Mathematical models indicate that at least some of this vagabond debris still harbored living microbes. The asteroid may have sown life throughout the solar system, even as it ravaged life on Earth.

The asteroid was vaporized on impact. Its substance, mingling with vaporized Earth rock, formed a fiery plume, which reached halfway to the moon before collapsing in a pillar of incandescent dust. Computer models suggest that the atmosphere within fifteen hundred miles of ground zero became red hot from the debris storm, triggering gigantic forest fires. As the Earth rotated, the airborne material converged at the opposite side of the planet, where it fell and set fire to the entire Indian subcontinent. Measurements of the layer of ash and soot that eventually coated the Earth indicate that fires consumed about seventy per cent of the world’s forests. Meanwhile, giant tsunamis resulting from the impact churned across the Gulf of Mexico, tearing up coastlines, sometimes peeling up hundreds of feet of rock, pushing debris inland and then sucking it back out into deep water, leaving jumbled deposits that oilmen sometimes encounter in the course of deep-sea drilling.

The damage had only begun. Scientists still debate many of the details, which are derived from the computer models, and from field studies of the debris layer, knowledge of extinction rates, fossils and microfossils, and many other clues. But the over-all view is consistently grim. The dust and soot from the impact and the conflagrations prevented all sunlight from reaching the planet’s surface for months. Photosynthesis all but stopped, killing most of the plant life, extinguishing the phytoplankton in the oceans, and causing the amount of oxygen in the atmosphere to plummet. After the fires died down, Earth plunged into a period of cold, perhaps even a deep freeze. Earth’s two essential food chains, in the sea and on land, collapsed. About seventy-five per cent of all species went extinct. More than 99.9999 per cent of all living organisms on Earth died, and the carbon cycle came to a halt.

Earth itself became toxic. When the asteroid struck, it vaporized layers of limestone, releasing into the atmosphere a trillion tons of carbon dioxide, ten billion tons of methane, and a billion tons of carbon monoxide; all three are powerful greenhouse gases. The impact also vaporized anhydrite rock, which blasted ten trillion tons of sulfur compounds aloft. The sulfur combined with water to form sulfuric acid, which then fell as an acid rain that may have been potent enough to strip the leaves from any surviving plants and to leach the nutrients from the soil.

Today, the layer of debris, ash, and soot deposited by the asteroid strike is preserved in the Earth’s sediment as a stripe of black about the thickness of a notebook. This is called the KT boundary, because it marks the dividing line between the Cretaceous period and the Tertiary period. (The Tertiary has been redefined as the Paleogene, but the term “KT” persists.) Mysteries abound above and below the KT layer. In the late Cretaceous, widespread volcanoes spewed vast quantities of gas and dust into the atmosphere, and the air contained far higher levels of carbon dioxide than the air that we breathe now. The climate was tropical, and the planet was perhaps entirely free of ice. Yet scientists know very little about the animals and plants that were living at the time, and as a result they have been searching for fossil deposits as close to the KT boundary as possible.

One of the central mysteries of paleontology is the so-called “three-metre problem.” In a century and a half of assiduous searching, almost no dinosaur remains have been found in the layers three metres, or about nine feet, below the KT boundary, a depth representing many thousands of years. Consequently, numerous paleontologists have argued that the dinosaurs were on the way to extinction long before the asteroid struck, owing perhaps to the volcanic eruptions and climate change. Other scientists have countered that the three-metre problem merely reflects how hard it is to find fossils. Sooner or later, they’ve contended, a scientist will discover dinosaurs much closer to the moment of destruction.

Locked in the KT boundary are the answers to our questions about one of the most significant events in the history of life on the planet. If one looks at the Earth as a kind of living organism, as many biologists do, you could say that it was shot by a bullet and almost died. Deciphering what happened on the day of destruction is crucial not only to solving the three-metre problem but also to explaining our own genesis as a species.

On August 5, 2013, I received an e-mail from a graduate student named Robert DePalma. I had never met DePalma, but we had corresponded on paleontological matters for years, ever since he had read a novel I’d written that centered on the discovery of a fossilized Tyrannosaurus rex killed by the KT impact. “I have made an incredible and unprecedented discovery,” he wrote me, from a truck stop in Bowman, North Dakota. “It is extremely confidential and only three others know of it at the moment, all of them close colleagues.” He went on, “It is far more unique and far rarer than any simple dinosaur discovery. I would prefer not outlining the details via e-mail, if possible.” He gave me his cell-phone number and a time to call...

DePalma’s find was in the Hell Creek geological formation, which outcrops in parts of North Dakota, South Dakota, Montana, and Wyoming, and contains some of the most storied dinosaur beds in the world. At the time of the impact, the Hell Creek landscape consisted of steamy, subtropical lowlands and floodplains along the shores of an inland sea. The land teemed with life and the conditions were excellent for fossilization, with seasonal floods and meandering rivers that rapidly buried dead animals and plants.

Dinosaur hunters first discovered these rich fossil beds in the late nineteenth century. In 1902, Barnum Brown, a flamboyant dinosaur hunter who worked at the American Museum of Natural History, in New York, found the first Tyrannosaurus rex here, causing a worldwide sensation. One paleontologist estimated that in the Cretaceous period Hell Creek was so thick with T. rexes that they were like hyenas on the Serengeti. It was also home to triceratops and duckbills. (...)

Today, DePalma, now thirty-seven, is still working toward his Ph.D. He holds the unpaid position of curator of vertebrate paleontology at the Palm Beach Museum of Natural History, a nascent and struggling museum with no exhibition space. In 2012, while looking for a new pond deposit, he heard that a private collector had stumbled upon an unusual site on a cattle ranch near Bowman, North Dakota. (Much of the Hell Creek land is privately owned, and ranchers will sell digging rights to whoever will pay decent money, paleontologists and commercial fossil collectors alike.) The collector felt that the site, a three-foot-deep layer exposed at the surface, was a bust: it was packed with fish fossils, but they were so delicate that they crumbled into tiny flakes as soon as they met the air. The fish were encased in layers of damp, cracked mud and sand that had never solidified; it was so soft that it could be dug with a shovel or pulled apart by hand. In July, 2012, the collector showed DePalma the site and told him that he was welcome to it. (...)

The following July, DePalma returned to do a preliminary excavation of the site. “Almost right away, I saw it was unusual,” he told me. He began shovelling off the layers of soil above where he’d found the fish. This “overburden” is typically material that was deposited long after the specimen lived; there’s little in it to interest a paleontologist, and it is usually discarded. But as soon as DePalma started digging he noticed grayish-white specks in the layers which looked like grains of sand but which, under a hand lens, proved to be tiny spheres and elongated droplets. “I think, Holy shit, these look like microtektites!” DePalma recalled. Microtektites are the blobs of glass that form when molten rock is blasted into the air by an asteroid impact and falls back to Earth in a solidifying drizzle. The site appeared to contain microtektites by the million.

As DePalma carefully excavated the upper layers, he began uncovering an extraordinary array of fossils, exceedingly delicate but marvellously well preserved. “There’s amazing plant material in there, all interlaced and interlocked,” he recalled. “There are logjams of wood, fish pressed against cypress-tree root bundles, tree trunks smeared with amber.” Most fossils end up being squashed flat by the pressure of the overlying stone, but here everything was three-dimensional, including the fish, having been encased in sediment all at once, which acted as a support. “You see skin, you see dorsal fins literally sticking straight up in the sediments, species new to science,” he said. As he dug, the momentousness of what he had come across slowly dawned on him. If the site was what he hoped, he had made the most important paleontological discovery of the new century.

by Douglas Preston, New Yorker |  Read more:
Image: Richard Barnes

Saturday, December 20, 2025

John & Yoko: One to One

A fire alert disrupts the Venice screening of One to One: John & Yoko, Kevin Macdonald and Sam Rice-Edwards’ documentary about Lennon’s rambunctious post-Beatles heyday, when he and his artist wife Ono were first putting down roots in New York. Inside the hushed screening room, the flashing red lights and blaring alarm provide the second big surprise of the night. The first was how much I was enjoying the show.

Short of a documentary that unearths incontrovertible new evidence that he faked his own death, I’m not convinced that the world needs another John Lennon film. The medium, surely, has him well covered already. But Macdonald and Rice-Edwards have managed to find and mine a rich source of material, tightly tucked away amid all the other wildcat wells. Their film turns back the clock to the early 1970s and a benefit gig that occurred around the time of Lennon’s deportation battle with Nixon (see previous documentaries for details) and his extended lost weekend with May Pang (ditto). Crucially, too, it throws this concert against the maelstrom of the US political scene, with a channel-surfing aesthetic that skips from car and Coke commercials to the Attica prison riot and the near-fatal shooting of Alabama governor George Wallace.


While Lennon claims that he spent his first year in New York mostly watching TV, One to One suggests otherwise. Instead he hit the ground running, hurling himself at the action to become the standard bearer and figurehead for whatever progressive leftist cause was doing the rounds that week. The film blends archive footage with a trove of previously unheard phone conversations to show the ways in which he and Ono leveraged their celebrity status and surrounded themselves with a crew of colourful upstarts, from Allen Ginsberg to Jerry Rubin. The oddest of these, perhaps, is the activist AJ Weberman, who is tasked with a mission to raid Bob Dylan’s bins in order to prove what a “multimillionaire hypocrite” the singer has become. Ono pleads with Weberman to apologise, explaining that they need Dylan to perform at a planned “Free the People” concert in Miami, but AJ is unrepentant and initially won’t be budged.

In the event, the Free the People concert was cancelled. But Lennon promptly finds a new focus with the One to One benefit for disabled children from the Willowbrook state school. Macdonald and Rice-Edwards have remastered Phil Spector’s muddy original recording so that the footage now plays with a fresh, bullish swagger. This was Lennon’s first full-length concert since the Beatles performed at Candlestick Park and, it transpired, the last he would ever play.

If only more nostalgic music documentaries could muster such a fun, fierce and full-blooded take on old, familiar material. One to One, against the odds, makes Lennon feel somehow vital again. It catches him like a butterfly at arguably his most interesting period, when he felt liberated and unfettered and was living “like a student” in a two-room loft in Greenwich Village. He’s radioactive with charisma, tilting at windmills and kicking out sparks. 

by Xan Brooks, The Guardian |  Read more:
Image: One to One/YT
[ed. Haven't seen this yet, but the link above about May Pang and her relationship with John was fascinating. Didn't know Yoko set them up to take pressure off of John's straying, and that, after a couple of years (and an alleged affair of her own), she became jealous and reeled him back in.]

Friday, December 19, 2025

Favorite Rob Reiner Credits

When Rob Reiner was killed earlier this week, along with his wife and creative partner Michelle, the world of film lost one of its most beloved and respected figures, an artist who had done very good and extremely popular work in a variety of genres, first in front of the camera, then behind it as a writer, producer, and director, and then again in his later life as an actor. All the while, Reiner maintained a spotless reputation as a mensch, in an industry with vanishingly few of those. He was one of the most sophisticated and successful political activists in California, and his work (and money) helped pass the state's groundbreaking marriage equality law. Few filmmakers have had as vast or varied an impact on American life over the last 50 years, which is something that Reiner would surely have found very funny. Here are some of Reiner's films and roles that we love:

Stand By Me

Stand By Me is probably the purest chunk of schmaltz in Rob Reiner's generational early-career run. The movie is oozing with sentiment, factory-designed to squeeze profundity out of every otherwise mundane childhood interaction, and some not so mundane. It pulls out every trouble-at-home cliché to make you root for the kids and add dramatic heft. Richard Dreyfuss's narration should come with an insulin pump.

And yet it works! It works. You root for the kids, and you identify with them; you laugh when you're meant to laugh and cry when you're supposed to; and yes, through the sheen of memory, all those moments with your own childhood pals take on a patina that preserves them as something meaningful. It's distilled nostalgia, which in moviemaking is much easier to fuck up than to get right.

Weapons-grade middlebrow competence was Reiner's strength. That's a compliment, to be clear, especially as Hollywood has come to devalue that skillset and the type of work it produced. He was visually unflashy, almost to the point that it became his signature as a director. I'm not sure what a Rob Reiner film "looks like." He mostly picked great scripts, made his visual and storytelling choices, and got out of the way to let his actors cook. In Stand By Me, his first crucial decision was to give the movie a main character; the novella focuses on all four boys equally. The second was the casting. Reiner reportedly auditioned more than 300 kids, and got all four exactly right. A Mount Rushmore of child actors could credibly just be the four boys from this film.

It can be easy and tempting to think of a movie as something that just sort of happens, succeeding or failing for ineffable reasons, but it's really a collection of a million different choices—most of the big ones made by the director—any one of which, if misguided, could torpedo the whole thing. Stand By Me doesn't work if the kids don't work. For all its flaws, every choice that Reiner needed to nail in this movie, he nailed. You can more or less say the same for his entire first 12 years of directing. His hit rate was a miracle—no, not a miracle, that denies agency. It is the collective work of a real-deal genius.  (...)

- Barry Petchesky

When Harry Met Sally

It’s like 90 minutes, and all of them are perfect. Harry and Sally might suffer for their neuroses, but the greatest gift a director can give an audience is a film whose every detail was obsessed over. New York, warm and orange, has never looked better. Carrie Fisher says her lines the only way they could ever sound: You’re right, you’re right, I know you’re right. I want you to know that I will never want that wagon wheel coffee table.

That a film so brisk can feel so lived-in owes to Nora Ephron’s screenplay and also to Reiner’s neat choices, like the split-screen that makes it look like Harry and Sally are watching Casablanca in the same bed, an effect dialed up later in a continuously shot four-way phone call scene that took 60 tries to get right. Every time I watch When Harry Met Sally, I think it must have been impossible to make; the coziness of the movie is cut with something sad and mischievous and hard to describe. Estelle Reiner’s deadpan line reading at Katz’s Deli is a classic, and every family Pictionary night in our house began with someone guessing “baby fish mouth,” but the bit that came to mind first was this scene set at a Giants game: Harry tells Jess about his wife’s affair between rounds of the wave.

- Maitreyi Anantharaman


Michael "Meathead" Stivic in All In The Family

Rob Reiner was proof that every once in a rare while, nepotism is a great idea. He gleaned nearly every lesson he could from his father Carl, one of this nation's undisputed comedic geniuses, and put them all to best use across his voluminous IMDb page.

The credit that Reiner broke out with was the one that seemed with hindsight to be the least consequential of them all—his straight man/son-in-law/earnest doofus role in the Norman Lear sitcom All In The Family. The show, which for several years was the nation's defining situation comedy, ran through the risible but weirdly prescient venom of Carroll O'Connor's towering performance, and positioned Reiner as the stereotypically liberal son-in-law and foil for O'Connor's cardboard conservative Archie Bunker. Reiner helped frame the show, while mostly serving up setups for O'Connor. He played the part well, but it was not an especially dignified one. I mean, his character's name was Mike Stivic, but he became known universally as "Meathead" because Bunker only referred to him as such. Reiner learned from his father's years with Mel Brooks how to be that acquiescent foil, and if his work in that part did not make him a recognized comedian except to those folks who knew how comedy actually works, it indisputably gave him an eight-year advanced education on all the things required to make funny. Those studies would serve him well in his director's chair. His gift was not in being the funny, but in building sturdy and elegant setups for the funny, and there has never been a good comedy movie without that. The Princess Bride doesn't work for 10 minutes without Cary Elwes, and Elwes's performance wouldn't work if his director did not repeatedly put him in position to succeed.

Maybe Reiner would not have gotten the AITF gig without being his father's son—Richard Dreyfuss also wanted the role and Harrison Ford turned it down, for what that may be worth—but sometimes nepotism works for those outside the family, too. Reiner wrote three of the 174 episodes in which he appeared; he learned to thrive behind and off to the side of the camera. It all counted, it all contributed, and every credit listed here owes some of its shine to that television show, which in turn owes its existence to The Dick Van Dyke Show and his father and Mel Brooks's work with The 2000-Year-Old Man and Your Show Of Shows. That takes us back 75 years, into the earliest days of the medium, which may as well be the entire history of American comedy. Every giant stood on the shoulders of another, and that giant did the same. It is all of a piece, and IMDb would be half as large and a quarter as useful without them, and him.

- Ray Ratto

This Is Spinal Tap

In a particularly on-brand bit of trivia, I first became aware of This Is Spinal Tap through Guitar Hero II. The titular band’s hit “Tonight I’m Gonna Rock You Tonight” was downloadable content for that game, and I spent hours trying to perfect it before I ever thought about watching the movie it hailed from. I did eventually watch it, and I remember exactly where I was—in Venezuela in the summer of 2007, traveling around for the Copa América—because Spinal Tap is a near-flawless movie, and one that seared itself into my brain. I can’t recall with certainty, but I’m pretty sure that this is when I first became aware of Rob Reiner—I knew his dad from Ocean’s Eleven, another perfect movie—and Spinal Tap is such a stunning collection of talent that it’s hard to pick out a favorite role or MVP. Here’s the thing about that, though: The best and most important performance in the film might be from Reiner himself, because the movie doesn’t work as well as it does without him.

On the one hand, this is obvious; he directed the movie and co-wrote it, so his fingerprints are quite naturally all over it. And yet, in a movie full of massive characters and comedians perfectly suited for those roles, Reiner’s performance as the flabbergasted documentarian is what makes the whole thing hang together. Reiner was a comedic genius in his own right, but I think the thing I appreciate most about Spinal Tap whenever I watch it is how much he understands about his cast’s strengths and how much he allows himself to recede into the background while still working to guide the jokes to their best conclusions. Every great comedy needs a straight man, and Reiner’s Marty DiBergi is certainly that, but the movie is so funny, and Reiner is such a welcome presence on screen, that even DiBergi gets to be effortlessly hilarious. He does this, for the most part, just by playing an ostensibly normal person and turning that all up to, well, 11.

Let’s take what I consider one of the most iconic comedic scenes of all time, and certainly the one that I have quoted the most in my life: “It’s one louder.”


Christopher Guest is perfect in this scene, unsurprisingly; his Nigel Tufnel is an idiot, and the movie gets a lot of humor out of that fact throughout, and especially here. However, Reiner’s plain-spoken incredulity over the idiocy is what really elevates the scene for me. You can feel his character grappling with this concept throughout: First with a matter-of-fact revelation (“Oh I see, and most of the amps go to 10”), but then he comes in with the setup: “Why don’t you just make 10 louder and make 10 be the top number, and make that a little louder?” Every single time I watch this scene, the pause before Guest goes “These go to eleven” makes me giggle in anticipation.

Spinal Tap is hilarious in its own right, and also birthed the mockumentary genre; it’s crazy to think about all of the things that the movie directly influenced, from Guest’s own filmmaking work (shout out Best In Show), to Drop Dead Gorgeous, on through Popstar: Never Stop Never Stopping. God, I love that last one, and so many things that work in Popstar are directly traceable to the work Reiner did on Spinal Tap. (Spinal Tap also birthed a sequel just this year; I haven’t watched it yet, mainly because of how much I love the original and don’t need more from this stupid British band, but I am relieved to report that I’ve heard it’s a fine enough time at the movies.)

That This Is Spinal Tap was Reiner’s directorial debut only adds to the absurdity. Who produces not just a masterpiece, but such an utterly distinctive piece of work, on their first real attempt? The answer, really, is that Reiner was a master, and he would go on to prove it with a historic run over the next decade, making Stand By Me, The Princess Bride, When Harry Met Sally, Misery, and A Few Good Men in just eight years. Ridiculous. This Is Spinal Tap is my favorite of all of those, though, and one of the most rewatchable movies ever made. Hell, as I’m writing this, I just remembered the scene where Reiner reads the band some reviews (“The review you had on Shark Sandwich, which was merely a two-word review, just said … Shit Sandwich”), which is also among the funniest things put to film. The whole movie is strewn with gems like that. What a gift.

- Luis Paez-Pumar

by Defector Staff, Defector |  Read more:
Images: Andy Schwartz/Fotos International/Getty Images; Harry Met Sally, Spinal Tap (YouTube).
[ed. See also: As You Wish: Rob Reiner (1947-2025). Ebert.com]

Thursday, December 18, 2025

Finding Peter Putnam

The forgotten janitor who discovered the logic of the mind

The neighborhood was quiet. There was a chill in the air. The scent of Spanish moss hung from the cypress trees. Plumes of white smoke rose from the burning cane fields and stretched across the skies of Terrebonne Parish. The man swung a long leg over a bicycle frame and pedaled off down the street.

It was 1987 in Houma, Louisiana, and he was headed to the Department of Transportation, where he was working the night shift, sweeping floors and cleaning toilets. He was just picking up speed when a car came barreling toward him with a drunken swerve.

A screech shot down the corridor of East Main Street, echoed through the vacant lots, and rang out over the Bayou.

Then silence.
 
The 60-year-old man lying on the street, as far as anyone knew, was just a janitor hit by a drunk driver. There was no mention of it on the local news, no obituary in the morning paper. His name might have been Anonymous. But it wasn’t.

His name was Peter Putnam. He was a physicist who’d hung out with Albert Einstein, John Archibald Wheeler, and Niels Bohr, and two blocks from the crash, in his run-down apartment, where his partner, Claude, was startled by a screech, were thousands of typed pages containing a groundbreaking new theory of the mind.

“Only two or three times in my life have I met thinkers with insights so far reaching, a breadth of vision so great, and a mind so keen as Putnam’s,” Wheeler said in 1991. And Wheeler, who coined the terms “black hole” and “wormhole,” had worked alongside some of the greatest minds in science.

Robert Works Fuller, a physicist and former president of Oberlin College, who worked closely with Putnam in the 1960s, told me in 2012, “Putnam really should be regarded as one of the great philosophers of the 20th century. Yet he’s completely unknown.”

That word—unknown—it came to haunt me as I spent the next 12 years trying to find out why.

The American Philosophical Society Library in Philadelphia, with its marbled floors and chandeliered ceilings, is home to millions of rare books and manuscripts, including John Wheeler’s notebooks. I was there in 2012, fresh off writing a physics book that had left me with nagging questions about the strange relationship between observer and observed. Physics seemed to suggest that observers play some role in the nature of reality, yet who or what an observer is remained a stubborn mystery.

Wheeler, who made key contributions to nuclear physics, general relativity, and quantum gravity, had thought more about the observer’s role in the universe than anyone—if there was a clue to that mystery anywhere, I was convinced it was somewhere in his papers. That’s when I turned over a mylar overhead, the kind people used to lay on projectors, with the titles of two talks, as if given back-to-back at the same unnamed event:

Wheeler: From Reality to Consciousness

Putnam: From Consciousness to Reality

Putnam, it seemed, had been one of Wheeler’s students, one whom Wheeler held in exceptionally high regard. That was odd, because Wheeler’s students were known for becoming physics superstars, earning fame, prestige, and, in some cases, Nobel Prizes: Richard Feynman, Hugh Everett, and Kip Thorne.

Back home, a Google search yielded images of a very muscly, very orange man wearing a very small speedo. This, it turned out, was the wrong Peter Putnam. Eventually, I stumbled on a 1991 article in the Princeton Alumni Weekly newsletter called “Brilliant Enigma.” “Except for the barest outline,” the article read, “Putnam’s life is ‘veiled,’ in the words of Putnam’s lifelong friend and mentor, John Archibald Wheeler.”

A quick search of old newspaper archives turned up an intriguing article from the Associated Press, published six years after Putnam’s death. “Peter Putnam lived in a remote bayou town in Louisiana, worked as a night watchman on a swing bridge [and] wrote philosophical essays,” the article said. “He also tripled the family fortune to about $40 million by investing successfully in risky stock ventures.”

The questions kept piling up. Forty million dollars?

I searched a while longer for more information but came up empty-handed. Still, I couldn’t forget about Peter Putnam. His name played like a song stuck in my head. I decided to track down anyone who might have known him.

The only paper Putnam ever published was co-authored with Robert Fuller, so I flew from my home in Cambridge, Massachusetts, to Berkeley, California, to meet him. Fuller was nearing 80 years old but had an imposing presence and a booming voice. He sat across from me in his sun-drenched living room, seeming thrilled to talk about Putnam yet plagued by some palpable regret.

Putnam had developed a theory of the brain that “ranged over the whole of philosophy, from ethics to methodology to mathematical foundations to metaphysics,” Fuller told me. He compared Putnam’s work to Alan Turing’s and Kurt Gödel’s. “Turing, Gödel, and Putnam—they’re three peas in a pod,” Fuller said. “But one of them isn’t recognized.” (...)

Phillips Jones, a physicist who worked alongside Putnam in the early 1960s, told me over the phone, “We got the sense that what Einstein’s general theory was for physics, Peter’s model would be for the mind.”

Even Einstein himself was impressed with Putnam. At 19 years old, Putnam went to Einstein’s house to talk with him about Arthur Stanley Eddington, the British astrophysicist. (Eddington led the key eclipse expedition that confirmed Einstein’s theory of gravity.) Putnam was obsessed with an allegory by Eddington about a fisherman and wanted to ask Einstein about it. Putnam also wanted Einstein to give a speech promoting world government to a political group he’d organized. Einstein—who was asked by plenty of people to do plenty of things—thought highly enough of Putnam to agree.

How could this genius, this Einstein of the mind, just vanish into obscurity? When I asked why, if Putnam was so important, no one had ever heard of him, everyone gave me the same answer: because he didn’t publish his work, and even if he had, no one would have understood it.

“He spoke and wrote in ‘Putnamese,’ ” Fuller said. “If you can find his papers, I think you’ll immediately see what I mean.” (...)

Skimming through the papers, I saw that the people I’d spoken to hadn’t been kidding about the Putnamese. “To bring the felt under mathematical categories involves building a type of mathematical framework within which latent colliding heuristics can be exhibited as of a common goal function,” I read, before dropping the paper with a sigh. Each one went on like that for hundreds of pages at a time, on none of which did he apparently bother to stop and explain what the whole thing was really about...

Putnam spent most of his time alone, Fuller had told me. “Because of this isolation, he developed a way of expressing himself in which he uses words, phrases, concepts, in weird ways, peculiar to himself. The thing would be totally incomprehensible to anyone.” (...)


Imagine a fisherman who’s exploring the life of the ocean. He casts his net into the water, scoops up a bunch of fish, inspects his catch and shouts, “A-ha! I have made two great scientific discoveries. First, there are no fish smaller than two inches. Second, all fish have gills.”

The fisherman’s first “discovery” is clearly an error. It’s not that there are no fish smaller than two inches, it’s that the holes in his net are two inches in diameter. But the second discovery seems to be genuine—a fact about the fish, not the net.

This was the Eddington allegory that obsessed Putnam.

When physicists study the world, how can they tell which of their findings are features of the world and which are features of their net? How do we, as observers, disentangle the subjective aspects of our minds from the objective facts of the universe? Eddington suspected that one couldn’t know anything about the fish until one knew the structure of the net.

That’s what Putnam set out to do: come up with a description of the net, a model of “the structure of thought,” as he put it in a 1948 diary entry.

At the time, scientists were abuzz with a new way of thinking about thinking. Alan Turing had worked out an abstract model of computation, which quickly led not only to the invention of physical computers but also to the idea that perhaps the brain, too, was a kind of Turing machine.

Putnam disagreed. “Man is a species of computer of fundamentally different genus than those she builds,” he wrote. It was a radical claim (and not only for the mixed genders): He wasn’t saying that the mind isn’t a computer; he was saying it was an entirely different kind of computer.

A universal Turing machine is a powerful thing, capable of computing anything that can be computed by an algorithm. But Putnam saw that it had its limitations. A Turing machine, by design, performs deductive logic—logic where the answers to a problem are contained in its premises, where the rules of inference are pregiven, and information is never created, only shuffled around. Induction, on the other hand, is the process by which we come up with the premises and rules in the first place. “Could there be some indirect way to model or orient the induction process, as we do deductions?” Putnam asked.

Putnam laid out the dynamics of what he called a universal “general purpose heuristic”—which we might call an “induction machine,” or more to the point, a mind—borrowing from the mathematics of game theory, which was thick in the air at Princeton. His induction “game” was simple enough. He imagined a system (immersed in an environment) that could make one mutually exclusive “move” at a time. The system is composed of a massive number of units, each of which can switch between one of two states. They all act in parallel, switching, say, “on” and “off” in response to one another. Putnam imagined that these binary units could condition one another’s behavior, so if one caused another to turn on (or off) in the past, it would become more likely to do so in the future. To play the game, the rule is this: The first chain of binary units, linked together by conditioned reflexes, to form a self-reinforcing loop emits a move on behalf of the system.
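
The "first loop wins" rule is easier to see in code. Here is a minimal Python sketch of it, under invented assumptions: the reflex graph, the unit names, and the motor output attached to the loop are illustrative stand-ins, not anything from Putnam's papers.

```python
# Sketch: the first chain of conditioned links that closes on
# itself (a self-reinforcing loop) emits a move for the system.
# The graph below is a made-up toy, not Putnam's formalism.

def first_self_reinforcing_loop(links, start):
    """Follow conditioned links from an initially active unit and
    return the first chain that closes on itself, if any."""
    path, seen = [start], {start}
    unit = start
    while unit in links:
        unit = links[unit]          # strongest conditioned successor
        if unit in seen:            # the chain has closed: a loop
            return path[path.index(unit):]
        path.append(unit)
        seen.add(unit)
    return None                     # activity dissipates; no move

# Each unit's strongest conditioned successor, built up by history:
links = {"D": "A", "A": "B", "B": "C", "C": "B"}

# Loops are wired to motor output on behalf of the whole system:
moves = {("B", "C"): "turn left"}

loop = first_self_reinforcing_loop(links, "D")
print(loop)                         # ['B', 'C']
print(moves[tuple(sorted(loop))])   # turn left
```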

Every game needs a goal. In a Turing machine, goals are imposed from the outside. For true induction, the process itself should create its own goals. And there was a key constraint: Putnam realized that the dynamics he had in mind would only work mathematically if the system had just one goal governing all its behavior.

That’s when it hit him: The goal is to repeat. Repetition isn’t a goal that has to be programmed in from the outside; it’s baked into the very nature of things—to exist from one moment to the next is to repeat your existence. “This goal function,” Putnam wrote, “appears pre-encoded in the nature of being itself.”

So, here’s the game. The system starts out in a random mix of “on” and “off” states. Its goal is to repeat that state—to stay the same. But in each turn, a perturbation from the environment moves through the system, flipping states, and the system has to emit the right sequence of moves (by forming the right self-reinforcing loops) to alter the environment in such a way that it will perturb the system back to its original state.

Putnam’s remarkable claim was that simply by playing this game, the system will learn; its sequences of moves will become increasingly less random. It will create rules for how to behave in a given situation, then automatically root out logical contradictions among those rules, resolving them into better ones. And here’s the weird thing: It’s a game that can never be won. The system never exactly repeats. But in trying to, it does something better. It adapts. It innovates. It performs induction.

In paper after paper, Putnam attempted to show how his induction game plays out in the human brain, with motor behaviors serving as the mutually exclusive “moves” and neurons as the parallel binary units that link up into loops to move the body. The point wasn’t to give a realistic picture of how a messy, anatomical brain works any more than an abstract Turing machine describes the workings of an iMac. It was not a biochemical description, but a logical one—a “brain calculus,” Putnam called it.

As the game is played, perturbations from outside—photons hitting the retina, hunger signals rising from the gut—require the brain to emit the right sequence of movements to return to its prior state. At first it has no idea what to do—each disturbance is a neural impulse moving through the brain in search of a pathway out, and it will take the first loop it can find. That’s why a newborn’s movements start out as random thrashes. But when those movements don’t satisfy the goal, the disturbance builds and spreads through the brain, feeling for new pathways, trying loop after loop, thrash after thrash, until it hits on one that does the trick.

When a successful move, discovered by sheer accident, quiets a perturbation, it gets wired into the brain as a behavioral rule. Once formed, applying the rule is a matter of deduction: The brain outputs the right move without having to try all the wrong ones first.

But the real magic happens when a contradiction arises, when two previously successful rules, called up in parallel, compete to move the body in mutually exclusive ways. A hungry baby, needing to find its mother’s breast, simultaneously fires up two loops, conditioned in from its history: “when hungry, turn to the left” and “when hungry, turn to the right.” Deductive logic grinds to a halt; the facilitation of either loop, neurally speaking, inhibits the other. Their horns lock. The neural activity has no viable pathway out. The brain can’t follow through with a wired-in plan—it has to create a new one.

How? By bringing in new variables that reshape the original loops into a new pathway, one that doesn’t negate either of the original rules, but clarifies which to use when. As the baby grows hungrier, activity spreads through the brain, searching its history for anything that can break the tie. If it can’t find it in the brain, it will automatically search the environment, thrash by thrash. The mathematics of game theory, Putnam said, guarantee that, since the original rules were in service of one and the same goal, an answer, logically speaking, can always be found.

In this case, the baby’s brain finds a key variable: When “turn left” worked, the neural signal created by the warmth of the mother’s breast against the baby’s left cheek got wired in with the behavior. When “turn right” worked, the right cheek was warm. That extra bit of sensory signal is enough to tip the scales. The brain has forged a new loop, a more general rule: “When hungry, turn in the direction of the warmer cheek.”
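
A hedged sketch of that resolution step, in the same toy Python vein: the rule shapes and the "warm cheek" variable come from the example above, but the search procedure is a drastic simplification of what Putnam's brain calculus actually proposes.

```python
# Two rules conditioned on hunger alone deadlock; the fix is an
# auxiliary variable from history that tells them apart. The toy
# history below is invented to match the example in the text.

history = [
    # (hungry, warmer_cheek, move_that_worked)
    (True, "left",  "turn left"),
    (True, "right", "turn right"),
    (True, "left",  "turn left"),
    (True, "right", "turn right"),
]

def competing_rules(history):
    """Deduction from 'hungry' alone calls up mutually exclusive
    moves: the locked horns described in the text."""
    return {move for hungry, _, move in history if hungry}

def resolve(history):
    """Search history for a variable that splits the tied rules into
    one more general rule, without negating either original."""
    by_cheek = {}
    for hungry, cheek, move in history:
        if hungry:
            by_cheek.setdefault(cheek, set()).add(move)
    if all(len(ms) == 1 for ms in by_cheek.values()):
        # each value of the new variable now picks a unique move
        return {cheek: ms.pop() for cheek, ms in by_cheek.items()}
    return None  # nothing in the brain breaks the tie; search the
                 # environment instead, thrash by thrash

print(competing_rules(history))  # {'turn left', 'turn right'}: deadlock
print(resolve(history))          # {'left': 'turn left', 'right': 'turn right'}
```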

New universals lead to new motor sequences, which allow new interactions with the world, which dredge up new contradictions, which force new resolutions, and so on up the ladder of ever-more intelligent behavior. “This constitutes a theory of the induction process,” Putnam wrote.

In notebooks, in secret, using language only he would understand, Putnam mapped out the dynamics of a system that could perceive, learn, think, and create ideas through induction—a computer that could program itself, then find contradictions among its programs and wrangle them into better programs, building itself out of its history of interactions with the world. Just as Turing had worked out an abstract, universal model of the very possibility of computation, Putnam worked out an abstract, universal model of the very possibility of mind. It was a model, he wrote, that “presents a basic overall pattern [or] character of thought in causal terms for the first time.”

Putnam had said you can’t understand another person until you know what fight they’re in, what contradiction they’re working through. I saw before me two stories, equally true: Putnam was a genius who worked out a new logic of the mind. And Putnam was a janitor who died unknown. The only way to resolve a contradiction, he said, is to find the auxiliary variables that forge a pathway to a larger story, one that includes and clarifies both truths. The variables for this contradiction? Putnam’s mother and money.

by Amanda Gefter, Nautilus |  Read more:
Image: John Archibald Wheeler, courtesy of Alison Lahnston.
[ed. Fascinating. Sounds like part quantum physics and part AI. But it's beyond me.]