Duck Soup
...dog paddling through culture, technology, music and more.
Monday, December 22, 2025
Touched for the Very First Time. A Weekend at Slutcon
Sunday, December 21, 2025
What’s Not to Like?
Similes! I have hundreds of them on three-by-five notecards, highbrow and lowbrow, copied from newspapers, comic strips, sonnets, billboards, and fortune cookies. My desk overflows with them. They run down to the floor, trail across the room into the hallway. I have similes the way other houses have ants.
Why? To start, for the sheer laugh-out-loud pleasure of them. “His smile was as stiff as a frozen fish,” writes Raymond Chandler. “He vanished abruptly, like an eel going into the mud,” writes P. G. Wodehouse, the undoubted master of the form. Or Kingsley Amis’s probably first-hand description of a hangover: “He lay sprawled, too wicked to move, spewed up like a broken spider-crab on the tarry shingle of the morning.”
From time to time, I’ve tried to organize my collection, though mostly the task is, as the cliché nicely puts it, like herding cats. Still, a few categories come to mind. The Really Bad Simile, for instance. Examples of this pop up like blisters in contemporary “literary” fiction. Here is a woman eating a crème brûlée: “She crashed the spoon through the sugar like a boy falling through ice on a lake.” (Authors’ names omitted, per the Mercy Rule.) Or: “A slick of beer shaped like the Baltic Sea spilled on the table.” Sometimes they follow a verb like tin cans on a string: “The restraining pins tinkled to the floor like metal rain, hunks of hair tumbling across her face in feral waves.” Or sometimes they just make the page itself cringe and curl up at the corners: “Charlie’s heart rippled like a cloth spread across a wide table.”
Writing about sex can drive a writer to similes of unparalleled badness. Someone has borrowed my copy of Lady Chatterley’s Lover, but these more recent examples might do, from The Literary Review’s “Bad Sex in Fiction Award”: “Katsuro’s penis and testicles became one single mound that rolled around beneath the grip of her hand. Miyuki felt as though she was manipulating a small monkey that was curling up its paws.” Or this loving, if somewhat chiropractic moment: “her long neck, her swan’s neck … coiling like a serpent, like a serpent, coiling down on him.” Or finally (my eyes are closed as I type): “Her vaginal ratchet moved in concertina-like waves, slowly chugging my organ as a boa constrictor swallows its prey.” (...)
Donne’s simile belongs to another category as well, the epic or Homeric simile. Every reader of the Iliad knows something like this picture of an attacking army as a wildfire:
“As when the obliterating fire comes down on the timbered forest / and the roll of the wind carries it everywhere,” and so the Achaean host drives ahead for another five lines. Modern prose writers can also unscroll a simile at surprising length. John Updike dives right in: “The sea, slightly distended by my higher perspective, seems a misty old gentleman stretched at his ease in an immense armchair which has for arms the arms of this bay and for an antimacassar the freshly laundered sky. Sailboats float on his surface like idle and unrelated benevolent thoughts.” And one would not like to have been the beefy Duke of Bedford when Edmund Burke imagined how revolutionary mobs might regard him: “Like the print of the poor ox that we see in the shop windows at Charing Cross, alive as he is, and thinking no harm in the world, he is divided into rumps, and sirloins, and briskets, and into all sorts of pieces for roasting, boiling, and stewing.”
It takes a dramatic mind to carry a comparison through so logically and so far. The Homeric simile evokes a world far larger than a single flash of thought, however clever. Its length creates a scene in our minds, even a drama where contraries come alive: an army driving into battle, an ocean tamed into a harmless old gent, a bloody clash in the streets between aristocrats and rebels.
“Perceptive of resemblances,” writes Aristotle, is what the maker of similes must be. There is one more step. The maker of similes, long or short, must perceive resemblances and then, above all, obey the first, and maybe only, commandment for a writer: to make you see. Consider Wodehouse’s “He found Lord Emsworth, as usual, draped like a wet sock over the rail of the Empress’s G.H.Q.,” or Patricia Cornwell’s “My thoughts scattered like marbles.”
The dictionary definition of metaphor is simply an implied comparison, a comparison without the key words like or as. The most common schoolbook example is, “She has a heart of gold,” followed by, “The world is a stage.” Latching onto the verb is, the popular website Grammarly explains, “A metaphor states that one thing is another thing.”
Close, but not enough. There is great wisdom in the roots of our language, in the origin of words. Deep down, in its first Greek form, metaphor combines meta (over, across) and pherein (to carry), and thus the full word means to carry over, to transfer, to change or alter. A metaphor does more than state an identity. In our imagination, before our eyes, metaphor changes one thing into another: “I should have been a pair of ragged claws / Scuttling across the floors of silent seas.” Eliot’s metaphor is a metamorphosis. Magically, we see Prufrock the man metamorphosed into a creature with ragged claws, like a hapless minor god in Ovid.
Too much? Consider, then, what the presence of like or as does in a simile. It announces, self-consciously, that something good is coming. The simile is a rhetorical magic trick, like a pun pulled out of a hat. A metaphor, however, feels not clever but true. Take away the announcement of like, and we read and write on a much less sophisticated level, on a level that has been called primitive, because it recalls the staggering ancient power of words as curses, as spells to transform someone into a frog, a stag, a satanic serpent.
A better term might be childlike. Psychologists know that very young children understand the metamorphosing power of words. To a child of three or four, writes Howard Gardner, the properties of a new word “may be inextricably fused with the new object: at such a time the pencil may become a rocket ship.” Older children and adults know that this isn’t so. But for most of us, and certainly for most writers I know, the childhood core of magical language play is not lost. It exists at the center and is only surrounded by adult awareness, as the rings encircle the heart of the tree.
Still too much? Here is Updike, making me gasp: “But it is just two lovers, holding hands and in a hurry to reach their car, their locked hands a starfish leaping through the dark.” No labored comparison, no signal not to take it literally. Like the pencil and rocket, their hands have become a starfish. Or Shakespeare, metamorphosing himself into an autumnal tree and then an ancient abbey: “That time of year thou may’st in me behold, / When yellow leaves, or none, or few do hang / Upon those boughs which shake against the cold, / Bare ruin’d choirs where late the sweet birds sang.” Pure magic.
Yet why be a purist? At the high point of language, James Joyce blends simile, metaphor, and extended simile into one beautiful and unearthly scene, an image created by a sorcerer.
A girl stood before him in midstream, alone and still, gazing out to sea. She seemed like one whom magic had changed into the likeness of a strange and beautiful seabird. Her long slender bare legs were delicate as a crane’s. … Her thighs, fuller and soft-hued as ivory, were bared almost to the hips, where the white fringes of her drawers were like feathering of soft white down. Her slate-blue skirts were kilted boldly about her waist and dovetailed behind her. Her bosom was as a bird’s, soft and slight, slight and soft as the breast of some dark-plumaged dove. But her long fair hair was girlish: and girlish, and touched with the wonder of mortal beauty, her face.
The passage is like a palimpsest. A reader can see through the surface of the language. A reader can penetrate to the traces of the real person still visible beneath the living words that are, as they move down the page, quietly transforming her. It is as if we are looking through the transparent chrysalis to the caterpillar growing inside, watching its slow and perfect metamorphosis into the butterfly. Too much? No.
by Max Byrd, American Scholar | Read more:
Image: locket479/Flickr
Labels:
Education,
Fiction,
Humor,
Journalism,
Literature
The Day the Dinosaurs Died
A young paleontologist may have discovered a record of the most significant event in the history of life on Earth. “It’s like finding the Holy Grail clutched in the bony fingers of Jimmy Hoffa, sitting on top of the Lost Ark.”
If, on a certain evening about sixty-six million years ago, you had stood somewhere in North America and looked up at the sky, you would have soon made out what appeared to be a star. If you watched for an hour or two, the star would have seemed to grow in brightness, although it barely moved. That’s because it was not a star but an asteroid, and it was headed directly for Earth at about forty-five thousand miles an hour. Sixty hours later, the asteroid hit. The air in front was compressed and violently heated, and it blasted a hole through the atmosphere, generating a supersonic shock wave. The asteroid struck a shallow sea where the Yucatán peninsula is today. In that moment, the Cretaceous period ended and the Paleogene period began.
A few years ago, scientists at Los Alamos National Laboratory used what was then one of the world’s most powerful computers, the so-called Q Machine, to model the effects of the impact. The result was a slow-motion, second-by-second false-color video of the event. Within two minutes of slamming into Earth, the asteroid, which was at least six miles wide, had gouged a crater about eighteen miles deep and lofted twenty-five trillion metric tons of debris into the atmosphere. Picture the splash of a pebble falling into pond water, but on a planetary scale. When Earth’s crust rebounded, a peak higher than Mt. Everest briefly rose up. The energy released was more than that of a billion Hiroshima bombs, but the blast looked nothing like a nuclear explosion, with its signature mushroom cloud. Instead, the initial blowout formed a “rooster tail,” a gigantic jet of molten material, which exited the atmosphere, some of it fanning out over North America. Much of the material was several times hotter than the surface of the sun, and it set fire to everything within a thousand miles. In addition, an inverted cone of liquefied, superheated rock rose, spread outward as countless red-hot blobs of glass, called tektites, and blanketed the Western Hemisphere.
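[ed. Those energy figures hold up to a quick back-of-envelope check. Below is a minimal Python sketch; the spherical shape and rocky density are my assumptions (the article gives neither), while the six-mile width and 45,000-mph speed are quoted above:]

```python
import math

# Rough sanity check of the impact-energy claim. Assumed values
# (not from the article): spherical asteroid, rocky density ~3,000 kg/m^3.
MILE_M = 1609.344                    # meters per mile
HIROSHIMA_J = 6.3e13                 # ~15 kilotons of TNT, in joules

radius_m = 3 * MILE_M                # "at least six miles wide"
speed_ms = 45_000 * MILE_M / 3600    # 45,000 mph -> ~20 km/s
density = 3000                       # kg/m^3, assumed

mass_kg = density * (4 / 3) * math.pi * radius_m ** 3
energy_j = 0.5 * mass_kg * speed_ms ** 2

print(f"impact speed: {speed_ms / 1000:.1f} km/s")
print(f"kinetic energy: {energy_j:.1e} J")
print(f"Hiroshima equivalents: {energy_j / HIROSHIMA_J:.1e}")
# -> roughly 3e23 J, on the order of a few billion Hiroshima bombs,
#    consistent with the article's "more than ... a billion".
```

[ed. The sixty-hour approach in the opening paragraph also works out: at that speed the asteroid would have covered about 2.7 million miles, roughly eleven times the distance to the moon.]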
Some of the ejecta escaped Earth’s gravitational pull and went into irregular orbits around the sun. Over millions of years, bits of it found their way to other planets and moons in the solar system. Mars was eventually strewn with the debris—just as pieces of Mars, knocked aloft by ancient asteroid impacts, have been found on Earth. A 2013 study in the journal Astrobiology estimated that tens of thousands of pounds of impact rubble may have landed on Titan, a moon of Saturn, and on Europa and Callisto, which orbit Jupiter—three satellites that scientists believe may have promising habitats for life. Mathematical models indicate that at least some of this vagabond debris still harbored living microbes. The asteroid may have sown life throughout the solar system, even as it ravaged life on Earth.
The asteroid was vaporized on impact. Its substance, mingling with vaporized Earth rock, formed a fiery plume, which reached halfway to the moon before collapsing in a pillar of incandescent dust. Computer models suggest that the atmosphere within fifteen hundred miles of ground zero became red hot from the debris storm, triggering gigantic forest fires. As the Earth rotated, the airborne material converged at the opposite side of the planet, where it fell and set fire to the entire Indian subcontinent. Measurements of the layer of ash and soot that eventually coated the Earth indicate that fires consumed about seventy per cent of the world’s forests. Meanwhile, giant tsunamis resulting from the impact churned across the Gulf of Mexico, tearing up coastlines, sometimes peeling up hundreds of feet of rock, pushing debris inland and then sucking it back out into deep water, leaving jumbled deposits that oilmen sometimes encounter in the course of deep-sea drilling.
The damage had only begun. Scientists still debate many of the details, which are derived from the computer models, and from field studies of the debris layer, knowledge of extinction rates, fossils and microfossils, and many other clues. But the over-all view is consistently grim. The dust and soot from the impact and the conflagrations prevented all sunlight from reaching the planet’s surface for months. Photosynthesis all but stopped, killing most of the plant life, extinguishing the phytoplankton in the oceans, and causing the amount of oxygen in the atmosphere to plummet. After the fires died down, Earth plunged into a period of cold, perhaps even a deep freeze. Earth’s two essential food chains, in the sea and on land, collapsed. About seventy-five per cent of all species went extinct. More than 99.9999 per cent of all living organisms on Earth died, and the carbon cycle came to a halt.
Earth itself became toxic. When the asteroid struck, it vaporized layers of limestone, releasing into the atmosphere a trillion tons of carbon dioxide, ten billion tons of methane, and a billion tons of carbon monoxide; all three are powerful greenhouse gases. The impact also vaporized anhydrite rock, which blasted ten trillion tons of sulfur compounds aloft. The sulfur combined with water to form sulfuric acid, which then fell as an acid rain that may have been potent enough to strip the leaves from any surviving plants and to leach the nutrients from the soil.
Today, the layer of debris, ash, and soot deposited by the asteroid strike is preserved in the Earth’s sediment as a stripe of black about the thickness of a notebook. This is called the KT boundary, because it marks the dividing line between the Cretaceous period and the Tertiary period. (The Tertiary has been redefined as the Paleogene, but the term “KT” persists.) Mysteries abound above and below the KT layer. In the late Cretaceous, widespread volcanoes spewed vast quantities of gas and dust into the atmosphere, and the air contained far higher levels of carbon dioxide than the air that we breathe now. The climate was tropical, and the planet was perhaps entirely free of ice. Yet scientists know very little about the animals and plants that were living at the time, and as a result they have been searching for fossil deposits as close to the KT boundary as possible.
One of the central mysteries of paleontology is the so-called “three-metre problem.” In a century and a half of assiduous searching, almost no dinosaur remains have been found in the layers three metres, or about nine feet, below the KT boundary, a depth representing many thousands of years. Consequently, numerous paleontologists have argued that the dinosaurs were on the way to extinction long before the asteroid struck, owing perhaps to the volcanic eruptions and climate change. Other scientists have countered that the three-metre problem merely reflects how hard it is to find fossils. Sooner or later, they’ve contended, a scientist will discover dinosaurs much closer to the moment of destruction.
Locked in the KT boundary are the answers to our questions about one of the most significant events in the history of life on the planet. If you look at the Earth as a kind of living organism, as many biologists do, you could say that it was shot by a bullet and almost died. Deciphering what happened on the day of destruction is crucial not only to solving the three-metre problem but also to explaining our own genesis as a species.
On August 5, 2013, I received an e-mail from a graduate student named Robert DePalma. I had never met DePalma, but we had corresponded on paleontological matters for years, ever since he had read a novel I’d written that centered on the discovery of a fossilized Tyrannosaurus rex killed by the KT impact. “I have made an incredible and unprecedented discovery,” he wrote me, from a truck stop in Bowman, North Dakota. “It is extremely confidential and only three others know of it at the moment, all of them close colleagues.” He went on, “It is far more unique and far rarer than any simple dinosaur discovery. I would prefer not outlining the details via e-mail, if possible.” He gave me his cell-phone number and a time to call...
DePalma’s find was in the Hell Creek geological formation, which outcrops in parts of North Dakota, South Dakota, Montana, and Wyoming, and contains some of the most storied dinosaur beds in the world. At the time of the impact, the Hell Creek landscape consisted of steamy, subtropical lowlands and floodplains along the shores of an inland sea. The land teemed with life and the conditions were excellent for fossilization, with seasonal floods and meandering rivers that rapidly buried dead animals and plants.
Dinosaur hunters first discovered these rich fossil beds in the late nineteenth century. In 1902, Barnum Brown, a flamboyant dinosaur hunter who worked at the American Museum of Natural History, in New York, found the first Tyrannosaurus rex here, causing a worldwide sensation. One paleontologist estimated that in the Cretaceous period Hell Creek was so thick with T. rexes that they were like hyenas on the Serengeti. It was also home to triceratops and duckbills. (...)
Today, DePalma, now thirty-seven, is still working toward his Ph.D. He holds the unpaid position of curator of vertebrate paleontology at the Palm Beach Museum of Natural History, a nascent and struggling museum with no exhibition space. In 2012, while looking for a new pond deposit, he heard that a private collector had stumbled upon an unusual site on a cattle ranch near Bowman, North Dakota. (Much of the Hell Creek land is privately owned, and ranchers will sell digging rights to whoever will pay decent money, paleontologists and commercial fossil collectors alike.) The collector felt that the site, a three-foot-deep layer exposed at the surface, was a bust: it was packed with fish fossils, but they were so delicate that they crumbled into tiny flakes as soon as they met the air. The fish were encased in layers of damp, cracked mud and sand that had never solidified; it was so soft that it could be dug with a shovel or pulled apart by hand. In July, 2012, the collector showed DePalma the site and told him that he was welcome to it. (...)
The following July, DePalma returned to do a preliminary excavation of the site. “Almost right away, I saw it was unusual,” he told me. He began shovelling off the layers of soil above where he’d found the fish. This “overburden” is typically material that was deposited long after the specimen lived; there’s little in it to interest a paleontologist, and it is usually discarded. But as soon as DePalma started digging he noticed grayish-white specks in the layers which looked like grains of sand but which, under a hand lens, proved to be tiny spheres and elongated droplets. “I think, Holy shit, these look like microtektites!” DePalma recalled. Microtektites are the blobs of glass that form when molten rock is blasted into the air by an asteroid impact and falls back to Earth in a solidifying drizzle. The site appeared to contain microtektites by the million.
As DePalma carefully excavated the upper layers, he began uncovering an extraordinary array of fossils, exceedingly delicate but marvellously well preserved. “There’s amazing plant material in there, all interlaced and interlocked,” he recalled. “There are logjams of wood, fish pressed against cypress-tree root bundles, tree trunks smeared with amber.” Most fossils end up being squashed flat by the pressure of the overlying stone, but here everything was three-dimensional, including the fish, having been encased in sediment all at once, which acted as a support. “You see skin, you see dorsal fins literally sticking straight up in the sediments, species new to science,” he said. As he dug, the momentousness of what he had come across slowly dawned on him. If the site was what he hoped, he had made the most important paleontological discovery of the new century.
by Douglas Preston, New Yorker | Read more:
Image: Richard Barnes
Labels:
Animals,
Biology,
Environment,
History,
Science
Saturday, December 20, 2025
The Online Scam That Hits Travelers When They're Most Distracted
When an actual human being answered an airline customer-service hotline after a single ring, I probably should have known I was being scammed.
At the time, I wasn’t exactly thinking critically. It was three days before Thanksgiving, and my family was about to miss our flight to Berlin, stuck in traffic en route to the airport in Newark, N.J. Blame a combination of poor planning, construction on I-95 and five consecutive canceled Ubers.
So when an empathetic-sounding man identified himself as a United Airlines agent named Sheldon and immediately asked for my phone number in case we got disconnected, I felt nothing but an overwhelming sense of relief. Sheldon told me not to worry. He’d get my family to Berlin. “Sheldon, you are an angel,” I said through tears, explaining that my father had died in July and this was to be our family’s first Thanksgiving without him.
Sheldon told me, with what seemed like genuine emotion, that he was terribly sorry for my loss. The good news was he could get us on a Lufthansa flight later that night, going through Munich. All I had to do was cover the price difference between the tickets: $1,415.97 for the three of us. I sighed and gave Sheldon my American Express card number.
That’s when I became the latest victim of what the Federal Trade Commission calls a business-impostor or business-impersonator scam. Like 396,227 other Americans in the first nine months of this year — up 18% from the same period last year — I fell for this increasingly sophisticated deception, in which someone claims to represent a trusted company to extract money and personal data from an unsuspecting victim...
The specific techniques the scammers use vary: Some pose as airlines on social media and respond to consumer complaints. Others use texts or emails claiming to be an airline reporting a delayed or canceled flight to phish for travelers’ data. But the objective is always the same: to hit a stressed-out, overwhelmed traveler at their most vulnerable.
A sponsored scam
In my case, the scammer exploited weaknesses in Google’s automated ad-screening system, so that fraudulent sponsored results rose to the top. After I reported the fake “United Airlines” ad to Google, via an online form for consumers, it was taken down. But a few days later, I entered the same search terms and the identical ad featuring the same 1-888 number was back at the top of my results. I reported it again, and it was quickly removed again. (...)
In retrospect, my refusal to face reality was my biggest mistake. We were still in traffic, set to arrive at the airport just as United Flight 962 was beginning to board, with three large suitcases to check. We had zero chance of making it.
The replacement of humans with not-always-helpful AI-powered customer-service tools makes it easier for an airline scammer to lure frustrated travelers. That’s what happened to me in the back of the cab when I opened the United app on my phone and began furiously texting, first with a bot, then with an actual representative, who sent me a link for the company’s Agent on Demand service to help passengers in urgent situations.
The link didn’t work. When I tried to text the agent on the app, the connection got lost and I was back to square one, chatting with a bot. Time was running out. Exasperated, I closed the app and typed “United airlines agent on demand” into Google. The top search result on my phone said United.com, had a 1-888 number next to it and said it had had “1M+ visits in past month.” In other words, it looked legit. I tapped the number. That’s when I first connected with Sheldon.
Not a good sign
After paying for the new tickets, I received a confirmation email from an unfamiliar domain. Sheldon was still on the line with me, so I asked him what was going on. Shouldn’t the confirmation come from United.com, not some random site called Travelomile? Sheldon explained that because Lufthansa operated the new flight and the changes were so last-minute, United used the site as its payments-processing partner. This didn’t quite make sense, but I suppose I still wanted to believe in Sheldon.
It wasn’t until he asked me to upload images of my family’s passports to a janky-looking website that my head started to spin. When our cab pulled into the departures zone, I hung up on Sheldon and ran to United’s customer-service counter in tears. I showed the agent behind the counter our “boarding passes.”
“I don’t know what these are, but I will help you,” the agent said. He booked us on the next flight, through Frankfurt, at no extra cost — a holiday miracle.
When we arrived at our gate, I called American Express and contested the charge from Travelomile before canceling my credit card. I then contacted Experian, one of the three major credit bureaus, to put a fraud alert on my file. Next, I filed a complaint with the FTC and reported the fake ad to Google. Later, I looked up Travelomile on Trustpilot, an independent customer-review platform, and found 47 one-star ratings out of 297 ratings total. Many of those one-star reviews were from people who said they had fallen for a similar scam. (...)
Stay on guard when you travel
What consumers can do to protect themselves from travel scammers, according to John Breyault of the National Consumers League:
- Save the airline’s real number in your contacts before traveling.
- If you reach out to the airline, do it through its official app.
- If you’ve been defrauded by an impostor, contact your bank or credit card company immediately. [ed. more...]
by Rachel Dodes, Bloomberg/Seattle Times | Read more:
Image: uncredited
John & Yoko: One to One
A fire alert disrupts the Venice screening of One to One: John & Yoko, Kevin Macdonald and Sam Rice-Edwards’ documentary about Lennon’s rambunctious post-Beatles heyday, when he and his artist wife Ono were first putting down roots in New York. Inside the hushed screening room, the flashing red lights and blaring alarm provide the second big surprise of the night. The first was how much I was enjoying the show.
Short of a documentary that unearths incontrovertible new evidence that he faked his own death, I’m not convinced that the world needs another John Lennon film. The medium, surely, has him well covered already. But Macdonald and Rice-Edwards have managed to find and mine a rich source of material, tightly tucked away amid all the other wildcat wells. Their film turns back the clock to the early 1970s and a benefit gig that occurred around the time of Lennon’s deportation battle with Nixon (see previous documentaries for details) and his extended lost weekend with May Pang (ditto). Crucially, too, it throws this concert against the maelstrom of the US political scene, with a channel-surfing aesthetic that skips from car and Coke commercials to the Attica prison riot and the near-fatal shooting of Alabama governor George Wallace.
While Lennon claims that he spent his first year in New York mostly watching TV, One to One suggests otherwise. Instead he hit the ground running, hurling himself at the action to become the standard bearer and figurehead for whatever progressive leftist cause was doing the rounds that week. The film blends archive footage with a trove of previously unheard phone conversations to show the ways in which he and Ono leveraged their celebrity status and surrounded themselves with a crew of colourful upstarts, from Allen Ginsberg to Jerry Rubin. The oddest of these, perhaps, is the activist AJ Weberman, who is tasked with a mission to raid Bob Dylan’s bins in order to prove what a “multimillionaire hypocrite” the singer has become. Ono pleads with Weberman to apologise, explaining that they need Dylan to perform at a planned “Free the People” concert in Miami, but AJ is unrepentant and initially won’t be budged.
In the event, the Free the People event was cancelled. But Lennon promptly finds a new focus with the One to One benefit for disabled children from the Willowbrook state school. Macdonald and Rice-Edwards have remastered Phil Spector’s muddy original recording so that the footage now plays with a fresh, bullish swagger. This was Lennon’s first full-length concert since the Beatles performed at Candlestick Park and, it transpired, the last he would ever play.
If only more nostalgic music documentaries could muster such a fun, fierce and full-blooded take on old, familiar material. One to One, against the odds, makes Lennon feel somehow vital again. It catches him like a butterfly at arguably his most interesting period, when he felt liberated and unfettered and was living “like a student” in a two-room loft in Greenwich Village. He’s radioactive with charisma, tilting at windmills and kicking out sparks.
by Xan Brooks, The Guardian | Read more:
Image: One to One/YT
[ed. Haven't seen this yet, but the link above about May Pang and her relationship with John was fascinating. Didn't know Yoko set them up to take the pressure off of John's straying, and that, after a couple of years (and an alleged affair of her own), she became jealous and reeled him back in.]
Labels:
Celebrities,
Cities,
Culture,
History,
Media,
Movies,
Music,
Relationships
Friday, December 19, 2025
Pretty Girl
The Violin and the Fan
(Teased at school for playing the violin, he was practicing alone in his bedroom one summer afternoon when he played a particular set of notes while the fan in the window was producing a high-pitched vibration in the glass.)
“As the two vibrations combined, it was as if a large dark billowing shape came billowing out of some corner in my mind. I can be no more precise than to say large, dark, shape, and billowing, what came flapping out of some backwater of my psyche I had not the slightest inkling was there … it was total horror. It was all horror everywhere, distilled and given form. It rose in me, out of me, summoned somehow by the odd confluence of the fan and those notes. It rose and grew larger and became engulfing and more horrible than I shall ever have the power to convey. I dropped my violin and ran from the room … Shapelessness was one of the horrible things about it. I can say and mean only shape, dark, and either billowing or flapping. But because the horror receded the moment I left the room, within minutes it had become unreal. The shape and horror. It seemed to have been my imagination, some random bit of psychic flatulence, an anomaly”…
“I returned shortly to the room and the fan and picked up the violin again. And produced the same resonance again immediately. And immediately again the black flapping shape rose in my mind again. It was a bit like a sail, or a small part of the wing of something far too large to be seen in totality. It was total psychic horror: death, decay, dissolution, cold empty black malevolent lonely voided space. It was the worst thing I’ve ever confronted … Set free somehow by that one-day-only resonance of violin and fan, the dark shape began rising out of my mind’s corner on its own. I dropped the violin again and ran from the room once again, clutching my head at the front and back, but this time it did not recede… . It was as if I’d awakened it and now it was active. It came and went for a year. I lived in horror of it for a year, as a child, never knowing when it would rise up billowing and blot out all light. After a year it receded. I think I was ten. But not all the way. I’d awakened it somehow. Every so often. Every few months it would rise inside me … The last time it ever rose up billowing was my second year of college … One sophomore night it came up out of nowhere, the black shape, for the first time in years. It is the most horrible feeling I have ever imagined, much less felt. There is no possible way death can feel as bad. It rose up. It was worse now that I was older … I thought I’d have to hurl myself out of my dormitory’s window. I simply could not live with how it felt … Some boy I hardly knew in the room below mine heard me staggering around whimpering at the top of my lungs. He came up and sat with me until it went away. It took most of the night. We didn’t converse; he didn’t try to comfort me. He spoke very little, just sat up with me. We didn’t become friends. By graduation I’d forgotten his name and major. But on that night he seemed to be the piece of string by which I hung suspended over hell itself … It’s never come back.”
[ed. I've had a similar experience; it's horrifying. A total soul-crushing dread that's beyond description. My experience involved my head and hands feeling massively huge, almost blimp-like, and obliterating everything they touched. Even worse - actually the worst - was the deep and malevolent 'thrumb, thrumb, THRUMBING' sound and feeling that accompanied the whole experience. The voice of doom. It felt like being crushed.]
Favorite Rob Reiner Credits
When Rob Reiner was killed earlier this week, along with his wife and creative partner Michele, the world of film lost one of its most beloved and respected figures, an artist who had done very good and extremely popular work in a variety of genres, first in front of the camera, then behind it as a writer, producer, and director, and then again in his later life as an actor. All the while, Reiner maintained a spotless reputation as a mensch, in an industry with vanishingly few of those. He was one of the most sophisticated and successful political activists in California, and his work (and money) helped pass the state's groundbreaking marriage equality law. Few filmmakers have had as vast or varied an impact on American life over the last 50 years, which is something that Reiner would surely have found very funny. Here are some of Reiner's films and roles that we love:
Stand By Me
Stand By Me is probably the purest chunk of schmaltz in Rob Reiner's generational early-career run. The movie is oozing with sentiment, factory-designed to squeeze profundity out of every otherwise mundane childhood interaction, and some not so mundane. It pulls out every trouble-at-home cliché to make you root for the kids and add dramatic heft. Richard Dreyfuss's narration should come with an insulin pump.
And yet it works! It works. You root for the kids, and you identify with them; you laugh when you're meant to laugh and cry when you're supposed to; and yes, through the sheen of memory, all those moments with your own childhood pals take on a patina that preserves them as something meaningful. It's distilled nostalgia, which in moviemaking is much easier to fuck up than to get right.
Weapons-grade middlebrow competence was Reiner's strength. That's a compliment, to be clear, especially as Hollywood has come to devalue that skillset and the type of work it produced. He was visually unflashy, almost to the point that it became his signature as a director. I'm not sure what a Rob Reiner film "looks like." He mostly picked great scripts, made his visual and storytelling choices, and got out of the way to let his actors cook. In Stand By Me, his first crucial decision was to give the movie a main character; the novella focuses on all four boys equally. The second was the casting. Reiner reportedly auditioned more than 300 kids, and got all four exactly right. A Mount Rushmore of child actors could credibly just be the four boys from this film.
It's easy, and tempting, to think of a movie as something that just sort of happens, succeeding or failing for ineffable reasons, but it's really a collection of a million different choices, most of the big ones made by the director, any one of which, if misguided, could torpedo the whole thing. Stand By Me doesn't work if the kids don't work. For all its flaws, every choice that Reiner needed to nail in this movie, he nailed. You can more or less say the same for his entire first 12 years of directing. His hit rate was a miracle—no, not a miracle, that denies agency. It is the collective work of a real-deal genius. (...)
- Barry Petchesky
When Harry Met Sally
It’s like 90 minutes, and all of them are perfect. Harry and Sally might suffer for their neuroses, but the greatest gift a director can give an audience is a film whose every detail was obsessed over. New York, warm and orange, has never looked better. Carrie Fisher says her lines the only way they could ever sound: You’re right, you’re right, I know you’re right. I want you to know that I will never want that wagon wheel coffee table.
That a film so brisk can feel so lived-in owes to Nora Ephron’s screenplay and also to Reiner’s neat choices, like the split-screen that makes it look like Harry and Sally are watching Casablanca in the same bed, an effect dialed up later in a continuously shot four-way phone call scene that took 60 tries to get right. Every time I watch When Harry Met Sally, I think it must have been impossible to make; the coziness of the movie is cut with something sad and mischievous and hard to describe. Estelle Reiner’s deadpan line reading at Katz’s Deli is a classic, and every family Pictionary night in our house began with someone guessing “baby fish mouth,” but the bit that came to mind first was this scene set at a Giants game: Harry tells Jess about his wife’s affair between rounds of the wave.
- Maitreyi Anantharaman
Michael "Meathead" Stivic in All In The Family
Rob Reiner was proof that every once in a rare while, nepotism is a great idea. Of all the lessons he could glean from his father Carl, one of this nation's undisputed comedic geniuses, he put nearly all to good use across his voluminous IMDB page.
The credit that Reiner broke out with was the one that seemed with hindsight to be the least consequential of them all—his straight man/son-in-law/earnest doofus role in the Norman Lear sitcom All In The Family. The show, which for several years was the nation's defining situation comedy, ran through the risible but weirdly prescient venom of Carroll O'Connor's towering performance, and positioned Reiner as the stereotypically liberal son-in-law and foil for O'Connor's cardboard conservative Archie Bunker. Reiner helped frame the show, while mostly serving up setups for O'Connor. He played the part well, but it was not an especially dignified one. I mean, his character's name was Mike Stivic, but he became known universally as "Meathead" because Bunker only referred to him as such. Reiner learned from his father's years with Mel Brooks how to be that acquiescent foil, and if his work in that part did not make him a recognized comedian except to those folks who knew how comedy actually works, it indisputably gave him an eight-year advanced education on all the things required to make funny. Those studies would serve him well in his director's chair. His gift was not in being the funny, but in building sturdy and elegant setups for the funny, and there has never been a good comedy movie without that. The Princess Bride doesn't work for 10 minutes without Cary Elwes, and Elwes's performance wouldn't work if his director did not repeatedly put him in position to succeed.
Maybe Reiner would not have gotten the AITF gig without being his father's son—Richard Dreyfuss also wanted the role and Harrison Ford turned it down, for what that may be worth—but sometimes nepotism works for those outside the family. Reiner wrote three of the 174 episodes in which he appeared; he learned to thrive behind and off to the side of the camera. It all counted, it all contributed, and every credit Reiner is credited with here owes some of its shine to that television show, which in turn owes its existence to The Dick Van Dyke Show and his father and Mel Brooks's work with The 2000-Year-Old Man and Your Show Of Shows. That takes us back 75 years, into the earliest days of the medium, which may as well be the entire history of American comedy. Every giant stood on the shoulders of another, and that giant did the same. It is all of a piece, and IMDB would be half as large and a quarter as useful without them, and him.
- Ray Ratto
This Is Spinal Tap
In a particularly on-brand bit of trivia, I first became aware of This Is Spinal Tap through Guitar Hero II. The titular band’s hit “Tonight I’m Gonna Rock You Tonight” was downloadable content for that game, and I spent hours trying to perfect it before I ever thought about watching the movie it hailed from. I did eventually watch it, and I remember exactly where I was—in Venezuela in the summer of 2007, traveling around for the Copa América—because Spinal Tap is a near-flawless movie, and one that seared itself into my brain. I can’t recall with certainty, but I’m pretty sure that this is when I first became aware of Rob Reiner—I knew his dad from Ocean’s Eleven, another perfect movie—and Spinal Tap is such a stunning collection of talent that it’s hard to pick out a favorite role or MVP. Here’s the thing about that, though: The best and most important performance in the film might be from Reiner himself, because the movie doesn’t work as well as it does without him.
On the one hand, this is obvious; he directed the movie and co-wrote it, so his fingerprints are quite naturally all over it. And yet, in a movie full of massive characters and comedians perfectly suited for those roles, Reiner’s performance as the flabbergasted documentarian is what makes the whole thing hang together. Reiner was a comedic genius in his own right, but I think the thing I appreciate most about Spinal Tap whenever I watch it is how much he understands about his cast’s strengths and how much he allows himself to recede into the background while still working to guide the jokes to their best conclusions. Every great comedy needs a straight man, and Reiner’s Marty DiBergi is certainly that, but the movie is so funny, and Reiner is such a welcome presence on screen, that even DiBergi gets to be effortlessly hilarious. He does this, for the most part, just by playing an ostensibly normal person and turning that all up to, well, 11.
Let’s take what I consider one of the most iconic comedic scenes of all time, and certainly the one that I have quoted the most in my life: “It’s one louder.”
Christopher Guest is perfect in this scene, unsurprisingly; his Nigel Tufnel is an idiot, and the movie gets a lot of humor out of that fact throughout, and especially here. However, Reiner’s plain-spoken incredulity over the idiocy is what really elevates the scene to me. You can feel his character grappling with this concept throughout: First with a plain-spoken revelation (“Oh I see, and most of the amps go to 10”), but then he comes in with the setup: “Why don’t you just make 10 louder and make 10 be the top number, and make that a little louder?” Every single time I watch this scene, the pause before Guest goes “These go to eleven” makes me giggle in anticipation.
Spinal Tap is hilarious in its own right, and also birthed the mockumentary genre; it’s crazy to think about all of the things that the movie directly influenced, from Guest’s own filmmaking work (shout out Best In Show), to Drop Dead Gorgeous, on through Popstar: Never Stop Never Stopping. God, I love that last one, and so many things that work in Popstar are directly traceable to the work Reiner did on Spinal Tap. (Spinal Tap also birthed a sequel just this year; I haven’t watched it yet, mainly because of how much I love the original and don’t need more from this stupid British band, but I am relieved to report that I’ve heard it’s a fine enough time at the movies.)
That This Is Spinal Tap was Reiner’s directorial debut only adds to the absurdity. Who produces not just a masterpiece, but such an utterly distinctive piece of work in their first real attempt? The answer, really, is that Reiner was a master, and he would go on to prove that over a historic run over the next decade, making Stand By Me, The Princess Bride, When Harry Met Sally, Misery, and A Few Good Men in just eight years. Ridiculous. This Is Spinal Tap is my favorite of all of those, though, and one of the most rewatchable movies ever made. Hell, as I’m writing this, I just remembered the scene where Reiner reads the band some reviews (“The review you had on Shark Sandwich, which was merely a two-word review just said … Shit Sandwich”) which is also among the funniest things put to film. The whole movie is strewn with gems like that. What a gift.
- Luis Paez-Pumar
[ed. See also: As You Wish: Rob Reiner (1947-2025). Ebert.com]
by Defector Staff, Defector | Read more:
Images: Andy Schwartz/Fotos International/Getty Images; Harry Met Sally, Spinal Tap (YouTube).
Labels:
Celebrities,
Culture,
history,
Humor,
Media,
Movies,
Relationships
Thursday, December 18, 2025
James Stewart, B. Hummel. Two wild cats in a mountainous landscape. Coloured lithograph by B. Hummel after J. Stewart, 1800-1899.
Plaques
[ed. Priceless. And always classy. Now we know how he occupies his time when he isn't eating cheeseburgers, drinking Diet Cokes, and watching Fox News (and golfing and texting). Poor Republicans, this is your savior, see: White House installs plaques (MSN). Hmm, wonder why this country feels so divided. Also, if you want to read some truly deranged stuff, check this out (transcript) from a recent rally 12-9-25 (Sen. Dems).]
Finding Peter Putnam
The forgotten janitor who discovered the logic of the mind
The neighborhood was quiet. There was a chill in the air. Spanish moss hung from the cypress trees. Plumes of white smoke rose from the burning cane fields and stretched across the skies of Terrebonne Parish. The man swung a long leg over a bicycle frame and pedaled off down the street.
It was 1987 in Houma, Louisiana, and he was headed to the Department of Transportation, where he was working the night shift, sweeping floors and cleaning toilets. He was just picking up speed when a car came barreling toward him with a drunken swerve.
A screech shot down the corridor of East Main Street, echoed through the vacant lots, and rang out over the Bayou.
Then silence.
The 60-year-old man lying on the street, as far as anyone knew, was just a janitor hit by a drunk driver. There was no mention of it on the local news, no obituary in the morning paper. His name might have been Anonymous. But it wasn’t.
His name was Peter Putnam. He was a physicist who’d hung out with Albert Einstein, John Archibald Wheeler, and Niels Bohr, and two blocks from the crash, in his run-down apartment, where his partner, Claude, was startled by a screech, were thousands of typed pages containing a groundbreaking new theory of the mind.
“Only two or three times in my life have I met thinkers with insights so far reaching, a breadth of vision so great, and a mind so keen as Putnam’s,” Wheeler said in 1991. And Wheeler, who coined the terms “black hole” and “wormhole,” had worked alongside some of the greatest minds in science.
Robert Works Fuller, a physicist and former president of Oberlin College, who worked closely with Putnam in the 1960s, told me in 2012, “Putnam really should be regarded as one of the great philosophers of the 20th century. Yet he’s completely unknown.”
That word—unknown—came to haunt me as I spent the next 12 years trying to find out why.
The American Philosophical Society Library in Philadelphia, with its marbled floors and chandeliered ceilings, is home to millions of rare books and manuscripts, including John Wheeler’s notebooks. I was there in 2012, fresh off writing a physics book that had left me with nagging questions about the strange relationship between observer and observed. Physics seemed to suggest that observers play some role in the nature of reality, yet who or what an observer is remained a stubborn mystery.
Wheeler, who made key contributions to nuclear physics, general relativity, and quantum gravity, had thought more about the observer’s role in the universe than anyone—if there was a clue to that mystery anywhere, I was convinced it was somewhere in his papers. That’s when I turned over a mylar overhead, the kind people used to lay on projectors, with the titles of two talks, as if given back-to-back at the same unnamed event:
Wheeler: From Reality to Consciousness
Putnam: From Consciousness to Reality
Putnam, it seemed, had been one of Wheeler’s students, whose opinion Wheeler held in exceptionally high regard. That was odd, because Wheeler’s students were known for becoming physics superstars, earning fame, prestige, and Nobel Prizes: Richard Feynman, Hugh Everett, and Kip Thorne.
Back home, a Google search yielded images of a very muscly, very orange man wearing a very small speedo. This, it turned out, was the wrong Peter Putnam. Eventually, I stumbled on a 1991 article in the Princeton Alumni Weekly called “Brilliant Enigma.” “Except for the barest outline,” the article read, “Putnam’s life is ‘veiled,’ in the words of Putnam’s lifelong friend and mentor, John Archibald Wheeler.”
A quick search of old newspaper archives turned up an intriguing article from the Associated Press, published six years after Putnam’s death. “Peter Putnam lived in a remote bayou town in Louisiana, worked as a night watchman on a swing bridge [and] wrote philosophical essays,” the article said. “He also tripled the family fortune to about $40 million by investing successfully in risky stock ventures.”
The questions kept piling up. Forty million dollars?
I searched a while longer for more information but came up empty-handed. Still, I couldn’t forget about Peter Putnam. His name played like a song stuck in my head. I decided to track down anyone who might have known him.
The only paper Putnam ever published was co-authored with Robert Fuller, so I flew from my home in Cambridge, Massachusetts, to Berkeley, California, to meet him. Fuller was nearing 80 years old but had an imposing presence and a booming voice. He sat across from me in his sun-drenched living room, seeming thrilled to talk about Putnam yet plagued by some palpable regret.
Putnam had developed a theory of the brain that “ranged over the whole of philosophy, from ethics to methodology to mathematical foundations to metaphysics,” Fuller told me. He compared Putnam’s work to Alan Turing’s and Kurt Gödel’s. “Turing, Gödel, and Putnam—they’re three peas in a pod,” Fuller said. “But one of them isn’t recognized.” (...)
Phillips Jones, a physicist who worked alongside Putnam in the early 1960s, told me over the phone, “We got the sense that what Einstein’s general theory was for physics, Peter’s model would be for the mind.”
Even Einstein himself was impressed with Putnam. At 19 years old, Putnam went to Einstein’s house to talk with him about Arthur Stanley Eddington, the British astrophysicist. (Eddington performed the key experiment that proved Einstein’s theory of gravity.) Putnam was obsessed with an allegory by Eddington about a fisherman and wanted to ask Einstein about it. Putnam also wanted Einstein to give a speech promoting world government to a political group he’d organized. Einstein—who was asked by plenty of people to do plenty of things—thought highly enough of Putnam to agree.
How could this genius, this Einstein of the mind, just vanish into obscurity? When I asked why, if Putnam was so important, no one had ever heard of him, everyone gave me the same answer: because he didn’t publish his work, and even if he had, no one would have understood it.
“He spoke and wrote in ‘Putnamese,’ ” Fuller said. “If you can find his papers, I think you’ll immediately see what I mean.” (...)
Skimming through the papers I saw that the people I’d spoken to hadn’t been kidding about the Putnamese. “To bring the felt under mathematical categories involves building a type of mathematical framework within which latent colliding heuristics can be exhibited as of a common goal function,” I read, before dropping the paper with a sigh. Each one went on like that for hundreds of pages at a time, on none of which did he apparently bother to stop and explain what the whole thing was really about...
Putnam spent most of his time alone, Fuller had told me. “Because of this isolation, he developed a way of expressing himself in which he uses words, phrases, concepts, in weird ways, peculiar to himself. The thing would be totally incomprehensible to anyone.” (...)
Imagine a fisherman who’s exploring the life of the ocean. He casts his net into the water, scoops up a bunch of fish, inspects his catch and shouts, “A-ha! I have made two great scientific discoveries. First, there are no fish smaller than two inches. Second, all fish have gills.”
The fisherman’s first “discovery” is clearly an error. It’s not that there are no fish smaller than two inches, it’s that the holes in his net are two inches in diameter. But the second discovery seems to be genuine—a fact about the fish, not the net.
This was the Eddington allegory that obsessed Putnam.
When physicists study the world, how can they tell which of their findings are features of the world and which are features of their net? How do we, as observers, disentangle the subjective aspects of our minds from the objective facts of the universe? Eddington suspected that one couldn’t know anything about the fish until one knew the structure of the net.
That’s what Putnam set out to do: come up with a description of the net, a model of “the structure of thought,” as he put it in a 1948 diary entry.
At the time, scientists were abuzz with a new way of thinking about thinking. Alan Turing had worked out an abstract model of computation, which quickly led not only to the invention of physical computers but also to the idea that perhaps the brain, too, was a kind of Turing machine.
Putnam disagreed. “Man is a species of computer of fundamentally different genus than those she builds,” he wrote. It was a radical claim (not only for the mixed genders): He wasn’t saying that the mind isn’t a computer, he was saying it was an entirely different kind of computer.
A universal Turing machine is a powerful thing, capable of computing anything that can be computed by an algorithm. But Putnam saw that it had its limitations. A Turing machine, by design, performs deductive logic—logic where the answers to a problem are contained in its premises, where the rules of inference are pregiven, and information is never created, only shuffled around. Induction, on the other hand, is the process by which we come up with the premises and rules in the first place. “Could there be some indirect way to model or orient the induction process, as we do deductions?” Putnam asked.
Putnam laid out the dynamics of what he called a universal “general purpose heuristic”—which we might call an “induction machine,” or more to the point, a mind—borrowing from the mathematics of game theory, which was thick in the air at Princeton. His induction “game” was simple enough. He imagined a system (immersed in an environment) that could make one mutually exclusive “move” at a time. The system is composed of a massive number of units, each of which can switch between one of two states. They all act in parallel, switching, say, “on” and “off” in response to one another. Putnam imagined that these binary units could condition one another’s behavior, so if one caused another to turn on (or off) in the past, it would become more likely to do so in the future. To play the game, the rule is this: The first chain of binary units, linked together by conditioned reflexes, to form a self-reinforcing loop emits a move on behalf of the system.
Every game needs a goal. In a Turing machine, goals are imposed from the outside. For true induction, the process itself should create its own goals. And there was a key constraint: Putnam realized that the dynamics he had in mind would only work mathematically if the system had just one goal governing all its behavior.
That’s when it hit him: The goal is to repeat. Repetition isn’t a goal that has to be programmed in from the outside; it’s baked into the very nature of things—to exist from one moment to the next is to repeat your existence. “This goal function,” Putnam wrote, “appears pre-encoded in the nature of being itself.”
So, here’s the game. The system starts out in a random mix of “on” and “off” states. Its goal is to repeat that state—to stay the same. But in each turn, a perturbation from the environment moves through the system, flipping states, and the system has to emit the right sequence of moves (by forming the right self-reinforcing loops) to alter the environment in such a way that it will perturb the system back to its original state.
Putnam’s remarkable claim was that simply by playing this game, the system will learn; its sequences of moves will become increasingly less random. It will create rules for how to behave in a given situation, then automatically root out logical contradictions among those rules, resolving them into better ones. And here’s the weird thing: It’s a game that can never be won. The system never exactly repeats. But in trying to, it does something better. It adapts. It innovates. It performs induction.
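[ed. For the code-minded, here is a minimal toy sketch of that induction game. It is my own loose reading, not Putnam's actual mathematics; the unit count, the one-bit "moves," and the reinforcement rule are all invented for illustration. The only point it makes is the claim above: a system rewarded solely for repeating its own state drifts from random thrashing toward rule-like behavior.

import random

N_UNITS = 8  # size of the binary state vector (arbitrary toy choice)

class InductionToy:
    def __init__(self):
        # start in a random mix of "on" and "off" states
        self.state = [random.randint(0, 1) for _ in range(N_UNITS)]
        # conditioned-reflex strengths: disturbed unit -> preferred move
        self.strength = [[1.0] * N_UNITS for _ in range(N_UNITS)]

    def play_round(self):
        goal = list(self.state)        # the state the system wants to repeat
        disturbed = random.randrange(N_UNITS)
        self.state[disturbed] ^= 1     # the environment flips one unit
        # a "move" flips one unit back; moves are sampled in proportion to
        # conditioned strength, a crude stand-in for "the first
        # self-reinforcing loop to close emits a move"
        weights = self.strength[disturbed]
        move = random.choices(range(N_UNITS), weights=weights)[0]
        self.state[move] ^= 1
        if self.state == goal:         # perturbation quieted
            self.strength[disturbed][move] += 1.0  # wire the reflex in
            return True
        return False

toy = InductionToy()
for epoch in range(5):
    wins = sum(toy.play_round() for _ in range(2000))
    print(f"epoch {epoch}: state restored in {wins} of 2000 rounds")

Run it and the restored-state count should climb across the epochs, from chance (about one round in eight) toward near-certainty; the moves become, in miniature, "increasingly less random."]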
In paper after paper, Putnam attempted to show how his induction game plays out in the human brain, with motor behaviors serving as the mutually exclusive “moves” and neurons as the parallel binary units that link up into loops to move the body. The point wasn’t to give a realistic picture of how a messy, anatomical brain works any more than an abstract Turing machine describes the workings of an iMac. It was not a biochemical description, but a logical one—a “brain calculus,” Putnam called it.
As the game is played, perturbations from outside—photons hitting the retina, hunger signals rising from the gut—require the brain to emit the right sequence of movements to return to its prior state. At first it has no idea what to do—each disturbance is a neural impulse moving through the brain in search of a pathway out, and it will take the first loop it can find. That’s why a newborn’s movements start out as random thrashes. But when those movements don’t satisfy the goal, the disturbance builds and spreads through the brain, feeling for new pathways, trying loop after loop, thrash after thrash, until it hits on one that does the trick.
When a successful move, discovered by sheer accident, quiets a perturbation, it gets wired into the brain as a behavioral rule. Once formed, applying the rule is a matter of deduction: The brain outputs the right move without having to try all the wrong ones first.
But the real magic happens when a contradiction arises, when two previously successful rules, called up in parallel, compete to move the body in mutually exclusive ways. A hungry baby, needing to find its mother’s breast, simultaneously fires up two loops, conditioned in from its history: “when hungry, turn to the left” and “when hungry, turn to the right.” Deductive logic grinds to a halt; the facilitation of either loop, neurally speaking, inhibits the other. Their horns lock. The neural activity has no viable pathway out. The brain can’t follow through with a wired-in plan—it has to create a new one.
How? By bringing in new variables that reshape the original loops into a new pathway, one that doesn’t negate either of the original rules, but clarifies which to use when. As the baby grows hungrier, activity spreads through the brain, searching its history for anything that can break the tie. If it can’t find it in the brain, it will automatically search the environment, thrash by thrash. The mathematics of game theory, Putnam said, guarantee that, since the original rules were in service of one and the same goal, an answer, logically speaking, can always be found.
In this case, the baby’s brain finds a key variable: When “turn left” worked, the neural signal created by the warmth of the mother’s breast against the baby’s left cheek got wired in with the behavior. When “turn right” worked, the right cheek was warm. That extra bit of sensory signal is enough to tip the scales. The brain has forged a new loop, a more general rule: “When hungry, turn in the direction of the warmer cheek.”
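[ed. And a companion sketch of the tie-breaking step, with the same caveats as above: two equally conditioned rules deadlock until an auxiliary sensory variable, the warm cheek, gets wired into the loop.

import random

# "turn left" and "turn right" start equally conditioned; which cheek is
# warm is the auxiliary variable available to break the tie
strength = {(cheek, move): 1.0
            for cheek in ("warm_left", "warm_right")
            for move in ("left", "right")}

def trial():
    cheek = random.choice(("warm_left", "warm_right"))  # where the breast is
    moves = ("left", "right")
    move = random.choices(moves,
                          weights=[strength[(cheek, m)] for m in moves])[0]
    fed = cheek == "warm_" + move        # success only on the warm side
    if fed:
        strength[(cheek, move)] += 1.0   # the sensory variable gets wired in
    return fed

for block in range(4):
    fed = sum(trial() for _ in range(1000))
    print(f"block {block}: fed on {fed} of 1000 trials")

The first block hovers near the 50/50 deadlock; later blocks approach certainty as the more general rule, "turn toward the warmer cheek," takes hold.]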
New universals lead to new motor sequences, which allow new interactions with the world, which dredge up new contradictions, which force new resolutions, and so on up the ladder of ever-more intelligent behavior. “This constitutes a theory of the induction process,” Putnam wrote.
In notebooks, in secret, using language only he would understand, Putnam mapped out the dynamics of a system that could perceive, learn, think, and create ideas through induction—a computer that could program itself, then find contradictions among its programs and wrangle them into better programs, building itself out of its history of interactions with the world. Just as Turing had worked out an abstract, universal model of the very possibility of computation, Putnam worked out an abstract, universal model of the very possibility of mind. It was a model, he wrote, that “presents a basic overall pattern [or] character of thought in causal terms for the first time.”
Putnam had said you can’t understand another person until you know what fight they’re in, what contradiction they’re working through. I saw before me two stories, equally true: Putnam was a genius who worked out a new logic of the mind. And Putnam was a janitor who died unknown. The only way to resolve a contradiction, he said, is to find the auxiliary variables that forge a pathway to a larger story, one that includes and clarifies both truths. The variables for this contradiction? Putnam’s mother and money.
by Amanda Gefter, Nautilus | Read more:
Image: John Archibald Wheeler, courtesy of Alison Lahnston.
[ed. Fascinating. Sounds like part quantum physics and part AI. But it's beyond me.]
Labels:
Biology,
Critical Thought,
Education,
history,
Philosophy,
Psychology,
Science,
Technology
Franz Sedlacek (Austrian, 1891-1945) - The Chemist (1932)