
Friday, February 13, 2026

Your Job Isn't Disappearing. It's Shrinking Around You in Real Time

You open your laptop Monday morning with a question you can’t shake: Will I still have a job that matters in two years?

Not whether you’ll be employed, but whether the work you do will still mean something.

Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent (Claude, Gemini, ChatGPT…). Maybe 90% as good if you’re being honest.

You still have your job. But you can feel it shrinking around you.

The problem isn’t that the robots are coming. It’s that you don’t know what you’re supposed to be good at anymore. That Excel expertise you built over five years? Automated. Your ability to research competitors and synthesize findings? There’s an agent for that. Your skill at writing clear project updates? Gone.

You’re losing your professional identity faster than you can rebuild it. And nobody’s telling you what comes next.

The Three Things Everyone Tries That Don’t Actually Work

When you feel your value eroding, you do what seems rational. You adapt, you learn, and you try to stay relevant.

First, you learn to use the AI tools better. You take courses on prompt engineering. You master ChatGPT, Claude, whatever new platform launches next week and the week after. You become the “AI person” on your team. You think: if I can’t beat them, I’ll use them better than anyone else.

This fails because you’re still competing on execution speed. You’re just a faster horse. And execution is exactly what’s being commoditized. Six months from now, the tools will be easier to use. Your “expertise” in prompting becomes worthless the moment the interface improves. You’ve learned to use the shovel better, but the backhoe is coming anyway.

Second, you double down on your existing expertise. The accountant learns more advanced tax code. The designer masters more software. The analyst builds more complex models. You have the same thought as many others: “I’ll go so deep they can’t replace me.”

This fails because depth in a disappearing domain is a trap. You’re building a fortress in a flood zone. Agents aren’t just matching human expertise at the median level anymore. They’re rapidly approaching expert-level performance in narrow domains. Your specialized knowledge becomes a liability because you’ve invested everything in something that’s actively being automated. You’re becoming the world’s best telegraph operator in 1995.

Third, you try to “stay human” through soft skills. You lean into creativity, empathy, relationship building. You go to workshops on emotional intelligence. You focus on being irreplaceably human. You might think that what makes us human can’t be automated.

This fails because it’s too vague to be actionable. What does “be creative” actually mean when an AI can generate 100 ideas in 10 seconds? How do you monetize empathy when your job is to produce reports? The advice feels right but provides no compass. You end up doing the same tasks you always did, just with more anxiety and a vaguer sense of purpose.

The real issue with all three approaches is that they’re reactions, not redesigns. You’re trying to adapt your old role to a new reality. What actually works is building an entirely new role that didn’t exist before.

But nobody’s teaching you what that looks like.

The Economic Logic Working Against You

This isn’t happening to you because you’re failing to adapt. It’s happening because the economic incentive structure is perfectly designed to create this problem.

The mechanism is simple: companies profit immediately from adopting AI agents. Every automated task cuts costs. The CFO sees the spreadsheet where one AI subscription replaces 40% of a mid-level employee’s work. The math is simple, and the decision is obvious.

Many people hate to hear that. But if they owned the company or sat in leadership, they’d do the exact same thing. Companies exist to drive profit, just as employees work to drive higher salaries. That’s how the system has worked for centuries.

But companies don’t profit from retraining you for a higher-order role that doesn’t exist yet.

Why? Because that new role is undefined, unmeasured, and uncertain. You can’t put “figure out what humans should do now” on a quarterly earnings call. You can’t show ROI on “redesign work itself.” Short-term incentives win. Long-term strategy loses.

Nobody invests in the 12-24 month process of discovering what your new role should be because there’s no immediate return on that investment.

We’re in a speed mismatch. Agent capabilities are compounding at 6-12 month cycles. [ed. Even faster now, after the release of Claude Opus 4.6 last week]. Human adaptation through traditional systems operates on 2-5 year cycles.

Universities can’t redesign curricula fast enough. They’re teaching skills that will be automated before students graduate. Companies can’t retrain fast enough. By the time they identify the new skills needed and build a program, the landscape has shifted again. You can’t pivot fast enough. Career transitions take time. Mortgages don’t wait.

We’ve never had to do this before.

Previous automation waves happened in manufacturing. You could see the factory floor. You could watch jobs disappear and new ones emerge. There was geographic and temporal separation.

This is different: knowledge work is being automated while you’re still at your desk. The old role and the new role exist simultaneously in the same person, the same company, the same moment.

And nobody has an economic incentive to solve it. Companies maximize value through cost reduction, not workforce transformation. Educational institutions are too slow and too far removed from real-time market needs. Governments don’t understand the problem yet. You’re too busy trying to keep your current job to redesign your future one.

The system isn’t helping because it isn’t designed for continuous, rapid role evolution; it is designed for stability.

We’re using industrial-era institutions to solve an exponential-era problem. That’s why you feel stuck.

Your Experience Just Became Worthless (The Timeline)

Let me tell you a story about my friend; let’s call her Jane (her real name is Katřina, but the Czech diacritic is tricky for many). She was a senior research analyst at a mid-sized consulting firm. Ten years of experience. Her job was to provide answers to client companies: a client would ask something like “What’s our competitor doing in the Asian market?” and she’d spend 2-3 weeks gathering data, reading reports, interviewing experts, synthesizing findings, and creating presentations.

She was good, clients loved her work, and she billed at $250 an hour.

The firm deployed an AI research agent in Q2 2023. Not to replace her, but as they said, to “augment” her. Management said all the right things about human-AI collaboration.

The agent could do Jane’s initial research in 90 minutes: it would scan thousands of sources, identify patterns, and generate a first-draft report.

Month one: Jane was relieved and thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?”

Jane couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

Month six: The firm restructured. They didn’t fire Jane, they changed her role to “Quality Reviewer.” She now oversaw the AI’s output for 6-8 projects simultaneously instead of owning 2-3 end to end.

Her title stayed the same. Her billing rate dropped to $150 an hour. Her ten years of experience felt worthless.

Jane tried everything. She took an AI prompt engineering course. She tried to go deeper into specialized research methodologies. She emphasized her client relationships. None of it mattered because the firm had already made the economic calculation.

One AI subscription costs $50 a month. Jane’s salary: $140K a year. The agent didn’t need to be perfect; it just needed to be 70% as good at a tiny fraction of the cost. And it was fast, far faster than she was.

Here’s the part that illustrates the systemic problem: you often hear from AI vendors that, thanks to their tools, people can focus on higher-value work. But when pressed on what that means specifically, they go vague. Strategic thinking, client relationships, creative problem solving.

Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Jane left six months later. The firm hired two junior analysts at $65K each to do what she did. With the AI, they’re 85% as effective as Jane was.

Jane’s still trying to figure out what she’s supposed to be good at. Last anyone heard, she’s thinking about leaving the industry entirely.

Stop Trying to Be Better at Your Current Job

The people who are winning aren’t trying to be better at their current job. They’re building new jobs that combine human judgment with agent capability.

Not becoming prompt engineers, not becoming AI experts. Becoming orchestrators who use agents to do what was previously impossible at their level. [...]

You’re not competing with the agent. You’re creating a new capability that requires both you and the agent. You’re not defensible because you’re better at the task. You’re defensible because you’ve built something that only exists with you orchestrating it.

This requires letting go of your identity as “the person who does X.” Marcus doesn’t write copy anymore. That bothered him at first. He liked writing. But he likes being valuable more.

Here’s what you can do this month:

by Jan Tegze, Thinking Out Loud |  Read more:
Image: uncredited
[ed. Not to criticize, but this advice still seems a bit too short-sighted, for reasons articulated in this article: AI #155: Welcome to Recursive Self-Improvement (DMtV):]
***

Presumably you can see the problem in such a scenario, where all the existing jobs get automated away. There are not that many slots for people to figure out and do genuinely new things with AI. Even if you get to one of the lifeboats, it will quickly spring a leak. The AI is coming for this new job the same way it came for your old one. What makes you think the ‘next evolution’ after that is going to leave you a role to play in it?

If the only way to survive is to continuously reinvent yourself to do what just became possible, as Jan puts it? There’s only one way this all ends.

I also don’t understand Jan’s disparate treatment of the first approach he dismisses, ‘be the one who uses AI the best,’ and his solution of ‘find new things AI can do and do that.’ In both cases you need to be rapidly learning new tools and strategies to compete with the other humans. In both cases the competition is easy now since most of your rivals aren’t trying, but gets harder to survive over time.
***

[ed. And there'll be a lot fewer of these types of jobs available. This scenario could be reality within the next year (or less!). Something like a temporary UBI (universal basic income) might be needed until long-term solutions can be worked out, but do you think any of the bozos currently in Washington are going to focus on this? And that applies to safety standards as well. Here's Dean Ball (Hyperdimensional): On Recursive Self-Improvement (Part II):]
***

Policymakers would be wise to take especially careful notice of this issue over the coming year or so. But they should also keep the hysterics to a minimum: yes, this really is a thing from science fiction that is happening before our eyes, but that does not mean we should behave theatrically, as an actor in a movie might. Instead, the challenge now is to deal with the legitimately sci-fi issues we face using the comparatively dull idioms of technocratic policymaking. [...]

Right now, we predominantly rely on faith in the frontier labs for every aspect of AI automation going well. There are no safety or security standards for frontier models; no cybersecurity rules for frontier labs or data centers; no requirements for explainability or testing for AI systems which were themselves engineered by other AI systems; and no specific legal constraints on what frontier labs can do with the AI systems that result from recursive self-improvement.

To be clear, I do not support the imposition of such standards at this time, not so much because they don’t seem important but because I am skeptical that policymakers could design any one of these standards effectively. It is also extremely likely that the existence of advanced AI itself will both change what is possible for such standards (because our technical capabilities will be much stronger) and what is desirable (because our understanding of the technology and its uses will improve so much, as will our apprehension of the stakes at play). Simply put: I do not believe that bureaucrats sitting around a table could design and execute the implementation of a set of standards that would improve status-quo AI development practices, and I think the odds are high that any such effort would worsen safety and security practices.

Thursday, February 12, 2026

I Regret to Inform You that the FDA is FDAing Again

I had high hopes and low expectations that the FDA under the new administration would be less paternalistic and more open to medical freedom. Instead, what we are getting is paternalism with different preferences. In particular, the FDA now appears to have a bizarre anti-vaccine fixation, particularly of the mRNA variety (disappointing but not surprising given the leadership of RFK Jr.).

The latest is that the FDA has issued a Refusal-to-File (RTF) letter to Moderna for their mRNA influenza vaccine, mRNA-1010. An RTF means the FDA has determined that the application is so deficient it doesn’t even warrant a review. RTF letters are not unheard of, but they’re rare—especially given that Moderna spent hundreds of millions of dollars running Phase 3 trials enrolling over 43,000 participants based on FDA guidance, and is now being told the (apparently) agreed-upon design was inadequate. [...]

In context, this looks like the regulatory rules of the game are being changed retroactively—a textbook example of regulatory uncertainty destroying option value. STAT News reports that Vinay Prasad personally handled the letter and overrode staff who were prepared to proceed with review. Moderna took the unusual step of publicly releasing Prasad’s letter—companies almost never do this, suggesting they’ve calculated the reputational risk of publicly fighting the FDA is lower than the cost of acquiescing.

Moreover, the comparator issue was discussed—and seemingly settled—beforehand. Moderna says the FDA agreed with the trial design in April 2024, and as recently as August 2025 suggested it would file the application and address comparator issues during the review process.

Finally, Moderna also provided immunogenicity and safety data from a separate Phase 3 study in adults 65+ comparing mRNA-1010 against a licensed high-dose flu vaccine, just as FDA had requested—yet the application was still refused.

What is most disturbing is not the specifics of this case but the arbitrariness and capriciousness of the process. The EU, Canada, and Australia have all accepted Moderna’s application for review. We may soon see an mRNA flu vaccine available across the developed world but not in the United States—not because it failed on safety or efficacy, but because FDA political leadership decided, after the fact, that the comparator choice they inherited was now unacceptable.

The irony is staggering. Moderna is an American company. Its mRNA platform was developed at record speed with billions in U.S. taxpayer support through Operation Warp Speed — the signature public health achievement of the first Trump administration. The same government that funded the creation of this technology is now dismantling it. In August, HHS canceled $500 million in BARDA contracts for mRNA vaccine development and terminated a separate $590 million contract with Moderna for an avian flu vaccine. Several states have introduced legislation to ban mRNA vaccines. Insanity.

The consequences are already visible. In January, Moderna’s CEO announced the company will no longer invest in new Phase 3 vaccine trials for infectious diseases: “You cannot make a return on investment if you don’t have access to the U.S. market.” Vaccines for Epstein-Barr virus, herpes, and shingles have been shelved. That’s what regulatory roulette buys you: a shrinking pipeline of medical innovation.

An administration that promised medical freedom is delivering medical nationalism: fewer options, less innovation, and a clear signal to every company considering pharmaceutical investment that the rules can change after the game is played. And this isn’t a one-product story. mRNA is a general-purpose platform with spillovers across infectious disease and vaccines for cancer; if the U.S. turns mRNA into a political third rail, the investment, talent, and manufacturing will migrate elsewhere. America built this capability, and we’re now choosing to export it—along with the health benefits.

by Alex Tabarrok, Marginal Revolution |  Read more:
Image: Brian Snyder/Reuters

Monday, February 9, 2026

Ultrastructural and Histological Cryopreservation of Mammalian Brains by Vitrification

Abstract

Studies of whole brain cryopreservation are rare but are potentially important for a variety of applications. It has been demonstrated that ultrastructure in whole rabbit and pig brains can be cryopreserved by vitrification (ice-free cryopreservation) after prior aldehyde fixation, but fixation limits the range of studies that can be done by neurobiologists, including studies that depend upon general molecular integrity, signal transduction, macromolecular synthesis, and other physiological processes. We now show that whole brain ultrastructure can be preserved by vitrification without prior aldehyde fixation. Rabbit brain perfusion with the M22 vitrification solution followed by vitrification, warming, and fixation showed an absence of visible ice damage and overall structural preservation, but osmotic brain shrinkage sufficient to distort and obscure neuroanatomical detail. Neuroanatomical preservation in the presence of M22 was also investigated in human cerebral cortical biopsies taken after whole brain perfusion with M22. These biopsies did not form ice upon cooling or warming, and high power electron microscopy showed dehydrated and electron-dense but predominantly intact cells, neuropil, and synapses with no signs of ice crystal damage, and partial dilution of these samples restored normal cortical pyramidal cell shapes. To further evaluate ultrastructural preservation within the severely dehydrated brain, rabbit brains were perfused with M22 and then partially washed free of M22 before fixation. Perfusion dilution of the brain to 3-5M M22 resulted in brain re-expansion and the re-appearance of well-defined neuroanatomical features, but rehydration of the brain to 1M M22 resulted in ultrastructural damage suggestive of preventable osmotic injury caused by incomplete removal of M22. We conclude that both animal and human brains can be cryopreserved by vitrification with predominant retention of ultrastructural integrity without the need for prior aldehyde fixation. This observation has direct relevance to the feasibility of human cryopreservation, for which direct evidence has been lacking until this report. It also provides a starting point for perfecting brain cryopreservation, which may be necessary for lengthy space travel and could allow future medical time travel.

by Gregory M. Fahy, Ralf Spindler, Brian G. Wowk, Victor Vargas, Richard La, Bruce Thomson, Roberto Roa, Hugh Hixon, Steve Graber, Xian Ge, Adnan Sharif, Stephen B. Harris, L. Stephen Coles, bioRxiv |  Read more:

[ed. Uh oh. There are a few brains I'd prefer not to see preserved (...like whoever could pay for this). Which reminds me:]

Did you know: Larry Ellison christened his yacht Izanami for a Shinto sea god, but had to hurriedly rename it after it was pointed out that, when spelled backwards, it becomes “I’m a Nazi”. (next year’s story: Elon Musk renames his yacht after being told that, spelled backwards, it becomes the name of a Shinto sea god). 

Friday, January 30, 2026

Here Come the Beetles

The nearly 100-year-old Wailua Municipal Golf Course is home to more than 580 coconut trees. It’s also one of Kaua‘i’s most visible sites for coconut rhinoceros beetle damage.

Located makai of Kūhiō Highway, trees that would normally have full, verdant leaves are dull and have V-shaped cuts in their fronds. Some are bare and look more like matchsticks.

It’s not for lack of trying to mitigate the invasive pest. The trees’ crowns have been sprayed with a pesticide twice, and the trunks were injected twice with a systemic pesticide for longer term protection.

The Kaua‘i Department of Parks & Recreation maintains that even though the trees still look damaged, the treatments are working. Staff have collected 1,679 fallen, dead adult beetles over the last three years.

The most recent treatment, a systemic pesticide that travels through the trees’ vascular systems, was done in January 2025. While crown sprays kill the beetle on contact, systemic pesticides require the beetles to feed from the trees to die. The bugs eat the trees’ hearts — where new fronds develop — so it can take months for foliage damage to appear.
 
“The general public sees these trees that are damaged and thinks, ‘Oh my goodness they’re getting whacked,’ but in actuality, we need them to get whacked to kill (the beetles),” said Patrick Porter, county parks director.

But with the beetles continuing to spread around the island, the county is increasingly turning its attention to green waste, mulch piles and other breeding sites, where beetles spend four to six months growing from eggs to adults. A single adult female beetle can lay up to 140 eggs in her lifetime.

“The reality is if you don’t go after the larvae and you don’t go after your mulch cycle, you’re just pissing in the wind,” said Kaua‘i County Council member Fern Holland. “Because there are just going to be hundreds and hundreds of them hatching all the time, and you can’t go after all of them.” (...)

Last May, the County Council allocated $100,000 for invasive species and another $100,000 for CRB. It was the first time the county designated funds specifically to address the beetle.

Niki Kunioka-Volz, economic development specialist with the Kaua‘i Office of Economic Development, said none of that funding has been spent yet.

They’re considering using it to help get the breeding site at the Wailua golf course under control, such as by purchasing an air curtain burner, a fan-powered incinerator of sorts to dispose of green waste. The burner could also be a tool for the broader community. (...)

In 2024, the county received $200,000 from the state Department of Agriculture. That money was used for a CRB outreach campaign, training CRB detection dogs and distributing deterrent materials. State funding was also expected to help the county purchase a curtain burner, but that plan fell through.

Earlier this month, state legislators threatened to cut invasive species funding from the newly expanded Hawai‘i Department of Agriculture and Biosecurity over its slow progress in curbing threats such as coconut rhinoceros beetles.

“I’d like to see the pressure put on them to release the funds to the counties,” Holland said.

by Noelle Fujii-Oride, Honolulu Civil Beat | Read more:
Image: Kevin Fujii/David Croxford/Civil Beat
[ed. Tough, ugly, able to leap sleeping bureaucrats in a single bound. See also: As Palm-Killing Beetles Spread On Big Island, State Action Is Slow (CB):]
***
It has been nearly two years since the first coconut rhinoceros beetle was discovered on Hawaiʻi island. And yet, despite ongoing concern from residents, the state is moving slowly in devising its response.

Seven months ago, the state’s Department of Agriculture and Biosecurity said it would begin working to stop the spread of CRB, within and beyond North Kona. But a meeting of the agency’s board Tuesday marked the first concrete step to do so by regulators. Now, as agriculture department staff move to streamline and resolve apparent issues in the proposed regulations, it will likely take until March for the board to consider implementing them.

Many of the attendees at Tuesday’s meeting, including residents of other islands, said that the state is lagging on its pledge to regulate the movement of agricultural materials while the destructive pest is spreading and killing both the island’s coconut palms and its endangered, endemic loulu palms.

The First Two Years

Before making landfall on Hawaiʻi island in 2023, the beetles spent almost a decade in apparent confinement on Oʻahu.

At first they appeared to be isolated to Waikoloa. Then, in March of last year, larvae and beetles were discovered at Kona International Airport and the state-owned, 179-acre Keāhole Agriculture Park, before spreading further.

In response, the county implemented a voluntary order to discourage the movement of potentially-infested live plants, mulch and green waste, and other landscaping materials such as compost from the area in June 2025. The order was described as “a precursor to a mandatory compliance structure” to be implemented by the state, according to a press release from the time. (...)

The board spent about an hour considering the petition and hearing testimony. And while many who testified made recommendations about actual protocol that might be put into place, the board merely voted to move forward in the process. So it’s not yet clear whether it will adopt the Big Island petitioner’s proposed rules or create its own.

Wednesday, January 28, 2026

Why Even the Healthiest People Hit a Wall at Age 70

Are we currently determining how much of aging comes down to lifestyle changes and interventions, and how much of it is basically your genetic destiny?

 

[Transcript:] We are constantly being bombarded with health and lifestyle advice at the moment. I feel like I cannot open my social media feeds without seeing adverts for supplements or diet plans or exercise regimes. And I think that this really is a distraction from the big goals of longevity science. This is a really difficult needle to thread when it comes to talking about this stuff because I'm a huge advocate for public health. I think if we could help people eat better, if we could help 'em do more exercise, if we could help 'em quit smoking, this would have enormous effects on our health, on our economies all around the world. But this sort of micro-optimization, these three-hour long health podcasts that people are digesting on a daily basis these days, I think we're really majoring in the minors. We're trying to absolutely eke out every last single thing when it comes to living healthily. And I think the problem is that there are real limits to what we can do with health advice. 

So for example, there was a study that came out recently that was all over my social media feeds. And the headline was that by eating the best possible diet, you can double your chance of aging healthily. But I decided to dig into the results table. The healthiest diet was something called the Alternative Healthy Eating Index or AHEI. And even among the people sticking most closely to this best diet, according to this study, the top 20% of adherence to the AHEI, only 13.6% made it to 70 years old without any chronic diseases. That means that over 85% of the people sticking to the best diet, according to this study, got to the age of 70 with at least something wrong with them. And that shows us that optimizing diet can only go so far.

We're not talking about immortality or living to 120 here. If you wanna be 70 years old and in good enough health to play with your grandkids, I cannot guarantee that you can do that no matter how good your diet is. And that's why we need longevity medicine to help keep people healthier for longer. And actually, I think even this idea of 120, 150-year-old lifespans, you know, immortality even as a word that's often thrown around, I think the main thing we're trying to do is get people to 80, 90 years old in good health. 'cause we already know that most people alive today, when they reach that age, are unfortunately gonna be frail. They're probably gonna be suffering from two or three or four different diseases simultaneously. And what we wanna do is try and keep people healthier for longer. And by doing that, they probably will live longer but kind of as a side effect. 

If you look at photographs of people from the past, they often look older than people in the present day who are the same age. And part of that is the terrible fashion choices that people made in the past. And we can look back and, you know, understand the mistakes they've made with hindsight. But part of it actually is aging biology. I think the fact that people can be different biological ages at the same chronological age is something that's really quite intuitive. All of us know people who've waltzed into their 60s looking great and, you know, basically as fit as someone in their 40s or 50s. And we know similar people who have also gone into their 60s, but they're looking haggard, they've got multiple different diseases, they're already struggling through life.

In the last decade, scientists have come up with various measures of what's called biological age as distinct from chronological age. So your chronological age is just how many candles there are on your birthday cake. And obviously, you know, most of us are familiar with that. But the idea of biological age is to look inside your cells, look inside your body, and work out how old you are on a biological level. Now we aren't perfect at doing this yet, but we do have a variety of different measures. We can use blood tests, we can use what are called epigenetic tests, or we can do things that are far more basic and functional: how strong your grip is, for example, declines with age. And by comparing the value of something like your grip strength to that of an average person of a given age, we can assign you a biological age value. And I think the ones that are getting the most buzz at the moment, within the scientific community but also all around the internet, are these epigenetic age tests.

So the way that this works is that you'll take a blood test or a saliva sample and scientists will measure something about your epigenome. So the genome is your DNA, it's the instruction manual of life. And the epigenome is a layer of chemistry that sits on top of your genome. If you think of your DNA as that instruction manual, then the epigenome is the notes in the margin. It's the little sticky notes that have been stuck on the side and they tell the cell which DNA to use at which particular time. And we know that there are changes to this epigenome as you get older. And so by measuring the changes in the epigenome, you can assign someone a biological age.
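
[ed. For the curious, here is a minimal, hypothetical sketch of the arithmetic behind these clocks. Published epigenetic clocks such as Horvath's are essentially linear models over methylation levels at selected CpG sites; the CpG IDs and weights below are invented purely for illustration, and real clocks use hundreds of sites with weights fit by regression. As the speaker notes next, a model like this captures correlation with age, not necessarily causation:]

```python
# Hypothetical illustration of an epigenetic clock: a linear model over
# DNA-methylation "beta values" (0 = unmethylated, 1 = fully methylated)
# at a handful of CpG sites. Real clocks (Horvath, Hannum, etc.) use
# hundreds of sites; the site IDs and weights here are invented.

HYPOTHETICAL_CLOCK = {
    "intercept": 34.7,       # made-up baseline, in years
    "weights": {
        "cg0000001": 21.5,   # methylation here tends to rise with age
        "cg0000002": -18.2,  # methylation here tends to fall with age
        "cg0000003": 9.8,
        "cg0000004": -4.1,
    },
}

def epigenetic_age(beta_values, clock=HYPOTHETICAL_CLOCK):
    """Estimate a 'biological age' as intercept + weighted sum of beta values."""
    age = clock["intercept"]
    for site, weight in clock["weights"].items():
        age += weight * beta_values[site]
    return age

# Example: beta values measured from a blood or saliva sample.
sample = {"cg0000001": 0.62, "cg0000002": 0.35, "cg0000003": 0.48, "cg0000004": 0.71}
print(f"Estimated biological age: {epigenetic_age(sample):.1f} years")  # ~43.5
```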

At the moment, these epigenetic clocks are a really great research tool. They're really deepening our understanding of biological aging in the lab. I think the problem with these tests as applied to individuals is we don't know enough about exactly what they're telling us. We don't know what these individual changes in epigenetic marks mean. We know they're correlated with age, but what we don't know is if they're causally related. And in particular, if you intervene, if you make a change in your lifestyle, if you start taking a certain supplement, and that reduces your biological age, we don't know whether that actually means you're gonna die later, or whether it means you're gonna stay healthier for longer, or whether you've done something that's kind of adjacent to that. And so we need to do more research to understand if we can causally impact these epigenetic measures. (...)

Machine learning and artificial intelligence are gonna be hugely, hugely important in understanding the biology of aging. Because the body is such a complicated system that in order to really understand it, we're gonna need these vast computer models to try and decode the data for us. The challenge is that what machine learning can do at the moment is it can identify correlations. So it can identify things that are associated with aging, but it can't necessarily tell us what's causing something else. So for example, in the case of these epigenetic clocks, the parts of the epigenome that change with age have been identified because they correlate. But what we don't know is if you intervene in any one of these individual epigenetic marks, if you move it in the direction of something younger, does that actually make people healthier? And so what we need to do is more experiments where we try and work out if we can intervene in these epigenetic, in these biological clocks, can we make people live healthier for longer? 

Over the last 10 or 15 years, scientists have really started to understand the fundamental underlying biology of the aging process. And they broke this down into what are called the 12 hallmarks of aging. One of those hallmarks is the accumulation of senescent cells. Now senescent is just a biological technical term for old. These are cells that accumulate in all of our bodies as the years go by. And scientists have noticed that these cells seem to drive a range of different diseases as we get older. And so the idea was what if we could remove these cells and leave the rest of the cells of the body intact? Could that slow down or even partially reverse the aging process? And scientists identified drugs called senolytics.

These are drugs that kill those senescent cells and they tried them out in mice and they do indeed effectively make the mice biologically younger. So if you give mice a course of senolytic drugs, it removes those senescent cells from their body. And firstly, it makes them live a bit longer. That's a good thing if you're slowing down the aging process, the basic thing you want to see. But it's not dragging out that period of frailty at the end of life. It's keeping the mice healthier for longer so they get less cancer, they get less heart disease, they get fewer cataracts. The mice are also less frail. They basically send the mice to a tiny mouse-scale gym in these experiments. And the mice that have been given the drugs, they can run further and faster on the mousey treadmills that they try them out on. 

It also seems to reverse some of the cognitive effects that come along with aging. So if you put an older mouse in a maze, it's often a bit anxious, doesn't really want to explore. Whereas a younger mouse is desperate to, you know, run around and find the cheese or whatever it is mice do in mazes. And by giving them these senolytic drugs, you can unlock some of that youthful curiosity. And finally, these mice just look great. You do not need to be an expert mouse biologist to see which one has had the pills and which one hasn't. They've got thicker fur. They've got plumper skin. They've got brighter eyes. They've got less fat on their bodies. And what this shows us is that by targeting the fundamental processes of aging, by identifying something like senescent cells that drives a whole range of age-related problems, we can hit much, perhaps even all, of the aging process with a single treatment.

Senescent cells are, of course, only one of these 12 hallmarks of aging. And I think in order to both understand and treat the aging process, we're potentially gonna need treatments for many, perhaps even all, of those hallmarks. There's never gonna be a single magic pill that can just make you live forever. Aging is much, much more complicated than that. But by understanding this relatively short list of underlying processes, maybe we can come up with 12 or 20 different treatments that can have a really big effect on how long we live.

One of the most exciting ideas in longevity science at the moment is what's called cellular reprogramming. I sometimes describe this as a treatment that has fallen through a wormhole from the future. This is the idea that we can reset the biological clock inside of our cells. And the idea first came about in the mid 2000s because there was a scientist called Shinya Yamanaka who was trying to find out how to turn regular adult body cells all the way back to the very beginning of their biological existence. And Yamanaka and his team were able to identify four genes that you could insert into a cell and turn back that biological clock. 

Now, he was interested in this from the point of view of creating stem cells, a cell that can create any other kind of cell in the body, which we might be able to use for tissue repair in future. But scientists also noticed that, as well as turning back the developmental clock on these cells, it also turns back the aging clock: cells that are given these four Yamanaka factors are actually biologically younger than cells that haven't had the treatment. And so what scientists decided to do was insert these Yamanaka factor genes into mice.

Now if you do this in a naive way, so the genes are active all the time, it's actually very bad news for the mice, unfortunately, because these stem cells, although they're very powerful in terms of what kind of cell they can become, are useless at being a liver cell or a heart cell. And so the mice very quickly died of organ failure. But you can activate these genes only transiently, and the way that scientists did it successfully the first time was essentially to activate them at weekends. They engineered the genes so that they could be switched on with a drug, gave the mice the drug for two days of the week, and then gave them five days off so the Yamanaka factors were suppressed. They found that this was enough to turn back the biological clock in those cells without turning back the developmental clock and turning them into stem cells. And that meant the mice stayed a little bit healthier. We now know that they can live a little bit longer with this treatment too.

Now the real challenge is that this is a gene therapy treatment. It involves delivering four different genes to every single cell in your body. The question is can we, with our puny 2020s biotechnology, make this into a viable treatment, a pill even, that we can actually use in human beings? I really think this idea of cellular reprogramming appeals to a particular tech billionaire sort of mentality. The idea that we can go in and edit the code of life and reprogram our biological age, it's a hugely powerful concept. And if this works, the fact that you can turn back the biological clock all the way to zero, this really is a very, very cool idea. And that's what's led various different billionaires from the Bay Area to invest huge, huge amounts of money in this. 

Altos Labs is the biggest so-called startup in this space. And I wouldn't really call it a startup 'cause it's got funding of $3 billion from, amongst other people, Jeff Bezos, the founder of Amazon. Now I'm very excited about this because I think $3 billion is enough to have a good go and see if we can turn this into a viable human treatment. My only concern is that epigenetics is only one of those hallmarks of aging. And so it might be the case that we solve aging inside our individual cells, but we leave other parts of the aging process intact. (...)

Probably the quickest short-term wins in longevity science are going to be repurposed existing drugs. And the reason for this is because we spent many, many years developing these drugs. We understand how they work in humans. We understand a bit about their safety profile. And because these molecules already exist, we've just tried them out in mice, in, you know, various organisms in the lab and found that a subset of them do indeed slow down the aging process. The first trial of a longevity drug that was proposed in humans was for a drug called metformin, which is a pre-existing drug that we prescribe actually for diabetes in this case, and has some indications that it might slow down the aging process in people. (...)

I think one of the ones that's got the most buzz around it at the moment is a drug called rapamycin. This is a drug that's been given for organ transplants. It's sometimes used to coat stents, which are these little things that you stick in the arteries around your heart to expand them if you've got a contraction of those arteries that's restricting the blood supply. But we also know from experiments in the lab that it can make all kinds of different organisms live longer, everything from single-cell yeast, to worms, to flies, to mice, and, in one of the latest results, to marmosets, which are primates and very, very evolutionarily close to us.

Rapamycin has this really incredible story. It was first isolated in bacteria from a soil sample from Easter Island, which is known as Rapa Nui in the local Polynesian language. That's where the drug gets its name. And when it was first isolated, it was discovered to be antifungal. It could stop fungal cells from growing. So that was what we thought we'd use it for initially. But when scientists started playing around with it in the lab, they realized it didn't just stop fungal cells from growing. It also stopped many other kinds of cells as well, up to and including human cells. And so the slight disadvantage was that if you used it as an antifungal agent, it would also stop your immune cells from being able to divide, which would obviously be a bit of a counterintuitive way to try and treat a fungal disease. So scientists decided to use it as an immune suppressant. It can stop your immune system from going haywire when you get an organ transplant, for example, and rejecting that new organ.

It was also developed as an anti-cancer drug, since it can stop cells dividing, and cancer is cells dividing out of control. But the way that rapamycin works is it targets a fundamental, central component of cellular metabolism. And we noticed that that seemed to be very, very important in the aging process. And so by tamping it down less than you would in a patient whose immune system you're trying to suppress, rather than stopping the cell dividing entirely, you can make it enter a state where it's much more efficient in its use of resources. It starts this process called autophagy, which is Greek for self-eating. And that means it consumes old, damaged proteins and then recycles them into fresh new ones. And that actually is a critical process in slowing down aging, biologically speaking. And in 2009, we found out for the first time that by giving it to mice late in life, you could actually extend their remaining lifespan. They live 10 or 15% longer. And this was a really incredible result.

This was the first time a drug had been shown to slow down aging in mammals. And accordingly, scientists have become very, very excited about it. And we've now tried it in loads of different contexts and loads of different animals and loads of different organisms at loads of different times in life. You can even wait until very late in a mouse's lifespan to give it rapamycin and you still see most of that same lifespan extension effect. And that's fantastic news potentially for us humans because not all of us, unfortunately, can start taking a drug from birth 'cause most of us were born quite a long time ago. But rapamycin still works even if you give it to mice who are the equivalent of 60 or 70 years old in human terms. And that means that for those of us who are already aged a little bit, rapamycin could still help us potentially. And there are already biohackers out there trying this out for themselves, hopefully with the help of a doctor to make sure that they're doing everything as safely as possible to try and extend their healthy life. And so the question is: should we do a human trial of rapamycin to find out if it can slow down the aging process in people as well? (...)

We've already got dozens of ideas in the lab for ways to slow down, maybe even reverse the age of things like mice and cells in a dish. And that means we've got a lot of shots on goal. I think it'll be wildly unlucky if none of the things that slow down aging in the lab actually translate to human beings. That doesn't mean that most of them will work, probably most of them won't, but we only need one or two of them to succeed and really make a big difference. And I think a great example of this is GLP-1 drugs, the ozempics, the things that are allowing people to suddenly lose a huge amount of weight. We've been looking for decades for these weight loss drugs, and now we finally found them. It's shown that these breakthroughs are possible, they can come out of left field. And all we need to do in some cases is a human trial to find out if these drugs actually work in people. 

And what that means is that, you know, the average person on planet earth is under the age of 40. They've probably got 40 or 50 years of life expectancy left depending on the country that they live in. And that's an awful lot of time for science to happen. And if then in the next 5 or 10 years, we do put funding toward these human trials, we might have those first longevity drugs that might make you live one or two or five years longer. And that gives scientists even more time to develop the next treatment. And if we think about some more advanced treatments, not just drugs, things like stem cell therapy or gene therapy, those things can sound pretty sci-fi. But actually, we know that these things are already being deployed in hospitals and clinics around the world. They're being deployed for specific serious diseases, for example, where we know that a single gene can be a problem and we can go in and fix that gene and give a child a much better chance at a long, healthy life. 

But as we learn how these technologies work in the context of these serious diseases, we're gonna learn how to make them effective. And most importantly, we're gonna learn how to make them safe. And so we could imagine doing longevity gene edits in human beings, perhaps not in the next five years, but I think it'll be foolish to bet against it happening in the next 20 years, for example. 

by Andrew Steele, The Big Think |  Read more:
Image: Yamanaka factors via:
[ed. See also: Researchers Are Using A.I. to Decode the Human Genome (NYT).]

Thursday, January 1, 2026

Leonardo’s Wood Charring Method Predates Japanese Practice

Yakisugi is a Japanese architectural technique for charring the surface of wood. It has become quite popular in bioarchitecture because the carbonized layer protects the wood from water, fire, insects, and fungi, thereby prolonging the lifespan of the wood. Yakisugi techniques were first codified in written form in the 17th and 18th centuries. But it seems Italian Renaissance polymath Leonardo da Vinci wrote about the protective benefits of charring wood surfaces more than 100 years earlier, according to a paper published in Zenodo, an open repository for EU-funded research.

Check the notes

As previously reported, Leonardo produced more than 13,000 pages in his notebooks (later gathered into codices), less than a third of which have survived. The notebooks contain all manner of inventions that foreshadow future technologies: flying machines, bicycles, cranes, missiles, machine guns, an “unsinkable” double-hulled ship, dredges for clearing harbors and canals, and floating footwear akin to snowshoes to enable a person to walk on water. Leonardo foresaw the possibility of constructing a telescope in his Codex Atlanticus (1490)—he wrote of “making glasses to see the moon enlarged” a century before the instrument’s invention.

In 2003, Alessandro Vezzosi, director of Italy’s Museo Ideale, came across some recipes for mysterious mixtures while flipping through Leonardo’s notes. Vezzosi experimented with the recipes, resulting in a mixture that would harden into a material eerily akin to Bakelite, a synthetic plastic widely used in the early 1900s. So Leonardo may well have invented the first manmade plastic.

The notebooks also contain Leonardo’s detailed notes on his extensive anatomical studies. Most notably, his drawings and descriptions of the human heart captured how heart valves can control blood flow 150 years before William Harvey worked out the basics of the human circulatory system. (In 2005, a British heart surgeon named Francis Wells pioneered a new procedure to repair damaged hearts based on Leonardo’s heart valve sketches and subsequently wrote the book The Heart of Leonardo.)

In 2023, Caltech researchers made another discovery: lurking in the margins of Leonardo’s Codex Arundel were several small sketches of triangles, their geometry seemingly determined by grains of sand poured out from a jar. The little triangles were his attempt to draw a link between gravity and acceleration—well before Isaac Newton came up with his laws of motion. By modern calculations, Leonardo’s model produced a value for the gravitational constant (G) to around 97 percent accuracy. And Leonardo did all this without a means of accurate timekeeping and without the benefit of calculus. The Caltech team was even able to re-create a modern version of the experiment.

“Burnt Japanese cedar”


Annalisa Di Maria, a Leonardo expert with the UNESCO Club of Florence, collaborated with molecular biologist and sculptor Andrea da Montefeltro and art historian Lucica Bianchi on this latest study, which concerns the Codex Madrid II. They had noticed one nearly imperceptible phrase in particular on folio 87r concerning wood preservation: “They will be better preserved if stripped of bark and burned on the surface than in any other way,” Leonardo wrote.

“This is not folklore,” the authors noted. “It is a technical intuition that precedes cultural codification.” Leonardo was interested in the structural properties of materials like wood, stone, and metal, as both an artist and an engineer, and would have noticed from firsthand experience that raw wood with its bark intact retained moisture and decayed more quickly. Furthermore, Leonardo’s observation coincides with what the authors describe as a “crucial moment for European material culture,” when “woodworking was receiving renewed attention in artistic workshops and civil engineering studies.”

Leonardo did not confine his woody observations to just that one line. The Codex includes discussions of how different species of wood conferred different useful properties: oak and chestnut for strength, ash and linden for flexibility, and alder and willow for underwater construction. Leonardo also noted that chestnut and beech were ideal as structural reinforcements, while maple and linden worked well for constructing musical instruments given their good acoustic properties. He even noted a natural method for seasoning logs: leaving them “above the roots” for better sap drainage.

The Codex Madrid II dates to 1503-1505, over a century before the earliest known written codifications of yakisugi, although it is probable that the method was used a bit before then. Per Di Maria et al., there is no evidence of any direct contact between Renaissance European culture and Japanese architectural practices, so this seems to be a case of “convergent invention.”

The benefits of this method of wood preservation have since been well documented by science, although the effectiveness is dependent on a variety of factors, including wood species and environmental conditions. The fire’s heat seals the pores of the wood so it absorbs less water—a natural means of waterproofing. The charred surface serves as natural insulation for fire resistance. And stripping the bark removes nutrients that attract insects and fungi, a natural form of biological protection.

by Jennifer Ouellette, Ars Technica |  Read more:
Images: A. Di Maria et al., 2025; Unimoi/CC BY-SA 4.0; and Lorna Satchell/CC BY 4.0

Sunday, December 21, 2025

The Day the Dinosaurs Died

A young paleontologist may have discovered a record of the most significant event in the history of life on Earth. “It’s like finding the Holy Grail clutched in the bony fingers of Jimmy Hoffa, sitting on top of the Lost Ark."

If, on a certain evening about sixty-six million years ago, you had stood somewhere in North America and looked up at the sky, you would have soon made out what appeared to be a star. If you watched for an hour or two, the star would have seemed to grow in brightness, although it barely moved. That’s because it was not a star but an asteroid, and it was headed directly for Earth at about forty-five thousand miles an hour. Sixty hours later, the asteroid hit. The air in front was compressed and violently heated, and it blasted a hole through the atmosphere, generating a supersonic shock wave. The asteroid struck a shallow sea where the Yucatán peninsula is today. In that moment, the Cretaceous period ended and the Paleogene period began.

A few years ago, scientists at Los Alamos National Laboratory used what was then one of the world’s most powerful computers, the so-called Q Machine, to model the effects of the impact. The result was a slow-motion, second-by-second false-color video of the event. Within two minutes of slamming into Earth, the asteroid, which was at least six miles wide, had gouged a crater about eighteen miles deep and lofted twenty-five trillion metric tons of debris into the atmosphere. Picture the splash of a pebble falling into pond water, but on a planetary scale. When Earth’s crust rebounded, a peak higher than Mt. Everest briefly rose up. The energy released was more than that of a billion Hiroshima bombs, but the blast looked nothing like a nuclear explosion, with its signature mushroom cloud. Instead, the initial blowout formed a “rooster tail,” a gigantic jet of molten material, which exited the atmosphere, some of it fanning out over North America. Much of the material was several times hotter than the surface of the sun, and it set fire to everything within a thousand miles. In addition, an inverted cone of liquefied, superheated rock rose, spread outward as countless red-hot blobs of glass, called tektites, and blanketed the Western Hemisphere.

Some of the ejecta escaped Earth’s gravitational pull and went into irregular orbits around the sun. Over millions of years, bits of it found their way to other planets and moons in the solar system. Mars was eventually strewn with the debris—just as pieces of Mars, knocked aloft by ancient asteroid impacts, have been found on Earth. A 2013 study in the journal Astrobiology estimated that tens of thousands of pounds of impact rubble may have landed on Titan, a moon of Saturn, and on Europa and Callisto, which orbit Jupiter—three satellites that scientists believe may have promising habitats for life. Mathematical models indicate that at least some of this vagabond debris still harbored living microbes. The asteroid may have sown life throughout the solar system, even as it ravaged life on Earth.

The asteroid was vaporized on impact. Its substance, mingling with vaporized Earth rock, formed a fiery plume, which reached halfway to the moon before collapsing in a pillar of incandescent dust. Computer models suggest that the atmosphere within fifteen hundred miles of ground zero became red hot from the debris storm, triggering gigantic forest fires. As the Earth rotated, the airborne material converged at the opposite side of the planet, where it fell and set fire to the entire Indian subcontinent. Measurements of the layer of ash and soot that eventually coated the Earth indicate that fires consumed about seventy per cent of the world’s forests. Meanwhile, giant tsunamis resulting from the impact churned across the Gulf of Mexico, tearing up coastlines, sometimes peeling up hundreds of feet of rock, pushing debris inland and then sucking it back out into deep water, leaving jumbled deposits that oilmen sometimes encounter in the course of deep-sea drilling.

The damage had only begun. Scientists still debate many of the details, which are derived from the computer models, and from field studies of the debris layer, knowledge of extinction rates, fossils and microfossils, and many other clues. But the over-all view is consistently grim. The dust and soot from the impact and the conflagrations prevented all sunlight from reaching the planet’s surface for months. Photosynthesis all but stopped, killing most of the plant life, extinguishing the phytoplankton in the oceans, and causing the amount of oxygen in the atmosphere to plummet. After the fires died down, Earth plunged into a period of cold, perhaps even a deep freeze. Earth’s two essential food chains, in the sea and on land, collapsed. About seventy-five per cent of all species went extinct. More than 99.9999 per cent of all living organisms on Earth died, and the carbon cycle came to a halt.

Earth itself became toxic. When the asteroid struck, it vaporized layers of limestone, releasing into the atmosphere a trillion tons of carbon dioxide, ten billion tons of methane, and a billion tons of carbon monoxide; all three are powerful greenhouse gases. The impact also vaporized anhydrite rock, which blasted ten trillion tons of sulfur compounds aloft. The sulfur combined with water to form sulfuric acid, which then fell as an acid rain that may have been potent enough to strip the leaves from any surviving plants and to leach the nutrients from the soil.

Today, the layer of debris, ash, and soot deposited by the asteroid strike is preserved in the Earth’s sediment as a stripe of black about the thickness of a notebook. This is called the KT boundary, because it marks the dividing line between the Cretaceous period and the Tertiary period. (The Tertiary has been redefined as the Paleogene, but the term “KT” persists.) Mysteries abound above and below the KT layer. In the late Cretaceous, widespread volcanoes spewed vast quantities of gas and dust into the atmosphere, and the air contained far higher levels of carbon dioxide than the air that we breathe now. The climate was tropical, and the planet was perhaps entirely free of ice. Yet scientists know very little about the animals and plants that were living at the time, and as a result they have been searching for fossil deposits as close to the KT boundary as possible.

One of the central mysteries of paleontology is the so-called “three-metre problem.” In a century and a half of assiduous searching, almost no dinosaur remains have been found in the layers three metres, or about nine feet, below the KT boundary, a depth representing many thousands of years. Consequently, numerous paleontologists have argued that the dinosaurs were on the way to extinction long before the asteroid struck, owing perhaps to the volcanic eruptions and climate change. Other scientists have countered that the three-metre problem merely reflects how hard it is to find fossils. Sooner or later, they’ve contended, a scientist will discover dinosaurs much closer to the moment of destruction.

Locked in the KT boundary are the answers to our questions about one of the most significant events in the history of life on the planet. If you look at the Earth as a kind of living organism, as many biologists do, you could say that it was shot by a bullet and almost died. Deciphering what happened on the day of destruction is crucial not only to solving the three-metre problem but also to explaining our own genesis as a species.

On August 5, 2013, I received an e-mail from a graduate student named Robert DePalma. I had never met DePalma, but we had corresponded on paleontological matters for years, ever since he had read a novel I’d written that centered on the discovery of a fossilized Tyrannosaurus rex killed by the KT impact. “I have made an incredible and unprecedented discovery,” he wrote me, from a truck stop in Bowman, North Dakota. “It is extremely confidential and only three others know of it at the moment, all of them close colleagues.” He went on, “It is far more unique and far rarer than any simple dinosaur discovery. I would prefer not outlining the details via e-mail, if possible.” He gave me his cell-phone number and a time to call...

DePalma’s find was in the Hell Creek geological formation, which outcrops in parts of North Dakota, South Dakota, Montana, and Wyoming, and contains some of the most storied dinosaur beds in the world. At the time of the impact, the Hell Creek landscape consisted of steamy, subtropical lowlands and floodplains along the shores of an inland sea. The land teemed with life and the conditions were excellent for fossilization, with seasonal floods and meandering rivers that rapidly buried dead animals and plants.

Dinosaur hunters first discovered these rich fossil beds in the late nineteenth century. In 1902, Barnum Brown, a flamboyant dinosaur hunter who worked at the American Museum of Natural History, in New York, found the first Tyrannosaurus rex here, causing a worldwide sensation. One paleontologist estimated that in the Cretaceous period Hell Creek was so thick with T. rexes that they were like hyenas on the Serengeti. It was also home to triceratops and duckbills. (...)

Today, DePalma, now thirty-seven, is still working toward his Ph.D. He holds the unpaid position of curator of vertebrate paleontology at the Palm Beach Museum of Natural History, a nascent and struggling museum with no exhibition space. In 2012, while looking for a new pond deposit, he heard that a private collector had stumbled upon an unusual site on a cattle ranch near Bowman, North Dakota. (Much of the Hell Creek land is privately owned, and ranchers will sell digging rights to whoever will pay decent money, paleontologists and commercial fossil collectors alike.) The collector felt that the site, a three-foot-deep layer exposed at the surface, was a bust: it was packed with fish fossils, but they were so delicate that they crumbled into tiny flakes as soon as they met the air. The fish were encased in layers of damp, cracked mud and sand that had never solidified; it was so soft that it could be dug with a shovel or pulled apart by hand. In July, 2012, the collector showed DePalma the site and told him that he was welcome to it. (...)

The following July, DePalma returned to do a preliminary excavation of the site. “Almost right away, I saw it was unusual,” he told me. He began shovelling off the layers of soil above where he’d found the fish. This “overburden” is typically material that was deposited long after the specimen lived; there’s little in it to interest a paleontologist, and it is usually discarded. But as soon as DePalma started digging he noticed grayish-white specks in the layers which looked like grains of sand but which, under a hand lens, proved to be tiny spheres and elongated droplets. “I think, Holy shit, these look like microtektites!” DePalma recalled. Microtektites are the blobs of glass that form when molten rock is blasted into the air by an asteroid impact and falls back to Earth in a solidifying drizzle. The site appeared to contain microtektites by the million.

As DePalma carefully excavated the upper layers, he began uncovering an extraordinary array of fossils, exceedingly delicate but marvellously well preserved. “There’s amazing plant material in there, all interlaced and interlocked,” he recalled. “There are logjams of wood, fish pressed against cypress-tree root bundles, tree trunks smeared with amber.” Most fossils end up being squashed flat by the pressure of the overlying stone, but here everything was three-dimensional, including the fish, having been encased in sediment all at once, which acted as a support. “You see skin, you see dorsal fins literally sticking straight up in the sediments, species new to science,” he said. As he dug, the momentousness of what he had come across slowly dawned on him. If the site was what he hoped, he had made the most important paleontological discovery of the new century.

by Douglas Preston, New Yorker |  Read more:
Image: Richard Barnes

Thursday, December 18, 2025

Finding Peter Putnam

The forgotten janitor who discovered the logic of the mind

The neighborhood was quiet. There was a chill in the air. The scent of Spanish moss hung from the cypress trees. Plumes of white smoke rose from the burning cane fields and stretched across the skies of Terrebonne Parish. The man swung a long leg over a bicycle frame and pedaled off down the street.

It was 1987 in Houma, Louisiana, and he was headed to the Department of Transportation, where he was working the night shift, sweeping floors and cleaning toilets. He was just picking up speed when a car came barreling toward him with a drunken swerve.

A screech shot down the corridor of East Main Street, echoed through the vacant lots, and rang out over the Bayou.

Then silence.
 
The 60-year-old man lying on the street, as far as anyone knew, was just a janitor hit by a drunk driver. There was no mention of it on the local news, no obituary in the morning paper. His name might have been Anonymous. But it wasn’t.

His name was Peter Putnam. He was a physicist who’d hung out with Albert Einstein, John Archibald Wheeler, and Niels Bohr, and two blocks from the crash, in his run-down apartment, where his partner, Claude, was startled by a screech, were thousands of typed pages containing a groundbreaking new theory of the mind.

“Only two or three times in my life have I met thinkers with insights so far reaching, a breadth of vision so great, and a mind so keen as Putnam’s,” Wheeler said in 1991. And Wheeler, who coined the terms “black hole” and “wormhole,” had worked alongside some of the greatest minds in science.

Robert Works Fuller, a physicist and former president of Oberlin College, who worked closely with Putnam in the 1960s, told me in 2012, “Putnam really should be regarded as one of the great philosophers of the 20th century. Yet he’s completely unknown.”

That word—unknown—it came to haunt me as I spent the next 12 years trying to find out why.

The American Philosophical Society Library in Philadelphia, with its marbled floors and chandeliered ceilings, is home to millions of rare books and manuscripts, including John Wheeler’s notebooks. I was there in 2012, fresh off writing a physics book that had left me with nagging questions about the strange relationship between observer and observed. Physics seemed to suggest that observers play some role in the nature of reality, yet who or what an observer is remained a stubborn mystery.

Wheeler, who made key contributions to nuclear physics, general relativity, and quantum gravity, had thought more about the observer’s role in the universe than anyone—if there was a clue to that mystery anywhere, I was convinced it was somewhere in his papers. That’s when I turned over a mylar overhead, the kind people used to lay on projectors, with the titles of two talks, as if given back-to-back at the same unnamed event:

Wheeler: From Reality to Consciousness

Putnam: From Consciousness to Reality

Putnam, it seemed, had been one of Wheeler’s students, whose opinion Wheeler held in exceptionally high regard. That was odd, because Wheeler’s students were known for becoming physics superstars, earning fame, prestige, and Nobel Prizes: Richard Feynman, Hugh Everett, and Kip Thorne.

Back home, a Google search yielded images of a very muscly, very orange man wearing a very small speedo. This, it turned out, was the wrong Peter Putnam. Eventually, I stumbled on a 1991 article in the Princeton Alumni Weekly newsletter called “Brilliant Enigma.” “Except for the barest outline,” the article read, “Putnam’s life is ‘veiled,’ in the words of Putnam’s lifelong friend and mentor, John Archibald Wheeler.”

A quick search of old newspaper archives turned up an intriguing article from the Associated Press, published six years after Putnam’s death. “Peter Putnam lived in a remote bayou town in Louisiana, worked as a night watchman on a swing bridge [and] wrote philosophical essays,” the article said. “He also tripled the family fortune to about $40 million by investing successfully in risky stock ventures.”

The questions kept piling up. Forty million dollars?

I searched a while longer for any more information but came up empty-handed. But I couldn’t forget about Peter Putnam. His name played like a song stuck in my head. I decided to track down anyone who might have known him.

The only paper Putnam ever published was co-authored with Robert Fuller, so I flew from my home in Cambridge, Massachusetts, to Berkeley, California, to meet him. Fuller was nearing 80 years old but had an imposing presence and a booming voice. He sat across from me in his sun-drenched living room, seeming thrilled to talk about Putnam yet plagued by some palpable regret.

Putnam had developed a theory of the brain that “ranged over the whole of philosophy, from ethics to methodology to mathematical foundations to metaphysics,” Fuller told me. He compared Putnam’s work to Alan Turing’s and Kurt Gödel’s. “Turing, Gödel, and Putnam—they’re three peas in a pod,” Fuller said. “But one of them isn’t recognized.” (...)

Phillips Jones, a physicist who worked alongside Putnam in the early 1960s, told me over the phone, “We got the sense that what Einstein’s general theory was for physics, Peter’s model would be for the mind.”

Even Einstein himself was impressed with Putnam. At 19 years old, Putnam went to Einstein’s house to talk with him about Arthur Stanley Eddington, the British astrophysicist. (Eddington performed the key experiment that proved Einstein’s theory of gravity.) Putnam was obsessed with an allegory by Eddington about a fisherman and wanted to ask Einstein about it. Putnam also wanted Einstein to give a speech promoting world government to a political group he’d organized. Einstein—who was asked by plenty of people to do plenty of things—thought highly enough of Putnam to agree.

How could this genius, this Einstein of the mind, just vanish into obscurity? When I asked why, if Putnam was so important, no one has ever heard of him, everyone gave me the same answer: because he didn’t publish his work, and even if he had, no one would have understood it.

“He spoke and wrote in ‘Putnamese,’ ” Fuller said. “If you can find his papers, I think you’ll immediately see what I mean.” (...)

Skimming through the papers I saw that the people I’d spoken to hadn’t been kidding about the Putnamese. “To bring the felt under mathematical categories involves building a type of mathematical framework within which latent colliding heuristics can be exhibited as of a common goal function,” I read, before dropping the paper with a sigh. Each one went on like that for hundreds of pages at a time, on none of which did he apparently bother to stop and explain what the whole thing was really about...

Putnam spent most of his time alone, Fuller had told me. “Because of this isolation, he developed a way of expressing himself in which he uses words, phrases, concepts, in weird ways, peculiar to himself. The thing would be totally incomprehensible to anyone.” (...)


Imagine a fisherman who’s exploring the life of the ocean. He casts his net into the water, scoops up a bunch of fish, inspects his catch and shouts, “A-ha! I have made two great scientific discoveries. First, there are no fish smaller than two inches. Second, all fish have gills.”

The fisherman’s first “discovery” is clearly an error. It’s not that there are no fish smaller than two inches, it’s that the holes in his net are two inches in diameter. But the second discovery seems to be genuine—a fact about the fish, not the net.

This was the Eddington allegory that obsessed Putnam.

When physicists study the world, how can they tell which of their findings are features of the world and which are features of their net? How do we, as observers, disentangle the subjective aspects of our minds from the objective facts of the universe? Eddington suspected that one couldn’t know anything about the fish until one knew the structure of the net.

That’s what Putnam set out to do: come up with a description of the net, a model of “the structure of thought,” as he put it in a 1948 diary entry.

At the time, scientists were abuzz with a new way of thinking about thinking. Alan Turing had worked out an abstract model of computation, which quickly led not only to the invention of physical computers but also to the idea that perhaps the brain, too, was a kind of Turing machine.

Putnam disagreed. “Man is a species of computer of fundamentally different genus than those she builds,” he wrote. It was a radical claim (not only for the mixed genders): He wasn’t saying that the mind isn’t a computer, he was saying it was an entirely different kind of computer.

A universal Turing machine is a powerful thing, capable of computing anything that can be computed by an algorithm. But Putnam saw that it had its limitations. A Turing machine, by design, performs deductive logic—logic where the answers to a problem are contained in its premises, where the rules of inference are pregiven, and information is never created, only shuffled around. Induction, on the other hand, is the process by which we come up with the premises and rules in the first place. “Could there be some indirect way to model or orient the induction process, as we do deductions?” Putnam asked.

Putnam laid out the dynamics of what he called a universal “general purpose heuristic”—which we might call an “induction machine,” or more to the point, a mind—borrowing from the mathematics of game theory, which was thick in the air at Princeton. His induction “game” was simple enough. He imagined a system (immersed in an environment) that could make one mutually exclusive “move” at a time. The system is composed of a massive number of units, each of which can switch between one of two states. They all act in parallel, switching, say, “on” and “off” in response to one another. Putnam imagined that these binary units could condition one another’s behavior, so if one caused another to turn on (or off) in the past, it would become more likely to do so in the future. To play the game, the rule is this: The first chain of binary units, linked together by conditioned reflexes, to form a self-reinforcing loop emits a move on behalf of the system.

Every game needs a goal. In a Turing machine, goals are imposed from the outside. For true induction, the process itself should create its own goals. And there was a key constraint: Putnam realized that the dynamics he had in mind would only work mathematically if the system had just one goal governing all its behavior.

That’s when it hit him: The goal is to repeat. Repetition isn’t a goal that has to be programmed in from the outside; it’s baked into the very nature of things—to exist from one moment to the next is to repeat your existence. “This goal function,” Putnam wrote, “appears pre-encoded in the nature of being itself.”

So, here’s the game. The system starts out in a random mix of “on” and “off” states. Its goal is to repeat that state—to stay the same. But in each turn, a perturbation from the environment moves through the system, flipping states, and the system has to emit the right sequence of moves (by forming the right self-reinforcing loops) to alter the environment in such a way that it will perturb the system back to its original state.

Putnam’s remarkable claim was that simply by playing this game, the system will learn; its sequences of moves will become increasingly less random. It will create rules for how to behave in a given situation, then automatically root out logical contradictions among those rules, resolving them into better ones. And here’s the weird thing: It’s a game that can never be won. The system never exactly repeats. But in trying to, it does something better. It adapts. It innovates. It performs induction.
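A minimal toy sketch in Python can make that claim concrete. To be clear, this is an invented illustration rather than Putnam's own formalism: the number of units, the fixed move repertoire, and the simple weight-bumping reinforcement rule are all assumptions chosen only to show how reinforced loops can make a system's moves less random over time.

```python
# Toy sketch of a "repetition game": a system of binary units tries to return
# to its previous state after each perturbation, and reinforces whichever move
# happened to restore it. Everything here (unit count, move repertoire, the
# reinforcement rule) is a stand-in invented for illustration, not Putnam's
# actual mathematics.
import random

N_UNITS = 16    # parallel on/off units making up the system
N_MOVES = 8     # mutually exclusive "moves" the system can emit
EPISODES = 2000

random.seed(0)

# Each kind of environmental perturbation flips one fixed set of units;
# exactly one move undoes each kind, but the system does not know which.
perturbations = [random.sample(range(N_UNITS), 3) for _ in range(N_MOVES)]
undo_move_for = list(range(N_MOVES))  # move i happens to undo perturbation i

# "Conditioned reflex" strengths: how strongly each kind of perturbation has
# become linked to each move. Uniform at first, i.e. random thrashing.
weights = [[1.0] * N_MOVES for _ in range(N_MOVES)]

def flip(state, units):
    for u in units:
        state[u] ^= 1

state = [random.randint(0, 1) for _ in range(N_UNITS)]
successes = []

for episode in range(EPISODES):
    goal = state[:]                        # the state the system "wants" to repeat
    kind = random.randrange(N_MOVES)       # the environment perturbs the system
    flip(state, perturbations[kind])

    # Pick a move, biased by past success for this kind of perturbation.
    move = random.choices(range(N_MOVES), weights=weights[kind])[0]
    if move == undo_move_for[kind]:        # the move alters the environment so that
        flip(state, perturbations[kind])   # it perturbs the system back again

    if state == goal:
        weights[kind][move] += 1.0         # the successful loop gets wired in
        successes.append(1)
    else:
        state = goal[:]                    # reset so episodes stay comparable
        successes.append(0)

print("success rate, first 200 episodes:", sum(successes[:200]) / 200)
print("success rate, last 200 episodes: ", sum(successes[-200:]) / 200)
```

Run as written, the early success rate hovers near chance (about one in eight), while the late rate sits far above it: a miniature, heavily simplified version of "its sequences of moves will become increasingly less random."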

In paper after paper, Putnam attempted to show how his induction game plays out in the human brain, with motor behaviors serving as the mutually exclusive “moves” and neurons as the parallel binary units that link up into loops to move the body. The point wasn’t to give a realistic picture of how a messy, anatomical brain works any more than an abstract Turing machine describes the workings of an iMac. It was not a biochemical description, but a logical one—a “brain calculus,” Putnam called it.

As the game is played, perturbations from outside—photons hitting the retina, hunger signals rising from the gut—require the brain to emit the right sequence of movements to return to its prior state. At first it has no idea what to do—each disturbance is a neural impulse moving through the brain in search of a pathway out, and it will take the first loop it can find. That’s why a newborn’s movements start out as random thrashes. But when those movements don’t satisfy the goal, the disturbance builds and spreads through the brain, feeling for new pathways, trying loop after loop, thrash after thrash, until it hits on one that does the trick.

When a successful move, discovered by sheer accident, quiets a perturbation, it gets wired into the brain as a behavioral rule. Once formed, applying the rule is a matter of deduction: The brain outputs the right move without having to try all the wrong ones first.

But the real magic happens when a contradiction arises, when two previously successful rules, called up in parallel, compete to move the body in mutually exclusive ways. A hungry baby, needing to find its mother’s breast, simultaneously fires up two loops, conditioned in from its history: “when hungry, turn to the left” and “when hungry, turn to the right.” Deductive logic grinds to a halt; the facilitation of either loop, neurally speaking, inhibits the other. Their horns lock. The neural activity has no viable pathway out. The brain can’t follow through with a wired-in plan—it has to create a new one.

How? By bringing in new variables that reshape the original loops into a new pathway, one that doesn’t negate either of the original rules, but clarifies which to use when. As the baby grows hungrier, activity spreads through the brain, searching its history for anything that can break the tie. If it can’t find it in the brain, it will automatically search the environment, thrash by thrash. The mathematics of game theory, Putnam said, guarantee that, since the original rules were in service of one and the same goal, an answer, logically speaking, can always be found.

In this case, the baby’s brain finds a key variable: When “turn left” worked, the neural signal created by the warmth of the mother’s breast against the baby’s left cheek got wired in with the behavior. When “turn right” worked, the right cheek was warm. That extra bit of sensory signal is enough to tip the scales. The brain has forged a new loop, a more general rule: “When hungry, turn in the direction of the warmer cheek.”
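A second toy sketch, again my own illustration and not the article's notation, shows the shape of that resolution: two rules keyed only on "hungry" deadlock, and conditioning each on the extra variable that accompanied its past successes yields a finer rule that breaks the tie without negating either original.

```python
# Toy illustration (invented for this post, not Putnam's brain calculus) of
# resolving a contradiction between two conditioned rules by wiring in an
# auxiliary sensory variable.

def fired_actions(cues, rules):
    """Return the action of every rule whose condition set is satisfied by the cues."""
    return [action for condition, action in rules if condition <= cues]

# Two previously successful reflexes, each keyed only on hunger.
rules = [({"hungry"}, "turn left"),
         ({"hungry"}, "turn right")]

cues = {"hungry", "warm left cheek"}
print(fired_actions(cues, rules))
# ['turn left', 'turn right']  -> mutually exclusive moves; the loops lock horns.

# Resolution: fold in the sensory variable that was present whenever each rule
# worked, producing the more general "turn toward the warmer cheek" without
# negating either original rule.
rules = [({"hungry", "warm left cheek"}, "turn left"),
         ({"hungry", "warm right cheek"}, "turn right")]

print(fired_actions(cues, rules))
# ['turn left']  -> the auxiliary variable breaks the tie.
```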

New universals lead to new motor sequences, which allow new interactions with the world, which dredge up new contradictions, which force new resolutions, and so on up the ladder of ever-more intelligent behavior. “This constitutes a theory of the induction process,” Putnam wrote.

In notebooks, in secret, using language only he would understand, Putnam mapped out the dynamics of a system that could perceive, learn, think, and create ideas through induction—a computer that could program itself, then find contradictions among its programs and wrangle them into better programs, building itself out of its history of interactions with the world. Just as Turing had worked out an abstract, universal model of the very possibility of computation, Putnam worked out an abstract, universal model of the very possibility of mind. It was a model, he wrote, that “presents a basic overall pattern [or] character of thought in causal terms for the first time.”

Putnam had said you can’t understand another person until you know what fight they’re in, what contradiction they’re working through. I saw before me two stories, equally true: Putnam was a genius who worked out a new logic of the mind. And Putnam was a janitor who died unknown. The only way to resolve a contradiction, he said, is to find the auxiliary variables that forge a pathway to a larger story, one that includes and clarifies both truths. The variables for this contradiction? Putnam’s mother and money.

by Amanda Gefter, Nautilus |  Read more:
Image: John Archibald Wheeler, courtesy of Alison Lahnston.
[ed. Fascinating. Sounds like part quantum physics and part AI. But it's beyond me.]

Wednesday, December 17, 2025

'Atmospheric Rivers' Flood Western Washington; Blizzard Follows


WA floods hit many uninsured small farms with ‘varied’ damages (Seattle Times)

Over the past few days, farm owners and operators across Western Washington have been returning to their businesses after heavy flooding, which began with last week’s downpour, turned massive swaths of low-lying land into deep basins of water.

Farms up and down the I-5 corridor sustained losses, though for most of them, it’s too early to accurately account for damage. Some are still unable to reach their farms due to high water levels and road closures. Many don’t have insurance and those who do have it aren’t sure what it will cover. And the National Weather Service has forecast more minor to moderate flooding in the region through Friday.

Hundreds of thousands out of power in WA; blizzard warning continues (Seattle Times)

A storm brought high winds and heavy rain to Western Washington overnight into Wednesday, leaving more than 200,000 customers in the dark after days of flooding.

Wind speeds reached the 50s and 60s in miles per hour in Seattle and surrounding areas early Wednesday: In the Alpental Ski Area, 112 mph gusts were recorded around 2 a.m., and Snoqualmie Pass saw 82 mph wind speeds.



Even after the rain ends and waters recede, after workers remove trees and clean up landslides, after engineers finally get a good look at the damage to the region’s roads and bridges, Washington state’s transportation system faces a long, expensive and daunting road to recovery following this month’s devastating weather.

Yet an even more elusive — and immediate — task is determining when traffic will flow again on roads like Highway 2, where Tuesday’s news that a 50-mile stretch will be closed for months forced grim questions about the expense of repairing ravaged roads and the immediate economic future of communities in the Cascades.

Images: Brian Marchello/King County Sheriff's Office/Erika Schultz
[ed. One-two punch.]