Thursday, July 24, 2025

Of Mice, Mechanisms, and Dementia

The scientific paper is a “fraud” that creates “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.”
This critique comes not from a conspiracist on the margins of science, but from Nobel laureate Sir Peter Medawar. A brilliant experimentalist whose work on immune tolerance laid the foundation for modern organ transplantation, Sir Peter understood both the power and the limitations of scientific communication.

Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and most of the time, confirmed their hypothesis.

Real lab work rarely follows such a clear path. Biological research is filled with what Medawar lovingly described as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did.

The polish serves a purpose. It makes complex work accessible (at least if you work in the same or a similar field!). It allows researchers to build upon new findings.

But the contrived omissions can also play upon even the most well-regarded scientist’s susceptibility to the seduction of story. As Christophe Bernard, Director of Research at the Institute of Systems Neuroscience (Marseille, France), recently explained,
“when we are reading a paper, we tend to follow the reasoning and logic of the authors, and if the argumentation is nicely laid out, it is difficult to pause, take a step back, and try to get an overall picture.”
Our minds travel the narrative path laid out for us, making it harder to spot potential flaws in logic or alternative interpretations of the data, and making conclusions feel far more definitive than they often are.

Medawar’s framing is my compass when I do deep dives into major discoveries in translational neuroscience. I approach papers with a dual vision. First, what is actually presented? But second, and often more importantly, what is not shown? How was the work likely done in reality? What alternatives were tried but not reported? What assumptions guided the experimental design? What other interpretations might fit the data if the results are not as convincing or cohesive as argued?

And what are the consequences for scientific progress?

In the case of Alzheimer’s research, they appear to be stark: thirty years of prioritizing an incomplete model of the disease’s causes; billions of corporate, government, and foundation dollars spent pursuing a narrow path to drug development; the relative exclusion of alternative hypotheses from funding opportunities and attention; and little progress toward disease-modifying treatments or a cure.

The incomplete Alzheimer’s model I’m referring to is the amyloid cascade hypothesis, which proposes that Alzheimer’s is the outcome of protein processing gone awry in the brain, leading to the production of plaques that trigger a cascade of other pathological changes, ultimately causing the cognitive decline we recognize as the disease. Amyloid work continues to dominate the research and drug development landscape, giving the hypothesis the aura of settled fact.

However, cracks are showing in this façade. In 2021, the FDA granted accelerated approval to aducanumab (Aduhelm), an anti-amyloid drug developed by Biogen, despite scant evidence that it meaningfully altered the course of cognitive decline. The decision to approve, made over near-unanimous opposition from the agency’s advisory panel, exposed growing tensions between regulatory optimism and scientific rigor. Medicare’s subsequent decision to restrict coverage to clinical trials, and Biogen’s quiet withdrawal of the drug from broader marketing efforts in 2024, made the disconnect impossible to ignore.

Meanwhile, a deeper fissure emerged: an investigation by Science unearthed evidence of data fabrication in research on Aβ*56, a purported toxic amyloid-beta oligomer once hailed as a breakthrough target for disease-modifying therapy. Research results that had been seen as a promising pivot in the evolution of the amyloid cascade hypothesis, a new hope for rescuing the theory after repeated clinical failures, now appear to have been largely a sham. Treating Alzheimer’s by targeting amyloid plaques may have been a dead end from the start.

When the cracks run that deep, it’s worth going back to the origin story—a landmark 1995 paper by Games et al., featured on the cover of Nature under the headline “A mouse model for Alzheimer’s.” It announced what was hailed as a breakthrough: the first genetically engineered mouse designed to mimic key features of the disease.

In what follows, I argue that the seeds of today’s failures were visible from the beginning if one looks carefully. I approach this review not as an Alzheimer’s researcher with a rival theory, but as a molecular neuroscientist interested in how fields sometimes converge around alluring but unstable ideas. Foundational papers deserve special scrutiny because they become the bedrock for decades of research. When that bedrock slips beneath us, it tells a cautionary story: about the power of narrative, the comfort of consensus, and the dangers of devotion without durable evidence. It also reminds us that while science is ultimately self-correcting, correction can be glacial when careers and reputations are staked on fragile ground.

The Rise of the Amyloid Hypothesis

In the early 1990s, a new idea began to dominate Alzheimer’s research: the amyloid cascade hypothesis.

First proposed by Hardy and Higgins in a 1992 Science perspective, the hypothesis suggested a clear sequence of disease-precipitating events: protein processing goes awry in the brain → beta-amyloid (Aβ) accumulates → plaques form → plaques trigger a cascade of downstream events (neurofibrillary tangles, inflammation, synaptic loss, neuronal death) → observable cognitive decline.

The hypothesis was compelling for several reasons. First, the discovery of the enzymatic steps by which amyloid precursor protein (APP) is processed into Aβ offered multiple potential intervention points—ideal for pharmaceutical drug development.

Second, the hypothesis was backed by powerful genetic evidence. Mutations in the APP gene on chromosome 21 were associated with early-onset Alzheimer’s. The case grew stronger with the observation that more than 50% of individuals with Down syndrome, who carry an extra copy of chromosome 21 (and thus extra APP), develop Alzheimer’s-like pathology by age 40.

Thus, like any robust causal theory, the amyloid cascade hypothesis offered explicit, testable predictions. As Hardy and Higgins outlined, if amyloid truly initiates the Alzheimer’s cascade, then genetically engineering mice to produce human amyloid should trigger the full sequence of events: plaques first, then tangles, synapse loss, and neuronal death, then cognitive decline. And the sequentiality matters: amyloid accumulation should precede other pathological features. At the time, this was a thrilling possibility.
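To make the shape of that prediction concrete, here is a toy sketch in Python. Everything in it is hypothetical (the onset ages, the feature names, and the cohort are all invented for illustration), but it captures the falsifiable claim: in a mouse engineered to produce human amyloid, plaques should appear first and every downstream feature should follow in order.

```python
# Toy illustration of the amyloid cascade's ordering prediction.
# All names and numbers are hypothetical; real studies measure these
# endpoints with histology and behavioral testing, not a dict.

# Age (in months) at which each pathological feature first appears
# in an imaginary transgenic mouse cohort.
onset_age_months = {
    "amyloid_plaques": 6,
    "neurofibrillary_tangles": 10,
    "synapse_loss": 12,
    "neuronal_death": 15,
    "cognitive_decline": 18,
}

# The cascade predicts this strict order of first appearance.
predicted_order = [
    "amyloid_plaques",
    "neurofibrillary_tangles",
    "synapse_loss",
    "neuronal_death",
    "cognitive_decline",
]

def cascade_order_holds(onsets, order):
    """Return True if each feature appears no later than the next one."""
    ages = [onsets[feature] for feature in order]
    return all(a <= b for a, b in zip(ages, ages[1:]))

print(cascade_order_holds(onset_age_months, predicted_order))  # True
```

The point is not the code but the shape of the claim: a feature appearing before plaques, or plaques with no downstream pathology at all, would count as evidence against the cascade.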

Pharmaceutical companies were especially eager: if the hypothesis proved correct, stopping amyloid should stop the disease. The field awaited the first transgenic mouse studies with enormous anticipation.

How—with Unlimited Time and Money and a Little Scientific Despair—to Make a Transgenic Mouse

“Mouse Model Made” was the boastful headline of the independent, introductory commentary Nature solicited to accompany the 1995 Games paper’s unveiling of the first transgenic mouse set to “answer the needs” of Alzheimer’s research. The scientific argument over whether amyloid caused Alzheimer’s had, the commentary declared, been “settle[d]” by the Games paper, “perhaps for good.”

In some ways, the commentary’s bravado seemed warranted. Why? Because in the mid-’90s, creating a transgenic mouse was a multi-stage, treacherous gauntlet of molecular biology. Every step carried an uncomfortably high chance of failure. If this mouse, developed by Athena Neurosciences (a small Bay Area pharmaceutical company), was valid, it was an extraordinary technical achievement portending a revolution in Alzheimer’s care.

First Rule of Making a Transgenic Mouse: Don’t Talk About How You Made a Transgenic Mouse

How did Athena pull it off? Hard to say! What’s most remarkable about the Games paper is what’s not there. Scan through the methods section and you’ll find virtually none of the painstaking effort required to build the Alzheimer’s mouse. Back in the ’90s, creating a transgenic mouse took years of work, countless failed attempts, and extraordinary technical skill. In the Games paper, this effort is compressed into a few sparse sentences describing which gene and promoter (the nearby regulatory sequence that switches a gene on) the research team used to make the mouse. The actual details are relegated to scientific meta-narrative—knowledge that exists only in lab notebooks, daily conversations between scientists, and the muscle memory of researchers who have performed these techniques thousands of times.
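For a sense of just how compressed that is, here is roughly all the construct information those sentences convey, rendered as a toy Python structure. The field names are mine, and the specific values reflect my recollection of the PDAPP construct (a PDGF-β promoter driving a mutant human APP minigene), so treat them as illustration rather than citation:

```python
# Approximately everything the methods section tells you about the
# transgene, as a toy data structure. Field names are mine; values are
# my recollection of the PDAPP construct, offered as illustration.
transgene = {
    "promoter": "PDGF-beta",         # biases expression toward neurons
    "insert": "human APP minigene",  # the amyloid precursor protein gene
    "mutation": "V717F",             # familial early-onset AD variant
}

# Everything else -- construct assembly, pronuclear injection,
# screening of founder pups -- is waved away as "standard procedures."
```

Years of failed injections hide behind those three key-value pairs.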

The thin description wasn’t atypical for a publication from this era. Difficult experimental methods were often encapsulated in the single phrase “steps were carried out according to standard procedures,” with citations to entire books on sub-cloning techniques or a reference to the venerable Manipulating the Mouse Embryo: A Laboratory Manual. (We all have this on our bookshelf, yes?) The idea that there were reliable “standard procedures” that could ensure success was farcical—an understatement that other scientists understand as code for “we spent years getting this to work; good luck figuring it out ;).”

So, as an appreciation of what it takes to make progress on the frontiers of science, here is approximately what’s involved.

Prerequisites: Dexterity, Glassblowing, and Zen Mastery

Do you have what it takes to master transgenic mouse creation? Well, do you have the dexterity of a neurosurgeon? Because you’ll be micro-manipulating fragile embryos with the care of someone defusing a bomb—except the bomb is smaller than a grain of sand, and you need to keep it alive. Have you trained in glassblowing? Hope so, because you’ll need to handcraft your own micropipettes so you can balance an embryo on the pipette tip. Yes, really.

And most importantly, do you sincerely believe that outcomes are irrelevant, and only the endless, repetitive journey matters? If so, congratulations! You may already be a Zen master, which will come in handy when you’re objectively failing your boss’s expectations every single day for what feels like an eternity. Success, when it finally comes, will be indistinguishable from sheer, dumb luck, but the randomness won’t stop you from searching frantically through your copious notes to see if you can pinpoint the variable that finally made it work!

Let’s go a little deeper so we can understand why the Games team’s achievement was considered so monumental—and why almost everyone viewed the results in the best possible light.

by Anonymous, Astral Codex Ten