
Wednesday, August 20, 2025

The Gospel According to South Park

Somehow, five years have passed since the COVID summer of 2020. My son had just “finished” fourth grade. His mother and I were distracted parents of him and his seven-year-old sister, both of us teetering from cabin fever. It felt like we were hanging on to our sanity, and our marriage, by a thread.

We held on to both, thankfully. Our kids seem to have recovered, too. But by this time that summer, it’s fair to say we had completely “lost contain” of our children. Even under normal conditions, we’ve favored a loose-reins approach to parenting, with a healthy dose of Lenore Skenazy-style “Free Range Parenting.” But that summer? I gave up entirely. I let my son watch TV. A lot of TV.

By the time school resumed, he had watched every episode of The Simpsons and every episode of South Park.

At the time, I felt more than a little guilty about letting a 10-year-old binge-watch two decades of South Park. It was a bit early, I thought, for him to be learning proper condom application techniques from Mr. Garrison. When I told friends later, the story always got a laugh – a kind of comic confession from a parent who’d fallen asleep at the wheel.

But as my son made his way through middle school and into high school, something changed. One night over dinner, we were talking about wars when I mentioned Saddam Hussein. My son chimed in casually – he knew exactly who Saddam was. I asked him how. His answer: “South Park.”

That kept happening. From Michael Jackson and Neverland Ranch, to Mormonism, to the NSA, to wokeism … my son was not only familiar with these topics, he was informed, funny, and incisively skeptical. I realized that this crash course from Butters and Cartman and Mr. Mackey had functioned like one of those downloads Neo gets in The Matrix; except that instead of instantly learning martial arts, my son had instantly become culturally literate. And, just as important, that literacy came wrapped in a sense of humor rooted in satire, absurdity, and a deep mistrust of power, regardless of party affiliation.

He jokes about Joe Biden’s senility and Trump’s grifting grossness. He refers to COVID-era masking as “chin diapers,” a phrase South Park coined while many adults were still double-masking alone in their cars. It struck me: my greatest parenting lapse had somehow turned into one of my best decisions.

Of course, it’s not just that South Park is anti-authority and unapologetically crude. So was Beavis and Butthead. The difference is that South Park is crafted. It endures not just because of what it says, but how it’s made – with discipline, speed, and storytelling intelligence.

South Park co-creators Trey Parker and Matt Stone are master storytellers. In a short video that should be required viewing for anyone who writes, they explain that if the beats, or scenes, of your story are best linked by the phrase “and then,” you’re doing it wrong. Instead, each scene should be connected by “therefore” or “but.” It’s deceptively simple, and it’s the single best explanation of narrative momentum I’ve ever seen. (Watch it here.)

Combine that storytelling mastery with a relentless work ethic that has allowed them to churn out weekly takes on almost every major current event of the last three decades, and you get the South Park that we know and (that most of us) love today. A generational institution that’s still funny.

by Jeremy Radcliffe, Epsilon Theory | Read more:
Image: South Park
[ed. Smart moronic vs dumb moronic. People are probably just grateful for any kind of resistance these days.]

Monday, August 11, 2025

Lore of the World: Field Notes for a Child's Codex: Part 2

When you become a new parent, you must re-explain the world, and therefore see it afresh yourself.

A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with lore and histories and intricate maps.

Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.

Below are a few of the thousands of entries they must make.


Walmart

Walmart was, growing up, where I didn’t want to be. Whatever life had in store for me, I wanted it to be the opposite of Walmart. Let’s not dissemble: Walmart is, canonically, “lower class.” And so I saw, in Walmart, one possible future for myself. I wanted desperately to not be lower class, to not have to attend boring public school, to get out of my small town. My nightmare was ending up working at a place like Walmart (my father ended up at a similar big-box store). It seemed to me, at least back then, that all of human misery was compressed in that store; not just in the crassness of its capitalistic machinations, but in the very people who shop there. Inevitably, among the aisles some figure would be hunched over in horrific ailment, and I, playing the role of a young Siddhartha seeing the sick and dying for the first time, would recoil and flee to the parking lot in a wave of overwhelming pity. But it was a self-righteous pity, in the end. A pity almost cruel. I would leave Walmart wondering: Why is everyone living their lives half-awake? Why am I the only one who wants something more? Who sees suffering clearly?

Teenagers are funny.

Now, as a new parent, Walmart is a cathedral. It has high ceilings, lots to look at, is always open, and is cheap. Lightsabers (or “laser swords,” for copyright purposes) are stuffed in boxes for the taking. Pick out a blue one, a green one, a red one. We’ll turn off the lights at home and battle in the dark. And the overall shopping experience of Walmart is undeniably kid-friendly. You can run down the aisles. You can sway in the cart. Stakes are low at Walmart. Everyone says hi to you and your sister. They smile at you. They interact. While sometimes patrons and even employees may appear, well, somewhat strange, even bearing the cross of visible ailments, they are scary and friendly. If I visit Walmart now, I leave wondering why this is. Because in comparison, I’ve noticed that at stores more canonically “upper class,” you kids turn invisible. No one laughs at your antics. No one shouts hello. No one talks to you, or asks you questions. At Whole Foods, people don’t notice you. At Stop & Shop, they do. Your visibility, it appears, is inversely proportional to the price tags on the clothes worn around you. Which, by the logical force of modus ponens, means you are most visible at, your very existence most registered at, of all places, Walmart.

Cicadas

The surprise of this summer has been learning we share our property with what biologists call Cicada Brood XIV, who burst forth en masse every 17 years to swarm Cape Cod. Nowhere else in the world do members of this “Bourbon Brood” exist, with their long black bodies and cartoonishly red eyes. Only here, in the eastern half of the US. Writing these words, I can hear their dull and ceaseless motorcycle whine in the woods.

The neighbors we never knew we had: the first 17 years of a cicada’s life are spent underground as a colorless nymph, suckling nutrients from the roots of trees. These vampires (since they live on sap, vampires is what they are, at least to plants) are among the longest living insects. Luckily, they do not bite or sting, and carry no communicable diseases. It’s all sheer biomass. In a fit of paradoxical vitality, they’ve dug up from underneath, like sappers invading a castle, leaving behind coin-sized holes in the ground. If you put a stick in one of these coin slots, it will be swallowed, and its disappearance is accompanied by a dizzying sense that even a humble yard can contain foreign worlds untouched by human hands.

After digging out of their grave, where they live, to reach the world above, where they die, cicadas next molt, then spend a while adjusting to their new winged bodies before taking to the woods to mate. Unfortunately, our house is in the woods. Nor is there escape elsewhere—drive anywhere and cicadas hit your windshield, sometimes rapid-fire; never smearing, they instead careen off almost politely, like an aerial game of bumper cars.

We just have to make it a few more weeks. After the adults lay their eggs on the boughs of trees (clusters so vast they can break the branches), the newly hatched nymphs drop, squirm into the dirt, and the 17-year cycle repeats. But right now the saga’s ending seems far away, as their molted carapaces cling by the dozens to our plants and window frames and shed, like hollow miniatures. Even discarded, they grip.

“It’s like leaving behind their clothes,” I tell your sister.

“Their clothes,” she says, in her tiny pipsqueak voice.

We observe the cicadas in the yard. They do not do much. They hang, rest, wait. They offer no resistance to being swept away by broom or shoe tip. Even their flights are lazy and ponderous and unskilled. And ultimately, this is what is eerie about cicadas. Yes, they represent the pullulating irrepressible life force, but you can barely call any individual alive. They are life removed from consciousness. Much like a patient for whom irreparable brain damage has left only a cauliflower of functional gray matter, they are here, but not here. Other bugs will avoid humans, or even just collisions with inanimate objects. Not the cicada. Their stupidity makes their existence even more a nightmare for your mother, who goes armed into the yard with a yellow flyswatter. She knows they cannot hurt her, but has a phobia of moths, due to their mindless flight. Cicadas are even worse in that regard. Much bigger, too. She tries, mightily, to not pass down her phobia. She forces herself to walk slowly, gritting her teeth. Or, on seeing one sunning on the arm of her lawn chair, she pretends there is something urgent needed inside. But I see her through the window, and when alone, she dashes. She dashes to the car or to the shed, and she dashes onto the porch to get an errant toy, waving about her head that yellow flyswatter, eyes squinted so she can’t see the horrors around her.

I, meanwhile, am working on desensitization. Especially with your sister, who has, with the mind-reading abilities she’s renowned for, picked up that something fishy is going on, and screeches when a cicada comes too near. I sense, though, she enjoys the thrill.

“Hello Cicadaaaaaasss!” I get her to croon with me. She waves at their zombie eyes. When she goes inside, shutting the screen door behind her, she says an unreturned goodbye to them.

Despite its idiocy, the cicada possesses a strange mathematical intelligence. Why 17-year cycles? Because 17 is prime. Divisible by no other cycle, it ensures no predator can track them generation to generation. Their evolutionary strategy is to overwhelm, unexpectedly, in a surprise attack. And this gambit of “You can’t eat us all!” is clearly working. The birds here are becoming comically fat, with potbellies; in their lucky bounty, they’ve developed into gourmands who only eat the heads.

Individual cicadas are too dumb to have developed such a smart tactic, so it is evolution who is the mathematician here. But unlike we humans, who can manipulate numbers abstractly, without mortal danger, evolution must always add, subtract, multiply, and divide, solely with lives. Cicadas en masse are a type of bio-numeracy, and each brood is collectively a Sieve of Eratosthenes, sacrificing trillions to arrive at an agreed-upon prime number. In this, the cicada may be, as far as we know, the most horrific way to do math in the entire universe.

Being an embodied temporal calculation, the cicada invasion has forced upon us a new awareness of time itself. I have found your mother crying from this. She says every day now she thinks about the inherent question they pose: What will our lives be like, when the cicadas return?

Against our will the Bourbon Brood has scheduled something in our calendar, 17 years out, shifting the future from abstract to concrete. When the cicadas return, you will be turning 21. Your sister, 19. Myself, already 55. Your mother, 54. Your grandparents will, very possibly, all be dead. This phase of life will have finished. And to mark its end, the cicadas will crawl up through the dirt, triumphant in their true ownership, and the empty nest of our home will buzz again with these long-living, subterranean-dwelling, prime-calculating, calendar-setting, goddamn vampires.

Stubbornness

God, you’re stubborn. You are so stubborn. Stubborn about which water bottle to drink from, stubborn about doing all the fairground rides twice, stubborn about going up slides before going down them, pushing buttons on elevators, being the first to go upstairs, deciding what snack to eat, wearing long-sleeved shirts in summer, wanting to hold hands, wanting not to hold hands; in general, you’re stubborn about all events, and especially about what order they should happen in. You’re stubborn about doing things beyond your ability, only to get angry when you inevitably fail. You’re stubborn in wanting the laws of physics to work the way you personally think they should. You’re stubborn in how much you love, in how determined and fierce your attachment can be.

This is true of many young children, of course, but you seem an archetypal expression of it. Even your losing battles are rarely true losses. You propose some compromise where you can snatch, from the jaws of defeat, a sliver of a draw. Arguments with you are like trading rhetorical pieces in a chess match. While you can eventually accept wearing rain boots because it’s pouring out, that acceptance hinges on putting them on in the most inconvenient spot imaginable.

So when I get frustrated—and yes, I do get frustrated—I remind myself that “stubborn” is a synonym for “willful.” Whatever human will is, you possess it in spades. You want the world to be a certain way, and you’ll do everything in your power to make it so. Luckily, most of your designs are a kind of benevolent dictatorship. And at root, I believe your willfulness comes from loving the world so much, and wanting to, like all creatures vital with life force, act in it, and so bend it to your purposes.

What I don’t think is that this willfulness exists because we, as parents, are especially lenient. Because we’re not. No, your stubbornness has felt baked in from the beginning.

This might be impossible to explain to you now, in all its details, but in the future you’ll be ready to understand that I really do mean “the beginning.” As in the literal moment of conception. Or the moment before the moment, when you were still split into halves: egg and sperm. There is much prudery around the topic, as you’ll learn, and because of its secrecy people conceptualize the entire process as fundamentally simple, like this: Egg exists (fanning itself coquettishly). Sperm swims hard (muscular and sweaty). Sperm reaches egg. Penetrates and is enveloped. The end. But this is a radical simplification of the true biology, which, like all biology, is actually about selection.

Selection is omnipresent, occurring across scales and systems. For example, the elegance of your DNA is because so many variants of individuals were generated, and of these, only some small number proved fit in the environment (your ancestors). The rest were winnowed away by natural selection. So too, at another scale, your body’s immune system internally works via what’s called “clonal selection.” Many different immune cells with all sorts of configurations are generated at low numbers, waiting as a pool of variability in your bloodstream. In the presence of an invading pathogen, the few immune cells that match (bind to) the pathogen are selected to be cloned in vast numbers, creating an army. And, at another scale and in a different way, human conception works via selection too. Even though scientists understand less about how conception selection works (these remain mysterious and primal things), the evidence indicates the process is full of it.

First, from the perspective of the sperm, they are entered into a win-or-die race inside an acidic maze with three hundred million competitors. If the pH or mucus blockades don’t get them, the fallopian tubes are a labyrinth of currents stirred by cilia. It’s a mortal race in all ways, for the woman’s body has its own protectors: white blood cells, which register the sperm as foreign and other. Non-self. So they patrol and destroy them. Imagining this, I oscillate between the silly and the serious. I picture the white blood cells patrolling like stormtroopers, and meanwhile the sperm (wearing massive helmets) attempt to rush past them. But in reality, what is this like? Did that early half of you see, ahead, some pair of competing brothers getting horrifically eaten, and smartly go the other way? What does a sperm see, exactly? We know they can sense the environment, for of the hundreds of sperm who make it close enough to potentially fertilize the egg, all must enter into a kind of dance with it, responding to the egg’s guidance cues in the form of temperature and chemical gradients (the technical jargon is “sperm chemotaxis”). We know from experiments that eggs single out sperm non-randomly, attracting the ones they like most. But for what reasons, or based on what standards, we don’t know. Regardless of why, the egg zealously protects its choice. Once a particular sperm is allowed to penetrate its outer layer, the egg transforms into a literal battle station, blasting out zinc ions at any approaching runners-up to avoid double fertilization.

Then, on the other side, there’s selection too. For which egg? Women are born with about a million of what are called “follicles.” These follicles all grow candidate eggs, called “oocytes,” but, past puberty, only a single oocyte is released each month by the winning follicle to become the waiting egg. In this, the ovary itself is basically a combination of biobank and proving grounds. So the bank depletes over time. Menopause is, basically, when the supply has run out. But where do they all go? Most follicles die in an initial background winnowing, a first round of selection, wherein those not developing properly are destroyed. The majority perish there. Only the strongest and most functional go on to the next stage. Each month, around 20 of these follicles enter a tournament with their sisters to see which of them ovulates, and so releases the winning egg. This competition is enigmatic, and can only be described as a kind of hormonal growth war. The winner must mature faster, but also emit chemicals to suppress the others, starving them. The losers atrophy and die. No wonder it’s hard for siblings to always get along.

Things like this explain why, the older I get, the more I am attracted to one of the first philosophies, by Empedocles. All things are either Love or Strife. Or both.

From that ancient perspective, I can’t help but feel your stubbornness is why you’re here at all. That it’s an imprint left over, etched onto your cells. I suspect you won all those mortal races and competitions, succeeded through all that strife, simply because from the beginning, in some proto-way, you wanted to be here. Out of all that potentiality, willfulness made you a reality.

Can someone be so stubborn they create themselves?

by Erik Hoel, The Intrinsic Perspective |  Read more:
Image: Alexander Naughton
[ed. Lovely. I can see my granddaughter might already have my stubborn gene. Hope it does her more good!]

Thursday, August 7, 2025

Stop Explaining the Fish

This past weekend, I sat on the beach with my husband, sans kids. We have teens now, and our tween is away at camp, hence the kidless beach sitch.

I was lying back in my beach chair, sun on my face and warm breeze in my hair, but noticing the absence of a wiggling toddler in my lap, giving me damp, sandy kisses. I miss those days something awful. My eyes scanned the beach, admiring all the hard-working parents who were vigilantly standing at the water's edge, keeping their kids safe in the waves. I smiled in solidarity at the mom picking Cheetos out of the sand, brushing them off, and feeding them to her crying toddler.

But something felt off. I kept noticing how many parents were working so hard to get it right. Too hard. They were jumping in to help, redirecting, offering options, all with love and good intentions. But over and over, I kept seeing how trying to optimize every experience was actually making things worse for everyone.

Here’s what I mean.

Now, let me back up before I explain. I have been there, done that, in the best and worst ways. I absolutely over-optimized and burned myself out in the toddler years, especially with my oldest. But I am also an early childhood educator who believes that less is more when it comes to adult input in a child’s play. Over the course of 18 years of parenting, I learned how to step back, just enough to let my kids step forward.

Back to the beach:

There was a group of kids, probably between four and eight years old, marching around the beach playground like a little gang of pirates. They were sandy, loud, playful, and totally in it. Summer magic.

Two moms stood nearby, chatting. Everyone looked settled.

Then a third mom walked up with a baby on her hip and called out, “Seth, honey. Don’t you want to play by the water? Want a snack? Some water?”

Seth didn’t answer. He was deep in pirate mode. He barely looked up. But the other kids heard "snack," and the whole energy shifted.

Next thing I saw, she was passing out small bags of Goldfish, chips, and carrot sticks to a band of sticky open palms. The toddler on her hip was writhing, trying to get a carrot stick. The mom kept trying to give the toddler a sippy cup instead, overexplaining about choking, while simultaneously convincing a six-year-old to trade snacks with the crying four-year-old who was tackling his brother for the last bag of Doritos.

There was a moment of silence as everyone contentedly chewed, when out came the sunblock tube. “Let’s get sunblocked,” she said to Seth, who was now rummaging in the open cooler for a Capri Sun.

The mom then asked her partner, who had just settled the toddler onto the blanket with a board book and a paci, to grab water bottles. Seth kept digging. “I want a Capri Sun!” he whined.

Both mom and dad looked tense, and those magical moments of pirate play were long gone. And listen. No one meant to disrupt anything. But in the effort to enhance it, everything got pulled off course.

Toddler crying. Preschooler whining. Mom and dad irritated with one another.

Sound familiar? It does to me. I could have easily been this mom.

A mom who means well, wants to keep everyone safe, fed, hydrated, and on track. But somehow, it always seems to backfire.

Later on, I saw a boy around five watching a man fly a fish-shaped kite. He was mesmerized.

The man noticed and smiled. They shared a quiet moment, just standing there in mutual curiosity.

Then the boy’s dad came over and said, “Sammy, can you name that fish? From the movie? A clownfish. Can you say clownfish?”

The boy looked away. His interest dimmed. That quiet connection was replaced with a quiz.

Well-meaning Dad invited Sammy to go closer to the kite. He even offered to buy him one. He wanted to show him how to fly it. It was really nice, but it was too much.

This is what I want to say. You don’t have to do more. You don’t have to optimize every single moment or guide every step.

You don’t have to explain the fish. You can just watch, because watching is not lazy, and it is not missing an opportunity.

It is choosing not to interrupt one.

by The Workspace For Children |  Read more:
Image: uncredited
[ed. Simple, yes? How many times have you started sharing something interesting with someone, only to have them immediately jump in and start telling you something about themselves. It happens more often than you think. Wonder how much we (they) miss in life that way.]

Tuesday, August 5, 2025

Scientific Fraud Has Become an Industry

For years, sleuths who study scientific fraud have been sounding the alarm about the sheer size and sophistication of the industry that churns out fake publications. Now, an extensive investigation finds evidence of a range of bad actors profiting from fraud. The study, based on an analysis of thousands of publications and their authors and editors, shows paper mills are just part of a complex, interconnected system that includes publishers, journals, and brokers.

The paper, published today in the Proceedings of the National Academy of Sciences, paints an alarming picture. Northwestern University metascientist Reese Richardson and his colleagues identify networks of editors and authors colluding to publish shoddy or fraudulent papers, report that large organizations are placing batches of fake papers in journals, suggest brokers may serve as intermediaries between paper mills and intercepted journals, and find that the number of fake papers—though still relatively small—seems to be increasing at a rate far greater than the scientific literature generally.

The paper shows that misconduct “has become an industry,” says Anna Abalkina of the Free University of Berlin, who studies corruption in science and was not involved with the research. Richardson and colleagues hope their sweeping case will attract attention and spur change.

They began their analysis by pinpointing corrupt editors. They focused their investigation on PLOS ONE, because the megajournal allows easy access to bulk metadata and publishes the names of the editors who have handled the thousands of papers it publishes each year, making it possible to detect anomalies without behind-the-scenes information. The researchers identified all the papers from the journal that had been retracted or received comments on PubPeer—a website that allows researchers to critique published work—and then identified each paper’s editors.

All told, 33 editors stood out as more frequently handling work that was later retracted or criticized than would be expected by chance. “Some of these were immense outliers,” Richardson says. For instance, of the 79 papers that one editor had handled at PLOS ONE, 49 have been retracted. Flagged editors handled 1.3% of papers published in the journal by 2024, but nearly one-third of all retracted papers.
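
To make that outlier logic concrete, here is a minimal sketch of how such editors might be flagged. It is purely illustrative: the article does not spell out the exact statistical test the PNAS authors used, and the journal-wide baseline retraction rate below is a made-up placeholder rather than a reported figure.

```python
# Illustrative only: flag editors whose retraction counts are wildly unlikely
# under a journal-wide baseline rate. The baseline value is a placeholder.
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more retractions by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

BASELINE_RETRACTION_RATE = 0.001  # hypothetical journal-wide rate, for illustration

def flag_editor(handled: int, retracted: int,
                baseline: float = BASELINE_RETRACTION_RATE,
                alpha: float = 1e-6) -> bool:
    """Flag an editor whose retraction tally far exceeds what chance would allow."""
    return binom_tail(retracted, handled, baseline) < alpha

# The article's most extreme example: one editor handled 79 papers, 49 later retracted.
print(flag_editor(79, 49))  # True
```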

The team also spotted that these editors worked on certain authors’ papers at a suspiciously high rate. These authors were often editors at PLOS ONE themselves, and they often handled each other’s papers. It’s possible that some editors are being paid bribes, Richardson says, but “also possible that these are informal arrangements that are being made among colleagues.” The researchers detected similarly questionable editor behavior in 10 journals published by Hindawi, an open-access publisher that was shuttered because of rampant paper mill activity after Wiley acquired it. A spokesperson for Wiley told Science the publisher has made “significant investments to address research integrity issues.” (...)

Richardson and his colleagues found that the problem goes far beyond networks of unscrupulous editors and authors scratching each other’s backs. They identified what appear to be coordinated efforts to arrange the publication of batches of dubious papers in multiple journals.

The team looked at more than 2000 papers flagged on PubPeer for containing duplicated images and identified clusters of papers that all shared images. Those sets of papers were often published around the same time and in a limited selection of journals. Looking at patterns of duplicated images is an “absolutely innovative” method for investigating these networks, Abalkina says. “No one has done this before.”

In some cases, the authors suggest, a single paper mill that infiltrated multiple journals may be responsible. But they also believe some of these clusters reflect the work of “brokers” who act as go-betweens, taking papers produced by mills and placing them at compromised journals.

The team dug into the workings of the Academic Research and Development Association (ARDA), based in Chennai, India, which offers services including “thesis/article writing” as well as “journal publication” in a list of dozens of journals. On a web page listing “high impact journals” on offer, ARDA says it liaises with journals on behalf of researchers and “[ensures] they get published successfully in the High Impact Indexing Database journal of their choice.”

Over several years, ARDA’s list of journals has evolved, the team found, with new publications added to the list and others removed after being delisted by bibliometric databases because of fishy behavior. The journals often publish transparently “problematic” articles, Richardson says, and ARDA charges between $250 and $500 for publication, based on quotes offered to Richardson and his colleagues. The website asks authors to submit their own papers, suggesting ARDA itself is not a paper mill, but rather a go-between, Richardson says.

ARDA did not respond to a request for comment.

Organizations like these operate in broad daylight, under the guise of providing “editorial services,” says Lokman Meho, an information scientist at the American University of Beirut. Although their operations may be unethical—with stark consequences for science and scientists—they don’t care about trying to hide, he says, because “it is actually not illegal to run such businesses.”

The problems Richardson and his colleagues documented are growing fast. The team built a list of papers identified in 55 databases of likely paper mill products, looking at the number of suspicious papers published each year between 2016 and 2020. (They excluded the past few years of data because it takes time for fraudulent papers to be discovered and retracted.) They found that the number of suspected paper mill products doubled every 1.5 years—10 times faster than the rate of growth of the literature as a whole, although still a small proportion of papers overall. The number of retractions and papers flagged on PubPeer had also risen fast, doubling every 3.3 and 3.6 years, respectively, but not keeping pace with the increase in suspected fraudulent papers. “This means that the percentage of fraudulent science is growing,” Abalkina says. That poses particular risks to fields like medical science, where the fake papers sometimes make their way into systematic reviews and meta-analyses, potentially distorting our understanding of drugs and treatments, she says.
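
As a rough back-of-the-envelope check on those figures (my own arithmetic, not part of the study; the ~15-year doubling time for the literature as a whole is inferred from the “10 times faster” comparison rather than stated in the article), converting doubling times into growth rates makes the comparison concrete:

```python
# Convert the reported doubling times into continuous annual growth rates.
# Doubling times of 1.5 and 3.3 years are from the article; the 15-year figure
# for the literature overall is an inference from the "10 times faster" claim.
import math

def annual_growth_rate(doubling_time_years: float) -> float:
    """Rate r such that exp(r * T_double) = 2, i.e. r = ln(2) / T_double."""
    return math.log(2) / doubling_time_years

r_mill = annual_growth_rate(1.5)         # suspected paper mill products
r_retract = annual_growth_rate(3.3)      # retractions (3.3-3.6 year doubling)
r_literature = annual_growth_rate(15.0)  # inferred for the literature overall

print(f"paper mills: {r_mill:.2f}/yr, retractions: {r_retract:.2f}/yr, "
      f"literature: {r_literature:.2f}/yr")
print(f"mills vs. literature: {r_mill / r_literature:.0f}x faster")  # ~10x

# Since r_mill > r_retract > r_literature, the share of published papers that is
# fraudulent but not yet retracted keeps rising -- the article's central worry.
```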

One contributor is the rapid growth of science, says Wolfgang Kaltenbrunner, a science studies scholar at Leiden University. Paper mill products are often buried in low-impact journals and are written to get little attention, he says. In small scientific communities, it is harder to hide products like these, but as some fields get larger and more anonymous, such papers can escape detection more easily. And as the scientific workforce has burgeoned, institutions have increasingly turned to evaluating scientists based on how many publications they produce, leading some researchers to bolster their records with fake papers, he says. “Perverse incentives, inflated metrics, the ‘publish or perish’ culture, and systemic tolerance for weak scholarship” all allow paper mills to flourish, says Li Tang, an expert on Chinese research policy at Fudan University.

Young researchers may feel forced into paying for paper mill publications to compete with peers—a ratcheting effect that is already apparent, Richardson says. The number of papers published by medical residency applicants has soared in recent years, for instance, with some students claiming authorship of dozens of papers. He says it’s no coincidence that the paper mill industry targets residency applicants, especially foreign students on visas.

Docampo, Abalkina, and others say there’s little in the new paper that wasn’t already strongly suspected. But the dramatic confirmation that the study offers may shift the needle, they say. “We’re massively behind the curve on making visible and realizing the extent of the problem,” Kaltenbrunner says. “The sheer scale of it is the takeaway message here.”

by Cathleen O’Grady, Science | Read more:
Image: Davide Bonazzi/Salzmanart

Thursday, July 24, 2025

Of Mice, Mechanisms, and Dementia

The scientific paper is a “fraud” that creates “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.”
This critique comes not from a conspiracist on the margins of science, but from Nobel laureate Sir Peter Medawar. A brilliant experimentalist whose work on immune tolerance laid the foundation for modern organ transplantation, Sir Peter understood both the power and the limitations of scientific communication.

Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and most of the time, confirmed their hypothesis.

Real lab work rarely follows such a clear path. Biological research is filled with what Medawar describes lovingly as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did.

The polish serves a purpose. It makes complex work accessible (at least if you work in the same or a similar field!). It allows researchers to build upon new findings.

But the contrived omissions can also play upon even the most well-regarded scientist’s susceptibility to the seduction of story. As Christophe Bernard, Director of Research at the Institute of Systems Neuroscience (Marseilles, Fr.) recently explained,
“when we are reading a paper, we tend to follow the reasoning and logic of the authors, and if the argumentation is nicely laid out, it is difficult to pause, take a step back, and try to get an overall picture.”
Our minds travel the narrative path laid out for us, making it harder to spot potential flaws in logic or alternative interpretations of the data, and making conclusions feel far more definitive than they often are.

Medawar’s framing is my compass when I do deep dives into major discoveries in translational neuroscience. I approach papers with a dual vision. First, what is actually presented? But second, and often more importantly, what is not shown? How was the work likely done in reality? What alternatives were tried but not reported? What assumptions guided the experimental design? What other interpretations might fit the data if the results are not as convincing or cohesive as argued?

And what are the consequences for scientific progress?

In the case of Alzheimer’s research, they appear to be stark: thirty years of prioritizing an incomplete model of the disease’s causes; billions of corporate, government, and foundation dollars spent pursuing a narrow path to drug development; the relative exclusion of alternative hypotheses from funding opportunities and attention; and little progress toward disease-modifying treatments or a cure.

The incomplete Alzheimer’s model I’m referring to is the amyloid cascade hypothesis, which proposes that Alzheimer’s is the outcome of protein processing gone awry in the brain, leading to the production of plaques that trigger a cascade of other pathological changes, ultimately causing the cognitive decline we recognize as the disease. Amyloid work continues to dominate the research and drug development landscape, giving the hypothesis the aura of settled fact.

However, cracks are showing in this façade. In 2021, the FDA granted accelerated approval to aducanumab (Aduhelm), an anti-amyloid drug developed by Biogen, despite scant evidence that it meaningfully altered the course of cognitive decline. The decision to approve, made over near-unanimous opposition from the agency’s advisory panel, exposed growing tensions between regulatory optimism and scientific rigor. Medicare’s subsequent decision to restrict coverage to clinical trials, and Biogen’s quiet withdrawal of the drug from broader marketing efforts in 2024, made the disconnect impossible to ignore.

Meanwhile, a deeper fissure emerged: an investigation by Science unearthed evidence of data fabrication surrounding research on Aβ*56, a purported toxic amyloid-beta oligomer once hailed as a breakthrough target for disease-modifying therapy. Research results that had been seen as a promising pivot in the evolution of the amyloid cascade hypothesis, a new hope for rescuing the theory after repeated clinical failures, now appear to have been largely a sham. Treating Alzheimer’s by targeting amyloid plaques may have been a null path from the start.

When the cracks run that deep, it’s worth going back to the origin story—a landmark 1995 paper by Games et al., featured on the cover of Nature under the headline “A mouse model for Alzheimer’s.” It announced what was hailed as a breakthrough: the first genetically engineered mouse designed to mimic key features of the disease.

In what follows, I argue that the seeds of today’s failures were visible from the beginning if one looks carefully. I approach this review not as an Alzheimer’s researcher with a rival theory, but as a molecular neuroscientist interested in how fields sometimes converge around alluring but unstable ideas. Foundational papers deserve special scrutiny because they become the bedrock for decades of research. When that bedrock slips beneath us, it tells a cautionary story: about the power of narrative, the comfort of consensus, and the dangers of devotion without durable evidence. It also reminds us that while science is ultimately self-correcting, correction can be glacial when careers and reputations are staked on fragile ground.

The Rise of the Amyloid Hypothesis

In the early 1990s, a new idea began to dominate Alzheimer’s research: the amyloid cascade hypothesis.

First proposed by Hardy and Higgins in a 1992 Science perspective, the hypothesis suggested a clear sequence of disease-precipitating events: protein processing goes awry in the brain → beta-amyloid (Aβ) accumulates → plaques form → plaques trigger a cascade of downstream events, including neurofibrillary tangles, inflammation, synaptic loss, neuronal death, resulting in observable cognitive decline.

The hypothesis was compelling for several reasons. First, the discovery of the enzymatic steps by which amyloid precursor protein (APP) is processed into Aβ offered multiple potential intervention points—ideal for pharmaceutical drug development.

Second, the hypothesis was backed by powerful genetic evidence. Mutations in the APP gene on chromosome 21 were associated with early-onset Alzheimer’s. The case grew stronger with the observation that more than 50% of individuals with Down syndrome, who carry an extra copy of chromosome 21 (and thus extra APP), develop Alzheimer’s-like pathology by age 40.

Thus, like any robust causal theory, the amyloid cascade hypothesis offered explicit, testable predictions. As Hardy and Higgins outlined, if amyloid truly initiates the Alzheimer’s cascade, then genetically engineering mice to produce human amyloid should trigger the full sequence of events: plaques first, then tangles, synapse loss, and neuronal death, then cognitive decline. And the sequentiality matters: amyloid accumulation should precede other pathological features. At the time, this was a thrilling possibility.

Pharmaceutical companies were especially eager: if the hypothesis proved correct, stopping amyloid should stop the disease. The field awaited the first transgenic mouse studies with enormous anticipation.

How—with Unlimited Time and Money and a Little Scientific Despair—to Make a Transgenic Mouse

“Mouse Model Made” was the boastful headline to the independent, introductory commentary Nature solicited to accompany the 1995 Games paper’s unveiling of the first transgenic mouse set to “answer the needs” of Alzheimer’s research. The scientific argument over whether amyloid caused Alzheimer’s had been “settle[d]” by the Games paper, “perhaps for good.”

In some ways, the commentary’s bravado seemed warranted. Why? Because in the mid-’90s, creating a transgenic mouse was a multi-stage, treacherous gauntlet of molecular biology. Every step carried an uncomfortably high chance of failure. If this mouse, developed by Athena Neurosciences (a small Bay Area pharmaceutical company) was valid, it was an extraordinary technical achievement portending a revolution in Alzheimer’s care.

First Rule of Making a Transgenic Mouse: Don’t Talk About How You Made a Transgenic Mouse

How did Athena pull it off? Hard to say! What's most remarkable about the Games paper is what's not there. Scan through the methods section and you'll find virtually none of the painstaking effort required to build the Alzheimer’s mouse. Back in the ‘90s, creating a transgenic mouse took years of work, countless failed attempts, and extraordinary technical skill. In the Games paper, this effort is compressed into a few sparse sentences describing which gene and promoter (nearby gene instruction code) the research team used to make the mouse. The actual details are relegated to scientific meta-narrative—knowledge that exists only in lab notebooks, daily conversations between scientists, and the muscle memory of researchers who perform these techniques thousands of times.

The thin description wasn’t atypical for a publication from this era. Difficult experimental methods were often encapsulated in the single phrase "steps were carried out according to standard procedures," with citations to entire books on sub-cloning techniques or reference to the venerable Manipulating the Mouse Embryo: A Laboratory Manual. (We all have this on our bookshelf, yes?) The idea that there were reliable "standard procedures" that could ensure success was farcical—an understatement that other scientists understand as code for "we spent years getting this to work; good luck figuring it out ;)."

So, as an appreciation of what it takes to make progress on the frontiers of science, here is approximately what’s involved.

Prerequisites: Dexterity, Glassblowing, and Zen Mastery

Do you have what it takes to master transgenic mouse creation? Well, do you have the dexterity of a neurosurgeon? Because you’ll be micro-manipulating fragile embryos with the care of someone defusing a bomb—except the bomb is smaller than a grain of sand, and you need to keep it alive. Have you trained in glass-blowing? Hope so, because you’ll need to handcraft your own micropipettes so you can balance an embryo on the pipette tip. Yes, really.

And most importantly, do you sincerely believe that outcomes are irrelevant, and only the endless, repetitive journey matters? If so, congratulations! You may already be a Zen master, which will come in handy when you’re objectively failing your boss’s expectations every single day for what feels like an eternity. Success, when it finally comes, will be indistinguishable from sheer, dumb luck, but the stochastic randomness won’t stop you from searching frantically through your copious notes to see if you can pinpoint the variable that made it finally work!

Let’s go a little deeper so we can understand why the Games team's achievement was considered so monumental—and why almost everyone viewed the results in the best possible light.

by Anonymous, Astral Codex Ten |  Read more:
Image: via