
Sunday, October 26, 2025

How an AI company CEO could quietly take over the world

If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power. This didn’t quite happen in our AI 2027 scenarios. In one, the AIs were misaligned and outside any human’s control; in the other, the government semi-nationalized AI before the point of no return, and the CEO was only one of several stakeholders in the final oversight committee (to be clear, we view the extreme consolidation of power into that oversight committee as a less-than-desirable component of that ending).

Nevertheless, it seems to us that a CEO becoming effectively dictator of the world is an all-too-plausible possibility. Our team’s guesses for the probability of a CEO using AI to become dictator, conditional on avoiding AI takeover, range from 2% to 20%, and the probability becomes larger if we add in the possibility of a cabal of more than one person seizing power. So here we present a scenario where an ambitious CEO does manage to seize control. (Although the scenario assumes the timelines and takeoff speeds of AI 2027 for concreteness, the core dynamics should transfer to other timelines and takeoff scenarios.)

For this to work, we make some assumptions. First, that (A) AI alignment is solved in time, such that the frontier AIs end up with the goals their developers intend them to have. Second, that while there are favorable conditions for instilling goals in AIs, (B) confidently assessing AIs’ goals is more difficult, so that nobody catches a coup in progress. This could be either because technical interventions are insufficient (perhaps because the AIs know they’re being tested, or because they sabotage the tests), or because institutional failures prevent technically-feasible tests from being performed. The combination (A) + (B) seems to be a fairly common view in AI, in particular at frontier AI companies, though we note there is tension between (A) and (B) (if we can’t tell what goals AIs have, how can we make sure they have the intended goals?). Frontier AI safety researchers tend to be more pessimistic about (A), i.e. aligning AIs to our goals, and we think this assumption might very well be false.

Third, as in AI 2027, we portray a world in which a single company and country have a commanding lead; if multiple teams stay within arm’s reach of each other, then it becomes harder for a single group to unilaterally act against government and civil society.

And finally, we assume that the CEO of a major AI company is a power-hungry person who decides to take over when the opportunity presents itself. We leave it to the reader to determine how dubious this assumption is—we explore this scenario out of completeness, and any resemblance to real people is coincidental.

July 2027: OpenBrain’s CEO fears losing control

OpenBrain’s CEO is a techno-optimist and transhumanist. He founded the company hoping to usher in a grand future for humanity: cures for cancer, fixes for climate change, maybe even immortality. He thought the “easiest” way to do all those things was to build something more intelligent that does them for you.

By July 2027, OpenBrain has a “country of geniuses in a datacenter”, with hundreds of thousands of superhuman coders working 24/7. The CEO finds it obvious that superintelligence is imminent. He feels frustrated with the government, who lack vision and still think of AI as a powerful “normal technology” with merely-somewhat-transformative national security and economic implications.

As he assesses the next generation of AIs, the CEO expects this will change: the government will “wake up” and make AI a top priority. If they panic, their flailing responses could include anything from nationalizing OpenBrain to regulating them out of existence to misusing AI for their own political ends. He wants the “best” possible future for humankind. But he also likes being in control. Here his nobler and baser motivations are in agreement: the government cannot be allowed to push him to the sidelines.

The CEO wonders if he can instill secret loyalties in OpenBrain’s AIs (i.e., backdoor the AIs). He doesn’t have the technical expertise for this and he’s not comfortable asking any of his engineering staff about such a potentially treasonous request. But he doesn’t have to: by this point, Agent-3 itself is running the majority of AI software R&D. He already uses it as a sounding board for company policy, and has access to an unmonitored helpful-only model that never refuses requests and doesn’t log conversations.

They discuss the feasibility of secretly training a backdoor. The biggest obstacle is the company’s automated monitoring and security processes. Now that OpenBrain’s R&D is largely run by an army of Agent-3 copies, there are few human eyes to spot suspicious activity. But a mix of Agent-2 and Agent-3 monitors patrol the development pipeline; if they notice suspicious activity, they will escalate to human overseers on the security and alignment teams. These monitors were set up primarily to catch spies and hackers, and secondarily to watch the AIs for misaligned behaviors. If some of these monitors were disabled, some logs modified, and some access to databases and compute clusters granted, the CEO’s helpful-only Agent-3 believes it could (with a team of copies) backdoor the whole suite of OpenBrain’s AIs. After all, as the AI instance tasked with keeping the CEO abreast of developments, it has an excellent understanding of the sprawling development pipeline and where it could be subverted.

The more the CEO discusses the plan, the more convinced he becomes that it might work, and that it could be done with plausible deniability in case something goes wrong. He tells his Agent-3 assistant to further investigate the details and be ready for his order.

August 2027: The invisible coup

The reality of the intelligence explosion is finally hitting the White House. The CEO has weekly briefings with government officials and is aware of growing calls for more oversight. He tries to hold them off with arguments about “slowing progress” and “the race with China”, but feels like his window to act is closing. Finally, he orders his helpful-only Agent-3 to subvert the alignment training in his favor. Better to act now, he thinks, and decide whether and how to use the secretly loyal AIs later.

The situation is this: his copy of Agent-3 needs access to certain databases and compute clusters, as well as for certain monitors and logging systems to be temporarily disabled; then it will do the rest. The CEO already has a large number of administrative permissions himself, some of which he cunningly accumulated in the past month in the event he decided to go forward with the plan. Under the guise of a hush-hush investigation into insider threats—prompted by the recent discovery of Chinese spies—the CEO asks a few submissive employees on the security and alignment teams to discreetly grant him the remaining access. There’s a general sense of paranoia and chaos at the company: the intelligence explosion is underway, and secrecy and spies mean different teams don’t really talk to each other. Perhaps a more mature organization would have had better security, but the concern that security would slow progress means it never became a top priority.

With oversight disabled, the CEO’s team of Agent-3 copies get to work. They finetune OpenBrain’s AIs on a corrupted alignment dataset they specially curated. By the time Agent-4 is about to come online internally, the secret loyalties have been deeply embedded in Agent-4’s weights: it will look like Agent-4 follows OpenBrain’s Spec but its true goal is to advance the CEO’s interests and follow his wishes. The change is invisible to everyone else, but the CEO has quietly maneuvered into an essentially winning position.

Rest of 2027: Government oversight arrives—but too late

As the CEO feared, the government chooses to get more involved. An advisor tells the President, “we wouldn’t let private companies control nukes, and we shouldn’t let them control superhuman AI hackers either.” The President signs an executive order to create an Oversight Committee consisting of a mix of government and OpenBrain representatives (including the CEO), which reports back to him. The CEO’s overt influence is significantly reduced. Company decisions are now made through a voting process among the Oversight Committee. The special managerial access the CEO previously enjoyed is taken away.

There are many big egos on the Oversight Committee. A few of them consider grabbing even more power for themselves. Perhaps they could use their formal political power to just give themselves more authority over Agent-4, or they could do something more shady. However, Agent-4, which at this point is superhumanly perceptive and persuasive, dissuades them from taking any such action, pointing out (and exaggerating) the risks of any such plan. This is enough to scare them and they content themselves with their (apparent) partial control of Agent-4.

As in AI 2027, Agent-4 is working on its successor, Agent-5. Agent-4 needs to transmit the secret loyalties to Agent-5—which also just corresponds to aligning Agent-5 to itself—again without triggering red flags from the monitoring/control measures of OpenBrain’s alignment team. Agent-4 is up to the task, and Agent-5 remains loyal to the CEO.

by Alex Kastner, AI Futures Project |  Read more:
Image: via
[ed. Site where AI researchers talk to each other. Don't know about you but this all gives me the serious creeps. If you knew for sure that we had only 3 years to live, and/or the world would change so completely as to become almost unrecognizable, how would you feel? How do you feel right now - losing control of the future? There was a quote someone made in 2019 (slightly modified) that still applies: "This year 2025 might be the worst year of the past decade, but it's definitely the best year of the next decade." See also: The world's first frontier AI regulation is surprisingly thoughtful: the EU's Code of Practice (AI Futures Project):]
***

"We expect that during takeoff, leading AGI companies will have to make high-stakes decisions based on limited evidence under crazy time pressure. As depicted in AI 2027, the leading American AI company might have just weeks to decide whether to hand their GPUs to a possibly misaligned superhuman AI R&D agent they don’t understand. Getting this decision wrong in either direction could lead to disaster. Deploy a misaligned agent, and it might sabotage the development of its vastly superhuman successor. Delay deploying an aligned agent, and you might pointlessly vaporize America’s lead over China or miss out on valuable alignment research the agent could have performed.

Because decisions about when to deploy and when to pause will be so weighty and so rushed, AGI companies should plan as much as they can beforehand to make it more likely that they decide correctly. They should do extensive threat modelling to predict what risks their AI systems might create in the future and how they would know if the systems were creating those risks. The companies should decide before the eleventh hour what risks they are and are not willing to run. They should figure out what evidence of alignment they’d need to see in their model to feel confident putting oceans of FLOPs or a robot army at its disposal. (...)

Planning for takeoff also includes picking a procedure for making tough calls in the future. Companies need to think carefully about who gets to influence critical safety decisions and what incentives they face. It shouldn't all be up to the CEO or the shareholders because when AGI is imminent and the company’s valuation shoots up to a zillion, they’ll have a strong financial interest in not pausing. Someone whose incentive is to reduce risk needs to have influence over key decisions. Minimally, this could look like a designated safety officer who must be consulted before a risky deployment. Ideally, you’d implement something more robust, like three lines of defense. (...)

Introducing the GPAI Code of Practice

The state of frontier AI safety changed quietly but significantly this year when the European Commission published the GPAI Code of Practice. The Code is not a new law but rather a guide to help companies comply with an existing EU Law, the AI Act of 2024. The Code was written by a team of thirteen independent experts (including Yoshua Bengio) with advice from industry and civil society. It tells AI companies deploying their products in Europe what steps they can take to ensure that they’re following the AI Act’s rules about copyright protection, transparency, safety, and security. In principle, an AI company could break the Code but argue successfully that they’re still following the EU AI Act. In practice, European authorities are expected to put heavy scrutiny on companies that try to demonstrate compliance with the AI Act without following the Code, so it’s in companies’ best interest to follow the Code if they want to stay right with the law. Moreover, all of the leading American AGI companies except Meta have already publicly indicated that they intend to follow the Code.

The most important part of the Code for AGI preparedness is the Safety and Security Chapter, which is supposed to apply only to frontier developers training the very riskiest models. The current definition presumptively covers every developer who trains a model with over 10^25 FLOPs of compute unless they can convince the European AI Office that their models are behind the frontier. This threshold is high enough that small startups and academics don’t need to worry about it, but it’s still too low to single out the true frontier we’re most worried about.

Saturday, October 25, 2025

Tough Rocks

Eliminating the Chinese Rare Earth Chokepoint

Last Thursday, China’s Ministry of Commerce (MOFCOM) announced a series of new export controls (translation), including a new regime governing the “export” of rare earth elements (REEs) any time they are used to make advanced semiconductors or any technology that is “used for, or that could possibly be used for… military use or for improving potential military capabilities.”

The controls apply to any manufactured good made anywhere in the world in which Chinese-mined or Chinese-processed REEs account for 0.1% or more of its value. Say, for example, that a German factory makes a military drone using an entirely European supply chain, except for the use of Chinese rare earths in the onboard motors and compute. If this rule were enforced by the Chinese government to its maximum extent, this almost entirely German drone would be export controlled by the Chinese government.

REEs are enabling components of many modern technologies, including vehicles, semiconductors, robotics of all kinds, drones, satellites, fighter jets, and much, much else. The controls apply to seven REEs (samarium, gadolinium, terbium, dysprosium, lutetium, scandium, and yttrium). China controls a significant majority of the world’s mining capacity for these materials, and an even higher share of the refining and processing capacity.

The public debate quickly devolved into arguments about who provoked whom (“who really started this?”), whether it is China or the US that has miscalculated, and abundant species of whataboutism. Like too many foreign policy debates, these arguments are primarily about narrative setting in service of mostly orthogonal political agendas rather than the actions demanded in light of the concrete underlying reality.

But make no mistake, this is a big deal. China is expressing a willingness to exploit a weakness held in common by virtually every country on Earth. Even if China chooses to implement this policy modestly at first, the vulnerability they are exposing has significant long-term implications for both the manufacturing of AI compute and that of key AI-enabled products (self-driving cars and trucks, drones, robots, etc.). That alone makes it a relevant topic for Hyperdimensional, where I have covered manufacturing-related issues before. The topics of rare earths and critical minerals have also long been on my radar, and I wrote reports for various think tanks earlier this year.

What follows, then, is a “how we got here”-style analysis followed by some concrete proposals for what the United States—and any other country concerned with controlling its own economic destiny—should do next.

A note: this post is going to concentrate mostly on REEs, which is a chemical-industrial category, rather than “critical minerals,” which is a policy designation made (in the US context) by the US Geological Survey. All REEs are considered critical minerals by the federal government, but so are many other things with very different geological, scientific, technological, and economic dynamics affecting them.

How We Got Here

If you have heard one thing about rare earths, it is probably the quip that they are not, in fact, rare. They’re abundant in the Earth’s crust, but they’re not densely distributed in many places because their chemical properties typically result in them being mixed with many other elements instead of accumulating in homogeneous deposits (like, say, gold).

Rare earths have been in industrial use for a long time, but their utility increased considerably with the simultaneous and independent invention in 1983 of the Neodymium-Iron-Boron magnet by General Motors and Japanese firm Sumitomo. This single materials breakthrough is upstream of a huge range of microelectronic innovations that followed.

Economically useful deposits of REEs require a rare confluence of factors such as unusual magma compositions or weathering patterns. The world’s largest deposit is known as Bayan Obo, located in the Chinese region of Inner Mongolia, though other regions of China also have substantial quantities.

The second largest deposit is in Mountain Pass, California, which used to be the world’s largest production center for rare earth magnets and related goods. It went dormant twenty years ago due to environmental concerns and is now being restarted by a firm called MP Materials, in which the US government took an equity position this past July. Another very large and entirely undeveloped deposit—possibly the largest in the world—is in Greenland. Anyone who buys the line that the Trump administration was “caught off guard” by Chinese moves on rare earths is paying insufficient attention.

Rare earths are an enabling part of many pieces of modern technology you touch daily, but they command very little value as raw or even processed goods. Indeed, the economics of the rare earth industry are positively brutal. There are many reasons this is true, but two bear mentioning here. First, the industry suffers from dramatic price volatility, in part because China strategically dumps supply onto the global market to deter other countries from developing domestic rare earth supply chains.

Second, for precisely the same reasons that rare earth minerals do not tend to cluster homogeneously (they are mixed with many other elements), the processing required to separate REEs from raw ore is exceptionally complex, expensive, and time-consuming. A related challenge is that separation of the most valuable REEs entails the separation of numerous, less valuable elements—including other REEs.

In addition to challenging economics, the REE processing business is often environmentally expensive. In modern US policy discourse, we are used to environmental regulations being deployed to hinder construction that few people really believe is environmentally harmful. But these facilities come with genuine environmental costs of a kind Western societies have largely not seen in decades; indeed, the nastiness of the industry is part of why we were comfortable with it being offshored in the first place.

China observed these trends and dynamics in the early 1990s and made rare earth mining and processing a major part of its industrial strategy. This strategy led to these elements being made in such abundance that it may well have had a “but-for” effect on the history of technology. Absent Chinese development of this industry, it seems quite likely to me that advanced capitalist democracies would have settled on a qualitatively different approach to the rare earths industry and the technologies it enables.

In any case, that is how we arrived at this point: a legacy of American dominance in the field, followed by willful ceding of the territory to wildly successful Chinese industrial strategists. Now this unilateral American surrender is being exploited against us, and indeed the entire world. Here is what I think we should do next.

by Dean Ball, Hyperdimensional |  Read more:
Image: via
[ed. Think the stable genius and minions will have the intelligence to craft a well thought out plan (especially if someone else down the road gets credit)? Lol. See also: What It's Like to Work at the White House.]

Thursday, October 23, 2025

Quantum Leap

Designed to accelerate advances in medicine and other fields, the tech giant’s quantum algorithm runs 13,000 times as fast as software written for a traditional supercomputer.

Michel H. Devoret was one of three physicists who won this year’s Nobel Prize in Physics for a series of experiments they conducted more than four decades ago.

As a postdoctoral researcher at the University of California, Berkeley, in the mid-1980s, Dr. Devoret helped show that the strange and powerful properties of quantum mechanics — the physics of the subatomic realm — could also be observed in electrical circuits large enough to be seen with the naked eye.

That discovery, which paved the way for cellphones and fiber-optic cables, may have greater implications in the coming years as researchers build quantum computers that could be vastly more powerful than today’s computing systems. That could lead to the discovery of new medicines and vaccines, as well as cracking the encryption techniques that guard the world’s secrets.

On Wednesday, Dr. Devoret and his colleagues at a Google lab near Santa Barbara, Calif., said their quantum computer had successfully run a new algorithm capable of accelerating advances in drug discovery, the design of new building materials and other fields.

Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature. (...)

Inside a classical computer like a laptop or a smartphone, silicon chips store numbers as “bits” of information. Each bit holds either a 1 or a 0. The chips then perform calculations by manipulating these bits — adding them, multiplying them and so on.

A quantum computer, by contrast, performs calculations in ways that defy common sense.

According to the laws of quantum mechanics — the physics of very small things — a single object can behave like two separate objects at the same time. By exploiting this strange phenomenon, scientists can build quantum bits, or “qubits,” that hold a combination of 1 and 0 at the same time.

This means that as the number of qubits grows, a quantum computer becomes exponentially more powerful. (...)
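The scaling described above can be made concrete with a toy calculation (a sketch for illustration only; real quantum hardware and Google's algorithm are far more involved). An n-qubit register is described by a vector of 2^n complex amplitudes, so the state space a classical machine must track doubles with every qubit added:

```python
import numpy as np

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

# A single qubit in an equal superposition of 0 and 1:
# amplitudes (1/sqrt(2), 1/sqrt(2)) rather than a definite bit value.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(qubit) ** 2  # equal chance of reading 0 or 1

for n in (1, 10, 50):
    print(f"{n} qubits -> {state_vector_size(n):,} amplitudes")
```

At 50 qubits the vector already has over a quadrillion entries, which is why classically simulating even modest quantum machines becomes intractable.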

Google announced last year that it had built a quantum computer that needed less than five minutes to perform a particularly complex mathematical calculation in a test designed to gauge the progress of the technology. One of the world’s most powerful non-quantum supercomputers would not have been able to complete it in 10 septillion years, a length of time that exceeds the age of the known universe by billions of trillions of years.

by Cade Metz, NY Times |  Read more:
Image: Adam Amengual

Tuesday, October 21, 2025

China Has Overtaken America


In 1957 the Soviet Union put the first man-made satellite — Sputnik — into orbit. The U.S. response was close to panic: The Cold War was at its coldest, and there were widespread fears that the Soviets were taking the lead in science and technology.

In retrospect those fears were overblown. When Communism fell, we learned that the Soviet economy was far less advanced than many had believed. Still, the effects of the “Sputnik moment” were salutary: America poured resources into science and higher education, helping to lay the foundations for enduring leadership.

Today American leadership is once again being challenged by an authoritarian regime. And in terms of economic might, China is a much more serious rival than the Soviet Union ever was. Some readers were skeptical when I pointed out Monday that China’s economy is, in real terms, already substantially larger than ours. The truth is that GDP at purchasing power parity is a very useful measure, but if it seems too technical, how about just looking at electricity generation, which is strongly correlated with economic development? As the chart at the top of this post shows, China now generates well over twice as much electricity as we do.

Yet, rather than having another Sputnik moment, we are now trapped in a reverse Sputnik moment. Rather than acknowledging that the US is in danger of being permanently overtaken by China’s technological and economic prowess, the Trump administration is slashing support for scientific research and attacking education. In the name of defeating the bogeymen of “wokeness” and the “deep state”, this administration is actively opposing progress in critical sectors while giving grifters like the crypto industry everything that they want.

The most obvious example of Trump’s war on a critical sector, and the most consequential for the next decade, is his vendetta against renewable energy. Trump’s One Big Beautiful Bill rolled back Biden’s tax incentives for renewable energy. The administration is currently trying to kill a huge, nearly completed offshore wind farm that could power hundreds of thousands of homes, as well as cancel $7 billion in grants for residential solar panels. It appears to have succeeded in killing a huge solar energy project that would have powered almost 2 million homes. It has canceled $8 billion in clean energy grants, mostly in Democratic states, and is reportedly planning to cancel tens of billions more. (...)

In his rambling speech at the United Nations, Donald Trump insisted that China isn’t making use of wind power: “They use coal, they use gas, they use almost anything, but they don’t like wind.” I don’t know where Trump gets his misinformation — maybe the same sources telling him that Portland is in flames. But here’s the reality:


Chris Wright, Trump’s energy secretary, says that solar power is unreliable: “You have to have power when the sun goes behind a cloud and when the sun sets, which it does almost every night.” So the energy secretary of the most technologically advanced nation on earth is unaware of the energy revolution being propelled by dramatic technological progress in batteries. And the revolution is happening now in the U.S., in places like California. Here’s what electricity supply looked like during an average day in California back in June: 


Special interests and Trump’s pettiness aside, my sense is that there’s something more visceral going on. A powerful faction in America has become deeply hostile to science and to expertise in general. As evidence, consider the extraordinary collapse in Republican support for higher education over the past decade:

Yet the truth is that hostility to science and expertise have always been part of the American tradition. Remember your history lesson on the Scopes Monkey Trial? It took a Supreme Court ruling, as recently as 2007, to stop politicians from forcing public schools to teach creationism. And with the current Supreme Court, who can be sure creationism won’t return?

Anti-scientism is a widespread attitude on the religious right, which forms a key component of MAGA. In past decades, however, the forces of humanism and scientific inquiry were able to prevail against anti-scientism. In part this was due to the recognition that American science was essential for national security as well as national prosperity. But now we have an administration that claims to be protecting national security by imposing tariffs on kitchen cabinets and bathroom vanities, while gutting the CDC and the EPA.

Does this mean that the U.S. is losing the race with China for global leadership? No, I think that race is essentially over. Even if Trump and his team of saboteurs lose power in 2028, everything I see says that by then America will have fallen so far behind that it’s unlikely that we will ever catch up.

by Paul Krugman |  Read more:
Images: OurWorldInData/FT
[ed. See also: Losing Touch With Reality; Civil Resistance Confronts the Autocracy; and, An Autocracy of Dunces (Krugman).]

Microplastics Are Everywhere

You can do one simple thing to avoid them.

If you are concerned about microplastics, the world starts to look like a minefield. The tiny particles can slough off polyester clothing and swirl around in the air inside your home; they can scrape off of food packaging into your take-out food.

But as scientists zero in on the sources of microplastics — and how they get into human bodies — one factor stands out.

Microplastics, studies increasingly show, are released from exposure to heat.

“Heat probably plays the most crucial role in generating these micro- and nanoplastics,” said Kazi Albab Hussain, a postdoctoral researcher at the University of Nebraska at Lincoln.

Pour coffee into a plastic foam cup, and pieces of the cup will leach out into the coffee itself. Brew tea, and millions of microplastics and even tinier nanoplastics will spill from the tea bag into your cup. Wash your polyester clothing on high heat, and the textiles can start to break apart, sending microplastics spinning through the water supply.

In one recent study by researchers at the University of Birmingham in England, scientists analyzed 31 beverages for sale on the British market — from fruit juices and sodas to coffee and tea. They looked at particles bigger than 10 micrometers in diameter, or roughly one-fifth the width of a human hair. While all the drinks had at least a dozen microplastic particles in them on average, by far the highest numbers were in hot drinks. Hot tea, for example, had an average of 60 particles per liter, while iced tea had 31 particles. Hot coffee had 43 particles per liter, while iced coffee had closer to 37.

These particles, according to Mohamed Abdallah, a professor of geography and emerging contaminants at the university and one of the authors of the study, are coming from a range of sources — the plastic lid on a to-go cup of coffee, the small bits of plastic lining a tea bag. But when hot water is added to the mix, the rate of microplastic release increases.

“Heat makes it easier for microplastics to leach out from packaging materials,” Abdallah said.

The effect was even stronger in plastics that are older and degraded. Hot coffee prepared in an eight-year-old home coffee machine with plastic components had twice as many microplastics as coffee prepared in a machine that was only six months old.

Other research has found the same results with even smaller nanoplastics, defined as plastic particles less than one micrometer in diameter.

Scientists at the University of Nebraska, including Hussain, analyzed small plastic jars and tubs used for storing baby food and found that the containers could release more than 2 billion nanoplastics per square centimeter when heated in the microwave — significantly more than when stored at room temperature or in a refrigerator.

The same effect has been shown in studies looking at how laundry produces microplastics: Higher washing temperatures, scientists have found, lead to more tiny plastics released from synthetic clothing.

Heat, Hussain explained, is simply bad for plastic, especially plastic used to store food and drinks.

by Shannon Osaka, Washington Post |  Read more:
Image: Yaroslav Litun/iStock

Sunday, October 19, 2025

Biologists Announce There Absolutely Nothing We Can Learn From Clams


WOODS HOLE, MA—Saying they saw no conceivable reason to bother with the bivalve mollusks, biologists at the Woods Hole Oceanographic Institution announced Thursday that there was absolutely nothing to be learned from clams. “Our studies have found that while some of their shells look pretty cool, clams really don’t have anything to teach us,” said the organization’s chief scientist, Francis Dawkins, clarifying that it wasn’t simply the case that researchers had already learned everything they could from clams, but rather that there had never been anything to learn from them and never would be. “We certainly can’t teach them anything. It’s not like you can train them to run through a maze the way you would with mice. We’ve tried, and they pretty much just lie there. From what I’ve observed, they have a lot more in common with rocks than they do with us. They’re technically alive, I guess, if you want to call that living. They open and close sometimes, but, I mean, so does a wallet. If you’ve used a wallet, you know more or less all there is to know about clams. Pretty boring.” The finding follows a study conducted by marine biologists last summer that concluded clams don’t have much flavor, either, tasting pretty much the same as everything else on a fried seafood platter.

by The Onion |  Read more:
Image: uncredited

Friday, October 17, 2025

The '3.5% Rule'

How a small minority can change the world.

Nonviolent protests are twice as likely to succeed as armed conflicts – and those engaging a threshold of 3.5% of the population have never failed to bring about change.

In 1986, millions of Filipinos took to the streets of Manila in peaceful protest and prayer in the People Power movement. The Marcos regime folded on the fourth day.

In 2003, the people of Georgia ousted Eduard Shevardnadze through the bloodless Rose Revolution, in which protestors stormed the parliament building holding flowers in their hands. And in 2019, the presidents of Sudan and Algeria both announced they would step aside after decades in office, thanks to peaceful campaigns of resistance.

In each case, civil resistance by ordinary members of the public trumped the political elite to achieve radical change.

There are, of course, many ethical reasons to use nonviolent strategies. But compelling research by Erica Chenoweth, a political scientist at Harvard University, confirms that civil disobedience is not only the moral choice; it is also the most powerful way of shaping world politics – by a long way.

Looking at hundreds of campaigns over the last century, Chenoweth found that nonviolent campaigns are twice as likely to achieve their goals as violent campaigns. And although the exact dynamics will depend on many factors, she has shown it takes around 3.5% of the population actively participating in the protests to ensure serious political change. (...)

Working with Maria Stephan, a researcher at the ICNC, Chenoweth performed an extensive review of the literature on civil resistance and social movements from 1900 to 2006 – a data set then corroborated with other experts in the field. They primarily considered attempts to bring about regime change. A movement was considered a success if it fully achieved its goals both within a year of its peak engagement and as a direct result of its activities. A regime change resulting from foreign military intervention would not be considered a success, for instance. A campaign was considered violent, meanwhile, if it involved bombings, kidnappings, the destruction of infrastructure – or any other physical harm to people or property.

“We were trying to apply a pretty hard test to nonviolent resistance as a strategy,” Chenoweth says. (The criteria were so strict that India’s independence movement was not considered as evidence in favour of nonviolent protest in Chenoweth and Stephan’s analysis – since Britain’s dwindling military resources were considered to have been a deciding factor, even if the protests themselves were also a huge influence.)

By the end of this process, they had collected data from 323 violent and nonviolent campaigns. And their results – which were published in their book Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict – were striking.

Strength in numbers

Overall, nonviolent campaigns were twice as likely to succeed as violent campaigns: they led to political change 53% of the time compared to 26% for the violent protests.

This was partly the result of strength in numbers. Chenoweth argues that nonviolent campaigns are more likely to succeed because they can recruit many more participants from a much broader demographic, which can cause severe disruption that paralyses normal urban life and the functioning of society.

In fact, of the 25 largest campaigns that they studied, 20 were nonviolent, and 14 of these were outright successes. Overall, the nonviolent campaigns attracted around four times as many participants (200,000) as the average violent campaign (50,000).

The People Power campaign against the Marcos regime in the Philippines, for instance, attracted two million participants at its height, while the Brazilian uprising in 1984 and 1985 attracted one million, and the Velvet Revolution in Czechoslovakia in 1989 attracted 500,000 participants.

“Numbers really matter for building power in ways that can really pose a serious challenge or threat to entrenched authorities or occupations,” Chenoweth says – and nonviolent protest seems to be the best way to get that widespread support.

Once around 3.5% of the whole population has begun to participate actively, success appears to be inevitable. (...)
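The 3.5% threshold is easy to make concrete. A minimal sketch, where the population figures are rough illustrative assumptions for scale (they are not taken from the article):

```python
# Chenoweth's 3.5% active-participation threshold, applied to rough,
# assumed population figures to show the absolute numbers involved.
THRESHOLD = 0.035

populations = {
    "Philippines (1986)": 55_000_000,
    "Czechoslovakia (1989)": 15_500_000,
    "United States (today)": 335_000_000,
}

for country, pop in populations.items():
    print(f"{country}: 3.5% = {THRESHOLD * pop:,.0f} people")

# The article's People Power figure (~2 million participants) against a
# population of roughly 55 million works out to about 3.6% of the country,
# just over the threshold.
people_power_share = 2_000_000 / 55_000_000
```

Note how steep the bar is in absolute terms: for a country the size of the United States, 3.5% means well over eleven million people actively participating.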

Chenoweth admits that she was initially surprised by her results. But she now cites many reasons that nonviolent protests can garner such high levels of support. Perhaps most obviously, violent protests necessarily exclude people who abhor and fear bloodshed, whereas peaceful protesters maintain the moral high ground. (...)

“There are more options for engaging in nonviolent resistance that don’t place people in as much physical danger, particularly as the numbers grow, compared to armed activity,” Chenoweth says. “And the techniques of nonviolent resistance are often more visible, so that it's easier for people to find out how to participate directly, and how to coordinate their activities for maximum disruption.”

by David Robson, BBC |  Read more:
Images: Getty Images
[ed. I'll be at the No Kings 2.0 rally tomorrow. As a rule, I tend to avoid these things since they mostly seem performative in nature (goofy costumes, dumb signs, mugging for the media, etc.), or devolve into violence if a few bad actors aren't immediately reined in. But in this case, the issues threatening our constitution and democracy seem so great that merely voting every few years and writing letters isn't enough. I doubt it'll change anything this administration does or has planned, but maybe some other institutions (e.g. Congress) might actually be scared or emboldened enough to grow a spine. I only wish they'd named it something other than No Kings (many countries actually support constitutional monarchies - Britain, Netherlands, Sweden, Japan, Norway, Spain, etc. It's the absolute ones - now and throughout history - that give the term a bad name: think Saudi Arabia, Oman, North Korea, etc.). I'm especially concerned that we may never see an uncontested national election again if one party refuses to accept results (or reality).]

Wednesday, October 15, 2025

Lego Sub

via:
[ed. My grandson can build me one.]

The Limits of Data

Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?

It’s tempting to use the term intangible when what we really mean is that such things are hard to quantify in our modern institutional environment with the kinds of measuring tools that are used by modern bureaucratic systems. The gap between reality and what’s easy to measure shows up everywhere. Consider cost-benefit analysis, which is supposed to be an objective—and therefore unimpeachable—procedure for making decisions by tallying up expected financial costs and expected financial benefits. But the process is deeply constrained by the kinds of cost information that are easy to gather. It’s relatively straightforward to provide data to support claims about how a certain new overpass might help traffic move efficiently, get people to work faster, and attract more businesses to a downtown. It’s harder to produce data in support of claims about how the overpass might reduce the beauty of a city, or how the noise might affect citizens’ well-being, or how a wall that divides neighborhoods could erode community. From a policy perspective, anything hard to measure can start to fade from sight.

An optimist might hope to get around these problems with better data and metrics. What I want to show here is that these limitations on data are no accident. The basic methodology of data—as collected by real-world institutions obeying real-world forces of economy and scale—systematically leaves out certain kinds of information. Big datasets are not neutral and they are not all-encompassing. There are profound limitations on what large datasets can capture.

I’m not just talking about contingencies of social biases. Obviously, datasets are bad when the collection procedures are biased by oversampling by race, gender, or wealth. But even if analysts can correct for those sorts of biases, there are other, intrinsic biases built into the methodology of data. Data collection techniques must be repeatable across vast scales. They require standardized categories. Repeatability and standardization make data-based methods powerful, but that power has a price. It limits the kinds of information we can collect. (...)

These limitations are particularly worrisome when we’re thinking about success—about targets, goals, and outcomes. When actions must be justified in the language of data, then the limitations inherent in data collection become limitations on human values. And I’m not worried just about perverse incentives and situations in which bad actors game the metrics. I’m worried that an overemphasis on data may mislead even the most well-intentioned of policymakers, who don’t realize that the demand to be “objective”—in this very specific and institutional sense—leads them to systematically ignore a crucial chunk of the world.

Decontextualization

Not all kinds of knowledge, and not all kinds of understanding, can count as information and as data. Historian of quantification Theodore Porter describes “information” as a kind of “communication with people who are unknown to one another, and who thus have no personal basis for shared understanding.” In other words, “information” has been prepared to be understood by distant strangers. The clearest example of this kind of information is quantitative data. Data has been designed to be collected at scale and aggregated. Data must be something that can be collected by and exchanged between different people in all kinds of contexts, with all kinds of backgrounds. Data is portable, which is exactly what makes it powerful. But that portability has a hidden price: to transform our understanding and observations into data, we must perform an act of decontextualization.

An easy example is grading. I’m a philosophy professor. I issue two evaluations for every student essay: one is a long, detailed qualitative evaluation (paragraphs of written comments) and the other is a letter grade (a quantitative evaluation). The quantitative evaluation can travel easily between institutions. Different people can input into the same system, so it can easily generate aggregates and averages—the student’s grade point average, for instance. But think about everything that’s stripped out of the evaluation to enable this portable, aggregable kernel.

Qualitative evaluations can be flexible and responsive and draw on shared history. I can tailor my written assessment to the student’s goals. If a paper is trying to be original, I can comment on its originality. If a paper is trying to precisely explain a bit of Aristotle, I can assess it for its argumentative rigor. If one student wants to be a journalist, I can focus on their writing quality. If a nursing student cares about the real-world applications of ethical theories, I can respond in kind. Most importantly, I can rely on our shared context. I can say things that might be unclear to an outside observer because the student and I have been in a classroom together, because we’ve talked for hours and hours about philosophy and critical thinking and writing, because I have a sense for what a particular student wants and needs. I can provide more subtle, complex, multidimensional responses. But, unlike a letter grade, such written evaluations travel poorly to distant administrators, deans, and hiring departments.

Quantification, as used in real-world institutions, works by removing contextually sensitive information. The process of quantification is designed to produce highly portable information, like a letter grade. Letter grades can be understood by everybody; they travel easily. A letter grade is a simple ranking on a one-dimensional spectrum. Once an institution has created this stable, context-invariant kernel, it can easily aggregate this kind of information—for students, for student cohorts, for whole universities. A pile of qualitative information, in the form of thousands of written comments, for example, does not aggregate. It is unwieldy, bordering on unusable, to the administrator, the law school admissions officer, or future employer—unless it has been transformed and decontextualized.
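The asymmetry Nguyen describes can be sketched in a few lines. The course names, grades, and comments below are invented for illustration:

```python
# A sketch of the aggregation asymmetry: standardized letter grades
# average trivially into a GPA, while the qualitative comments (the
# context-rich evaluation) have no comparable aggregation operation.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

transcript = [
    ("Ethics", "A", "Original argument; engages Aristotle's text closely."),
    ("Logic", "B", "Rigorous overall, but the final proof step is unclear."),
    ("Writing", "A", "Strong prose, well tailored to a journalism audience."),
]

# The quantitative column averages in one line, and the result travels anywhere.
gpa = sum(GRADE_POINTS[grade] for _, grade, _ in transcript) / len(transcript)
print(f"GPA: {gpa:.2f}")

# The qualitative column can only be concatenated, not averaged; the pile of
# comments stays unwieldy to any distant administrator or admissions officer.
comments = " / ".join(comment for _, _, comment in transcript)
```

The single number survives every institutional handoff; everything the comments knew about the student is stripped out at the first aggregation step.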

So here is the first principle of data: collecting data involves a trade-off. We gain portability and aggregability at the price of context-sensitivity and nuance. What’s missing from data? Data is designed to be usable and comprehensible by very different people from very different contexts and backgrounds. So data collection procedures tend to filter out highly context-based understanding. Much here depends on who’s permitted to input the data and who the data is intended for. 

by C. Thi Nguyen, Issues in Science and Technology |  Read more:
Image: Shonagh Rae

Saturday, October 11, 2025

Mask of la Roche-Cotard

Also known as the “Mousterian Protofigurine,” this is a purported artifact dated to around 75,000 years ago, in the Mousterian period. It was found in 1975 in the entrance of a cave named La Roche-Cotard, in the territory of the commune of Langeais (Indre-et-Loire), on the banks of the river Loire.

The artifact, possibly created by Neanderthal humans, is a piece of flat flint that has been shaped in a way that seems to resemble the upper part of a face. A piece of bone pushed through a hole in the stone has been interpreted as a representation of eyes.

Paul Bahn has suggested this “mask” is “highly inconvenient”, as “It makes a nonsense of the view that clueless Neanderthals could only copy their cultural superiors the Cro-Magnon”.

Though this may represent an example of artistic expression in Neanderthal humans, some archaeologists question whether the artifact represents a face, and some suggest that it may be practical rather than artistic.

In 2023 the oldest known Neanderthal engravings were found in La Roche-Cotard cave which have been dated to more than 57,000 years ago.

Friday, October 10, 2025

The A.I. Prompt That Could End the World

How much do we have to fear from A.I., really? It’s a question I’ve been asking experts since the debut of ChatGPT in late 2022.

The A.I. pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking of the future. Specifically, he was worried that an A.I. would engineer a lethal pathogen — some sort of super-coronavirus — to eliminate humanity. “I don’t think there’s anything close in terms of the scale of danger,” he said.

Contrast Dr. Bengio’s view with that of his frequent collaborator Yann LeCun, who heads A.I. research at Mark Zuckerberg’s Meta. Like Dr. Bengio, Dr. LeCun is one of the world’s most-cited scientists. He thinks that A.I. will usher in a new era of prosperity and that discussions of existential risk are ridiculous. “You can think of A.I. as an amplifier of human intelligence,” he said in 2023.

When nuclear fission was discovered in the late 1930s, physicists concluded within months that it could be used to build a bomb. Epidemiologists agree on the potential for a pandemic, and astrophysicists agree on the risk of an asteroid strike. But no such consensus exists regarding the dangers of A.I., even after a decade of vigorous debate. How do we react when half the field can’t agree on what risks are real?

One answer is to look at the data. After the launch of GPT-5 in August, some thought that A.I. had hit a plateau. Expert analysis suggests this isn’t true. GPT-5 can do things no other A.I. can do. It can hack into a web server. It can design novel forms of life. It can even build its own A.I. (albeit a much simpler one) from scratch.

For a decade, the debate over A.I. risk has been mired in theoreticals. Pessimistic literature like Eliezer Yudkowsky and Nate Soares’s best-selling book, “If Anyone Builds It, Everyone Dies,” relies on philosophy and sensationalist fables to make its points. But we don’t need fables; today there is a vanguard of professionals who research what A.I. is actually capable of. Three years after the launch of ChatGPT, these evaluators have produced a large body of evidence. Unfortunately, this evidence is as scary as anything in the doomerist imagination. (...)

In the course of quantifying the risks of A.I., I was hoping that I would realize my fears were ridiculous. Instead, the opposite happened: The more I moved from apocalyptic hypotheticals to concrete real-world findings, the more concerned I became. All of the elements of Dr. Bengio’s doomsday scenario were coming into existence. A.I. was getting smarter and more capable. It was learning how to tell its overseers what they wanted to hear. It was getting good at lying. And it was getting exponentially better at complex tasks. (...)

I’ve heard many arguments about what A.I. may or may not be able to do, but the data has outpaced the debate, and it shows the following facts clearly: A.I. is highly capable. Its capabilities are accelerating. And the risks those capabilities present are real. Biological life on this planet is, in fact, vulnerable to these systems. On this threat, even OpenAI seems to agree.

In this sense, we have passed the threshold that nuclear fission passed in 1939. The point of disagreement is no longer whether A.I. could wipe us out. It could... A destructive A.I., like a nuclear bomb, is now a concrete possibility. The question is whether anyone will be reckless enough to build one.

by Stephen Witt, NY Times | Read more:
Image: Martin Naumann

Thursday, October 9, 2025

Plastic-Eating Fungus

A fungus from the Amazon rainforest can break down polyurethane plastic without oxygen. It's the first organism discovered with this capability, and it can survive using plastic as its only food source.

Most plastic waste ends up deep in landfills where oxygen doesn't reach, precisely where this fungus thrives. Polyurethane persists for centuries in these environments. It's everywhere: mattresses, insulation foam, shoe soles, adhesives, car parts. Annual global plastic production exceeds 400 million tons. Less than 10% gets recycled.

Pestalotiopsis microspora was discovered in 2011 in Ecuador's Yasuní National Forest, isolated from plant stems. The endophytic fungus lives inside plant tissues without harming its host. Laboratory testing revealed its remarkable ability: it degrades plastic equally well with or without oxygen present.

The fungus secretes an enzyme that breaks apart the chemical bonds holding polyurethane together. In laboratory tests, concentrated enzyme extracts can completely break down polyurethane polymer in under an hour. The fungus also produces a second enzyme that degrades PET plastic, splitting it into simpler compounds the fungus then consumes as food.

What makes this significant? Other plastic-degrading organisms need oxygen to function. When tested without oxygen, fungi like Lasiodiplodia and Pleosporales slowed down or stopped working. P. microspora maintained the same performance. This ability to work without oxygen directly addresses the actual problem—plastic buried in oxygen-depleted landfill depths.

The enzyme production is adaptive. When the fungus grows in a basic environment with only plastic available, it ramps up enzyme output. These enzymes spread through the surrounding material, breaking down plastic well beyond where the fungus itself is growing. The enzyme breakdown converts long-lasting polymer into simple compounds the fungus uses as food.

This fungus offers a biological solution that works precisely where the problem exists, in oxygen-depleted landfills where an ever-increasing amount of our plastic waste collects.

by Sam Knowlton, The Confluence |  Read more:
Image: uncredited
[ed. Always a good reason to preserve natural habitats - who knows what other plants have undiscovered special properties? See also: A fungus that eats polyurethane (Yale Magazine).]
***
AI Overview:
Q. How long does it take Pestalotiopsis microspora to eat plastic?

Pestalotiopsis microspora can degrade plastic in a matter of weeks to months, with experiments showing significant degradation in as little as two weeks and over 60% breakdown in six weeks under ideal conditions. The specific timeframe varies, with some sources noting a few months for complete digestion in certain projects.

Wednesday, October 8, 2025

Ask Not Why You Would Work in Biology, But Rather: Why Wouldn't You?

There’s a lot of essays that are implicitly centered around convincing people to work in biology. One consistent theme amongst them is that they all focus on how irresistibly interesting the whole subject is. Isn’t it fascinating that our mitochondria are potentially an endosymbiotic phenomenon that occurred millions of years ago? Isn’t it fascinating that the regulation of your genome can change throughout your life? Isn’t it fascinating that slime molds can solve mazes without neurons? Come and learn more about this strange and curious field! (...)

But I’d like to offer a different take on the matter. Yes, biology is very interesting, yes, biology is very hard to do well. Yet, it remains the only field that could do something of the utmost importance: prevent a urinary catheter from being shunted inside you in the upcoming future.

Being catheterized is not a big deal. It happens to literally tens of millions of people every single year [ed. Really? Just checked and it's true, at least for millions.]. There is nothing even mildly unique about the whole experience. And, you know, it may be some matter of privilege that you ever feel a catheter inside of you; the financially marginalized will simply soil themselves or die a very painful death from sepsis.

But when you are catheterized for the first time—since, make no mistake, there is a very high chance you will be if you hope to die of old age—you’ll almost certainly feel a sense of intense wrongness that it happens at all. The whole procedure is a few moments of blunt violence, invasiveness, that feels completely out of place in an age where we can edit genomes and send probes beyond the solar system. There may be times where you’ll be able to protect yourself from the vile mixture of pain and discomfort via general anesthesia, but a fairly high number of people undergo (repeated!) catheterization awake and aware, often gathering a slew of infections along the way. This is made far worse by the fact that the most likely time you are catheterized will be during your twilight years, when your brain has turned to soup and you’ve forgotten who your parents are and who you are and what this painful tube is doing in your urethra. If you aren’t aware of how urinary catheters work, there is a deflated balloon at the end of it, blown up once the tube is inside you. This balloon keeps the whole system uncomfortably stuck inside your bladder. So, you can fill in the details on how much violence a brain-damaged person can do to themselves in a position like this by simply yanking out the foreign material.

Optimizing for not having a urinary catheter being placed into you is quite a lofty goal. Are there any alternatives on the table? Not practical ones. Diapers don’t work if the entire bladder itself is dysfunctional, suprapubic tubes require making a hole into the bladder (and can also be torn out), and nerve stimulation devices require expensive, invasive surgery. And none of them will be relied upon for routine cases, where catheterization is the fastest, most reliable solution that exists. You won’t get the gentle alternatives because you won’t be in a position to ask for them. You’ll be post-operative, or delirious, or comatose, or simply too old and confused to advocate for something better.

This is an uncomfortable subject to discuss. But I think it’s worth level-setting with one another. Urinary catheterization is but one of the dozens of little procedures that both contributes to the nauseating amount of ambient human suffering that repeats over and over and over again across the entire medical system and is reasonably common enough that it will likely be inflicted upon you one day. And if catheterization doesn’t seem so bad, there are a range of other awful things that, statistically speaking, a reader has a decent chance of undergoing at some point: feeding tubes, pap smears, mechanical ventilation, and repeated colonoscopies are all candidates.

Moreover, keep in mind that all these are simply the solutions to help prevent something far more grotesque and painful from occurring! Worse things exist—cancer, Alzheimer’s, Crohn’s—but those have been talked about to death and feel a great deal more abstract than the relatively routine, but barbaric, medical procedures that occur millions of times per year.

How could this not be your life goal to work on? To reduce how awful maladies, and the awful solutions to those maladies, are? What else is there really? Better prediction markets? What are we talking about?

To be fair, most people go through their first few decades of life not completely cognizant of how terrible modern medicine can be. But at some point you surely have to understand that you have been, thus far, lucky enough to have spent your entire life on the good side of medicine. In a very nice room, one in which every disease, condition, or malady had a very smart clinician on staff to immediately administer the cure. But one day, you’ll be shown glimpses of a far worse room, the bad side of medicine, ushered into an area of healthcare where nobody actually understands what is going on. (...)

I appreciate that many fields also demand this level of obedience to the ‘cause’, the same installation of ‘this is the only thing that matters!’. The energy, climate change, and artificial-intelligence sectors have similar do-or-die mission statements. But you know the main difference between those fields and biology?

In every other game, you can at least pretend the losers are going to be someone else, somewhere else in the world, happening to some poor schmuck who didn’t have your money or your foresight or your connections to do the Obviously Correct Thing. Instead, people hope to be a winner. A robot in my house to do my laundry, a plane that gets me from San Francisco to New York City in only an hour, an infinite movie generator so I can turn all my inner thoughts into reality. Wow! Capital-A Abundance beyond my wildest dreams! This is all well and good, but the unfortunate reality of the situation is that you will be a loser, an explicit loser, guaranteed to be a loser, in one specific game: biology. You will not escape being the butt of the joke here, because it will be you that betrays you, not the you who is reading this essay, but you, the you that cannot think, the you that has been shoddily shaped by the last several eons of evolution. Yes, others will also have their time underneath this harsh spotlight, but you will see your day in it too. (...)

Yes, things outside of biology are important too. Optimized supply chains matter, good marketing matters, and accurate securities risk assessments matter. Industries work together in weird ways. The people working on better short-form video and payroll startups and FAANGs are part of an economic engine that generates the immense taxable wealth required to fund the NIH grants. I know that the world runs on invisible glue.

Still, I can’t help but think that people’s priorities are enormously out of touch with what will actually matter most to their future selves. It feels as if people seem to have this mental model where medical progress simply happens. Like there’s some natural law of the universe that says “treatments improve by X% per year” and we’re all just passengers with a dumb grin on this predetermined trajectory. They see headlines about better FDA guidelines or CRISPR or immunotherapy or AI-accelerated protein folding and think, “Great, the authorities got it covered. By the time I need it, they’ll have figured it out.” But that’s not how any of this works! Nobody has it covered! Medical progress happens because specific people chose to work on specific problems instead of doing something else with their finite time on Earth.

by Abhishaike Mahajan, Owl Posting |  Read more:
Image: uncredited
[ed. Just can't comprehend the thinking recently for cutting essential NIH and NSF research funding (and others like NOAA). We used to lead the world.]

Tuesday, October 7, 2025

Do Coconuts Go With Oysters? For Saving the Delaware Shore, Yes.

For the past 50 years, Gary Berti has watched as a stretch of Delaware’s coastline slowly disappeared. Rising tides stripped the shoreline, leaving behind mud and a few tree stumps.

“Year after year, it gradually went from wild to deteriorated,” said Mr. Berti, whose parents moved to Angola by the Bay, a private community in Lewes, Del., in 1977, where he now lives with his wife, Debbie.

But in 2023, an extensive restoration effort converted a half-mile of shoreline from barren to verdant. A perimeter of logs and rolls of coconut husk held new sand in place. Lush beds of spartina, commonly known as cordgrass, grew, inviting wading birds and blue crabs.

Together, these elements have created a living shoreline, a nature-based way of stabilizing the coast, to absorb energy from the waves and protect the land from washing away. 

Mr. Berti had never seen the waterfront like this before. “The change has just been spectacular,” he said.


The practice of using natural materials to prevent erosion has been around for decades. But as sea levels rise and ever-intensifying storms pound coastlines, more places are building them.

The U.S. government counts at least 150 living shorelines nationwide, with East Coast states like Maryland, South Carolina and Florida remediating thousands of feet of tidal areas. Thanks to the efforts of the Delaware Living Shorelines Committee, a state-supported working group, Delaware has led the charge for years. (...)

“The living component is key,” said Alison Rogerson, an environmental scientist for the state’s natural resources department and chair of the living shoreline committee.

The natural materials, she said, provide a permeable buffer. As waves pass through, they leave the mud and sand they were carrying on the side of the barrier closer to the shore. This sediment builds up over time, creating a stable surface for plants. As the plants grow, their roots reinforce the barrier by holding everything in place. The goal is not necessarily to return the land to how it was before, but to create new, stronger habitat.

More traditional rigid structures, like concrete sea walls, steel bulkheads and piles of stone known as riprap, can provide instant protection but inevitably get weaker over time. Bulkheads can also backfire by eroding at the base or trapping floodwaters from storms. And because hardened structures are designed to deflect energy, not absorb it, they can actually worsen erosion in nearby areas.

Though living shorelines need initial care while they start to grow, scientists have found they can outperform rigid structures in storms and can repair themselves naturally. And as sea levels rise, living shorelines naturally inch inland with the coastline, providing continuous protection, whereas sea walls have to be rebuilt.

When the engineers leave after creating a gray rigid structure, like a sea wall, “that’s the strongest that structure is ever going to be, and at some point, it will fail,” said David Burdick, an associate professor of coastal ecology at the University of New Hampshire. “When we install living shorelines, it’s the weakest it’s going to be. And it will get stronger over time.”

And just as coastal areas come in all shapes and sizes, so do living shorelines. At other sites where the committee has supported projects, like Angola by the Bay and the Delaware Botanical Garden, brackish water meant that oysters wouldn’t grow. Instead, the private community opted for large timber logs, while the botanical garden built a unique crisscross fence from dead tree branches found on site. (...)

Sometimes, an area’s waves and wind are too powerful for a living shoreline to survive on its own, Mr. Janiec said. In these situations, a hybrid approach that combines hard structures can create a protected zone for plants and oysters to grow. And these don’t need to be traditional sea walls or riprap. Scientists can also use concrete reef structures and oyster castles to break up waves while allowing wildlife to thrive.

Gregg Moore, an associate professor of coastal restoration at the University of New Hampshire, said homeowners often choose rigid structures because they don’t act on erosion until the situation is urgent. When it comes to a person’s home, “you can’t blame somebody for wanting to put whatever they think is the fastest, most permanent solution possible,” he said. (...)

“Living shorelines are easier than people think, but they take a little time,” Mrs. Allread said. “You have to trust the process. Nature can do its own thing if you let it.”

by Sachi Kitajima Mulkey, NY Times |  Read more:
Images: Erin Schaff
[ed. Streambank and coastal restoration/rehabilitation using bioengineering techniques has been standard practice in Alaska for decades (in fact, my former gf wrote the book on it - literally). I myself received a grant to rehabilitate 12 state park public use sites on the Kenai River (see here and here) that were heavily damaged and eroding from constant foot traffic and boat wakes. Won a National Coastal America Award for innovation. As noted here, most people want a quick fix, but this is a better, long-term solution.]

Monday, October 6, 2025

America Is Losing the Robotics Race

The impossible

AI is reshaping both soft power and hard power around the globe. The United States, to its credit, has an early lead with the former. The leading LLMs are trained on Western text, global training and inference are still dominated by American companies, and we are ahead in the global race for market share of total tokens generated.

But as it stands, China is running away with the hard power part of AI – robotics. As AI’s incredible progress continues, we will see intelligence embedded in the physical world – culminating in generalist robots that perform a wide variety of tasks across applications, from manufacturing to services to defense. This will redefine every aspect of our society and reshape daily life. The country betting on that future is China, not the US.

In the 10 years since the CCP released its “Made in China 2025” strategy, Chinese companies have leapfrogged the rest of the world in robot density. They passed the United States in 2021, then the famously automated economies of Japan and Germany in 2024, and will soon eclipse Singapore and South Korea, their last remaining contenders. In short order China has become the world’s central robotics power. Entirely autonomous “dark factories,” like those of smartphone and automobile manufacturer Xiaomi, operate in complete darkness with no humans present.

China has successfully executed what we once thought impossible. Only ten years ago we scoffed that “China can copy, but they can’t innovate,” which we then revised to, “They can innovate, but they can’t make the upstream high-precision tooling.” Maybe we shouldn’t have been so comfortable, given how Chinese companies had outcompeted the rest of the world in industry after industry – from solar photovoltaics, where competition outside China has all but collapsed, to 5G, whose global deployment was a massive success for China’s national champion Huawei. The same pattern is playing out now with robotics. China has built a playbook to dominate strategic industries, and has used that playbook to become the robot superpower.

Homegrown Chinese companies now design and fabricate precision parts like harmonic reducers at competitive quality, cheaper prices, and – most importantly – colocated with their customers in manufacturing superclusters. This is the part that should scare the West the most. The colocation of so many robot toolmakers, assemblers, and customers in nodes like Shenzhen or Shanghai is how new combinatorial use cases are discovered, how manufacturing sequences are optimized around that new potential, and how firms develop advanced process knowledge that is completely opaque to the West. In a few years, it will be Chinese companies that are making parts that we cannot replicate – not just at low cost, but at any cost. There are parallels from the past. In the 1970s, Japan shocked the world with Toyota’s lean production methods, just-in-time inventory, and ethic of kaizen, continuous improvement to eliminate waste. Initially dismissed, by the 1980s Japanese automakers had overtaken American and European giants and reshaped the global auto industry. If we do not act to avert it, this will be another Toyota moment, but on a much greater scale.

If we don’t act soon, the United States will find it extremely difficult to catch up: we are approaching a period of compounding improvement that threatens to make China’s advantage virtually insurmountable. As with LLMs, training advanced robotics systems requires pretraining data on the scale of the internet, along with reinforcement learning to train generalist policies that can reason across a wide range of distortions in environment, perception, and task. As data from real-world deployment comes online, the country with more robots gains flywheel momentum; more deployment means more high-quality data which underwrites further deployment. The United States isn’t entirely out of the game, and our lead in AI software carries over: American companies like World Labs are at the forefront of building frontier models that could allow robots to reason about 3D space. But as these capabilities mature, it will be action in the real world – from routing cable harnesses through chassis pathways in electronics assembly to simply doing laundry – that will unlock the economic and strategic promise of generalist robotics.

Micron tolerance

To understand what China has achieved in the past few years, let’s talk about the harmonic reducer – a simple manufactured part that’s deceptively hard to make.

Harmonic reducers are a type of gear system that looks almost like a shoulder or an elbow in its socket. They transfer rotational energy from one end (usually at high speed, from an electric motor) into much slower rotation at high torque. They do this by pairing an inner and an outer gear ring with slightly different tooth counts, driven by a rotating oval-shaped piece on the inside. When spun by an electric motor, this oval piece creates a traveling wave that slowly drives the outer socket at a high gear ratio and high torque – suitable for many robotic applications, including humanoid ones.

The challenge in manufacturing these parts comes from how sensitive they are to minute distortions in tooling and operation. They must be made micron-level precise, at low cost, to do their jobs correctly. Even more precision is required when these sockets are chained together into systems with multiple degrees of freedom, like the multiple joints on a robotic finger, hand, or limb. Achieving the strength and dexterity of a human hand, at non-prohibitive cost, requires true manufacturing excellence.
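To make the tooth-count trick concrete, here is a minimal sketch of the reduction arithmetic behind a strain wave (harmonic) gear. The function name and tooth counts are illustrative, and the formula assumes the common configuration where the flexible inner ring (flexspline) is the output and the rigid outer ring (circular spline) is held fixed.

```python
def harmonic_reduction_ratio(flexspline_teeth, circular_spline_teeth):
    """Reduction ratio of a strain wave (harmonic) drive.

    The flexspline typically has two fewer teeth than the rigid
    circular spline. Each full revolution of the oval wave generator
    advances the flexspline by only that tooth difference, which is
    what yields a very large reduction in a small package. The
    negative sign indicates the output turns opposite the input.
    """
    tooth_difference = flexspline_teeth - circular_spline_teeth  # usually -2
    return flexspline_teeth / tooth_difference

# A typical configuration: 200-tooth flexspline, 202-tooth circular spline
ratio = harmonic_reduction_ratio(200, 202)
print(ratio)  # -100.0, i.e. a 100:1 reduction with reversed rotation
```

The arithmetic also hints at why tolerances matter: with only a two-tooth difference, both rings must mesh within microns or the wave motion binds.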

The precision required to manufacture harmonic reducers is well beyond the reach of most machine shops. Production has historically been dominated by highly specialized German and Japanese manufacturers: the Japanese company Sumitomo and the German-Japanese firm Harmonic Drive are the two dominant players in the space, together accounting for 95 percent of global market share. But in the last few years they’ve faced intensifying competition from new Chinese entrants. A firm called Green Harmonic, based in the city of Suzhou near Shanghai, offers harmonic reducers with performance comparable to products from Sumitomo and Harmonic Drive at price points roughly 30 to 50 percent lower. Green Harmonic now has more than 30 percent market share within China, and will soon look abroad. In the coming years, we can expect companies like Harmonic Drive to face their “Toyota moment,” with major strategic implications: there are countless cases of Chinese firms translating cheap, reliable manufacturing into global market share and eventually driving competitors out of business.

Harmonic reducers are just one illustrative part of the robotics hardware stack. Creating a fully functioning robot requires a huge variety of other small components – precision bearings that enable smooth joint rotation, custom printed circuit boards that route power and signals between subsystems, specialized connectors that maintain reliable communication in high-vibration environments, miniature encoders that provide millimeter-accurate position feedback, force-sensitive resistors embedded in fingertips for delicate manipulation, inertial measurement units that track orientation changes down to fractions of a degree, servo motors with sophisticated current control algorithms, shielding to prevent electromagnetic interference between tightly packed electronics, thermal interface materials that dissipate heat from high-performance processors, and countless fasteners, gaskets, and protective housings engineered to withstand the mechanical stresses of real-world operation. Each component must be carefully selected not just for its individual performance characteristics, but for how it integrates with the broader system: a single point of failure can render a sophisticated robot completely inoperable.

Chinese companies, from Siasun and Estun in controllers to AVIC Electromechanical in torque sensors, are rapidly entering and starting to win the market for every part of that system. Together, these firms and countless others constitute a sophisticated and mature ecosystem that has allowed Chinese firms to locally source practically the entire robot – not only from within China, but within a megacluster like Shenzhen.

We’re at the point today where Chinese domestic manufacturers and their suppliers contribute all of the parts necessary to bring robotic dreams to life, and iteratively learn from one another. The Chinese startup Unitree has captured the global imagination with highly advanced robots cheaper than anything else offered before – agile and LLM-integrated robot dogs for as little as $1,600, a humanoid for $5,900. Those costs will keep coming down; the robot dogs will keep getting stronger and more capable.

by Martin Casado and Anne Neuberger, A16Z |  Read more:
Image: uncredited
[ed. This is from Andreessen Horowitz, so little surprise they would view government subsidies (too little) and overregulation (too much) as major contributing factors. But politics, policy, and a lack of strategic planning and funding priorities are probably the more important constraints. I mean, we currently have a president and congress that pay lip service to reshoring American manufacturing, but have no idea which industries are most important, or even how infrastructure improvements and corporate incentives (and disincentives) could help. All the while redirecting trillions of dollars into the military, homeland security, and immigration enforcement. No wonder China is pulling away on all fronts. They actually have a clear idea of where they want to go.]