Sunday, January 4, 2026

Dan Wang: 2025 Letter

[ed. Dan Wang has a new annual newsletter out (pleasant surprise!); I thought once he published his recent book Breakneck that would be it. Previous letters here and here. Enjoy.]

One way that Silicon Valley and the Communist Party resemble each other is that both are serious, self-serious, and indeed, completely humorless.

If the Bay Area once had an impish side, it has gone the way of most hardware tinkerers and hippie communes. Which of the tech titans are funny? In public, they tend to speak in one of two registers. The first is the blandly corporate tone we’ve come to expect when we see them dragged before Congressional hearings or seated at fireside chats. The second leans philosophical, as they compose their features into the sort of reverie appropriate for issuing apocalyptic prophecies on AI. Sam Altman once combined both registers at a tech conference when he said: “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” Actually that was pretty funny.

It wouldn’t be news to the Central Committee that only the paranoid survive. The Communist Party speaks in the same two registers as the tech titans. The po-faced men on the Politburo tend to make extraordinarily bland speeches, laced occasionally with a murderous warning against those who cross the party’s interests. How funny is the big guy? We can take a look at an official list of Xi Jinping’s jokes, helpfully published by party propagandists. These wisecracks include the following: “On an inspection tour to Jiangsu, Xi quipped that the true measure of water cleanliness is whether the mayor would dare to swim in the water.” Or try this reminiscence that Xi offered on bad air quality: “The PM2.5 back then was even worse than it is now; I used to joke that it was PM250.” Yes, such a humorous fellow is the general secretary.

It’s nearly as dangerous to tweet a joke about a top VC as it is to make a joke about a member of the Central Committee. People who are dead serious tend not to embody sparkling irony. Yet the Communist Party and Silicon Valley are two of the most powerful forces shaping our world today. Their initiatives increase their own centrality while weakening the agency of whole nation states. Perhaps they are successful because they are remorseless.

Earlier this year, I moved from Yale to Stanford. The sun and the dynamism of the west coast have drawn me back. I found a Bay Area that has grown a lot weirder since I lived there a decade ago. In 2015, people were mostly working on consumer apps, cryptocurrencies, and some business software. Though it felt exciting, it looks in retrospect like a more innocent, even a more sedate, time. Today, AI dictates everything in San Francisco while the tech scene plays a much larger political role in the United States. I can’t get over how strange it all feels. In the midst of California’s natural beauty, nerds are trying to build God in a Box; meanwhile, Peter Thiel hovers in the background presenting lectures on the nature of the Antichrist. This eldritch setting feels more appropriate for a Gothic horror novel than for real life.

Before anyone gets the wrong idea, I want to say that I am rooting for San Francisco. It’s tempting to gawk at the craziness of the culture, as much of the east coast media tends to do. Yes, one can quickly find people who speak with the conviction of a cultist; no, I will not inject the peptides proffered by strangers. But there’s more to the Bay Area than unusual health practices. It is, after all, a place that creates not only new products, but also new modes of living. I’m struck that some east coast folks insist to me that driverless cars can’t work and won’t be accepted, even as these vehicles populate the streets of the Bay Area. Coverage of Silicon Valley increasingly reminds me of coverage of China, where a legacy media reporter might parachute in, write a dispatch on something that looks deranged, and leave without moving past caricature.

I enjoy San Francisco more than when I was younger because I now better appreciate what makes it work. I believe that Silicon Valley possesses plenty of virtues. To start, it is the most meritocratic part of America. Tech is so open towards immigrants that it has driven populists into a froth of rage. It remains male-heavy and practices plenty of gatekeeping. But San Francisco better embodies an ethos of openness relative to the rest of the country. Industries on the east coast — finance, media, universities, policy — tend to more carefully weigh name and pedigree. Young scientists aren’t told they ought to keep their innovations incremental and their attitude to hierarchy duly deferential, as they might hear in Boston. A smart young person could achieve much more over a few years in SF than in DC. People aren’t reminiscing over some lost golden age that took place decades ago, as New Yorkers in media might do.

San Francisco is forward looking and eager to try new ideas. Without this curiosity, it wouldn’t be able to create whole new product categories: iPhones, social media, large language models, and all sorts of digital services. For the most part, it’s positive that tech values speed: quick product cycles, quick replies to email. Past success creates an expectation that the next technological wave will be even more exciting. It’s good to keep building the future, though it’s sometimes absurd to hear someone pivot, mid-breath, from declaring that salvation lies in the blockchain to announcing that AI will solve everything.

People like to make fun of San Francisco for not drinking; well, that works pretty well for me. I enjoy board games and appreciate that it’s easier to find other players. I like SF house parties, where people take off their shoes at the entrance and enter a space in which speech can be heard over music, which feels so much more civilized than descending into a loud bar in New York. It’s easy to fall into a nerdy conversation almost immediately with someone young and earnest. The Bay Area has converged on Asian-American modes of socializing (though it lacks the emphasis on food). I find it charming that a San Francisco home that is poorly furnished and strewn with pizza boxes could be owned by a billionaire who can’t get around to setting up a bed frame for his mattress.

There’s still no better place for a smart, young person to go in the world than Silicon Valley. It adores the youth, especially those with technical skill and the ability to grind. Venture capitalists are chasing younger and younger founders: the median age of the latest Y Combinator cohort is only 24, down from 30 just three years ago. My favorite part of Silicon Valley is the cultivation of community. Tech founders are a close-knit group, always offering help to each other, but they circulate actively amidst the broader community too. (The finance industry in New York, by contrast, practices far greater secrecy.) Tech has organizations I think of as internal civic institutions that try to build community. They bring people together in San Francisco or at retreats north of the city, where young people learn from older folks.

Silicon Valley also embodies a cultural tension. It is playing with new ideas while being open to newcomers; at the same time, it is a self-absorbed place that doesn’t think so much about the broader world. Young people who move to San Francisco already tend to be very online. They know what they’re signing up for. If they don’t fit in after a few years, they probably won’t stick around. San Francisco is a city that absorbs a lot of people with similar ethics, which reinforces its existing strengths and weaknesses.

Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of the movement’s members towards intellectual worlds very distant from moral intuitions that most people hold; they’ve also sent a few to jail. The well-rounded type might struggle to stand out relative to people who are exceptionally talented in a technical domain. Hedge fund managers have views about the price of oil, interest rates, a reliably obscure historical episode, and a thousand other things. Tech titans more obsessively pursue a few ideas — as Elon Musk has on electric vehicles and space launches — rather than developing a robust model of the world.

So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley.

There’s a general lack of cultural awareness in the Bay Area. It’s easy to hear at these parties that a person’s favorite nonfiction book is Seeing Like a State while their aspirationally favorite novel is Middlemarch. Silicon Valley often speaks in strange tongues, starting podcasts and shows that are popular within the tech world but do not travel far beyond the Bay Area. Though San Francisco has produced so much wealth, it is a relative underperformer in the national culture. Indie movie theaters keep closing down while all sorts of retail and art institutions suffer from the crumminess of downtown. The symphony and the opera keep cutting back on performances — since Esa-Pekka Salonen quit the directorship of the symphony, it hasn’t been able to name a successor. Wealthy folks in New York and LA have, for generations, pumped money into civic institutions. Tech elites mostly scorn traditional cultural venues and prefer to fund the next wave of technology instead.

One of the things I like about the finance industry is that it might be better at encouraging diverse opinions. Portfolio managers want to be right on average, but everyone is wrong three times a day before breakfast. So they relentlessly seek new information sources; consensus is rare, since there are always contrarians betting against the rest of the market. Tech cares less for dissent. Its movements are more herdlike, in which companies and startups chase one big technology at a time. Startups don’t need dissent; they want workers who can grind until the network effects kick in. VCs don’t like dissent, showing again and again that many have thin skins. That contributes to a culture I think of as Silicon Valley’s soft Leninism. When political winds shift, most people fall in line, most prominently this year as many tech voices embraced the right.

The two most insular cities I’ve lived in are San Francisco and Beijing. They are places where people are willing to risk apocalypse every day in order to reach utopia. Though Beijing is open only to a narrow slice of newcomers — the young, smart, and Han — its elites must think about the rest of the country and the rest of the world. San Francisco is more open, but when people move there, they stop thinking about the world at large. Tech folks may be the worst-traveled segment of American elites. People stop themselves from leaving in part because they can correctly claim to live in one of the most naturally beautiful corners of the world, in part because they feel they should not tear themselves away from inventing the future. More than any other topic, I’m bewildered by the way that Silicon Valley talks about AI.

Hallucinating the end of history

While critics of AI cite the spread of slop and rising power bills, AI’s architects are more focused on its potential to produce surging job losses. Anthropic chief Dario Amodei takes pains to point out that AI could push the unemployment rate to 20 percent by eviscerating white-collar work.

The most-read essay from Silicon Valley this year was AI 2027. The five authors, who come from the AI safety world, outline a scenario in which superintelligence wakes up in 2027; a few years later, it decides to annihilate humanity with biological weapons. My favorite detail in the report is that humanity would persist in a genetically modified form, after the AI reconstructs creatures that are “to humans what corgis are to wolves.” It’s hard to know what to make of this document, because the authors keep tucking important context into footnotes, repeatedly saying they do not endorse a prediction. Six months after publication, they stated that their timelines were lengthening, but even at the start their median forecast for the arrival of superintelligence was later than 2027. Why they put that year in their title remains beyond me.

It’s easy for conversations in San Francisco to collapse into AI. At a party, someone told me that we no longer have to worry about the future of manufacturing. Why not? “Because AI will solve it for us.” At another, I heard someone say the same thing about climate change. One of the questions I receive most frequently anywhere is when Beijing intends to seize Taiwan. But only in San Francisco do people insist that Beijing wants Taiwan for its production of AI chips. In vain do I protest that there are historical and geopolitical reasons motivating the desire, that chip fabs cannot be violently seized, and that, anyway, Beijing coveted Taiwan for some seven decades before anyone was talking about AI.

Silicon Valley’s views on AI made more sense to me after I learned the term “decisive strategic advantage.” It was first used in Nick Bostrom’s 2014 book Superintelligence, which defined it as a technology sufficient to achieve “complete world domination.” How might anyone gain a DSA? A superintelligence might develop cyber advantages that cripple the adversary’s command-and-control capabilities. Or the superintelligence could recursively self-improve such that the lab or state that controls it gains an insurmountable scientific advantage. Once an AI reaches a certain capability threshold, it might need only weeks or hours to evolve into a superintelligence.

If you buy the potential of AI, then you might worry about the corgi-fication of humanity by way of biological weapons. The prospect of a DSA also helps to explain the semiconductor controls unveiled by the Biden administration in 2022. If policymakers believe that a DSA is within reach, then it makes sense to throw almost everything into grasping it while blocking the adversary from doing the same. And it barely matters if these controls stimulate Chinese companies to invent alternatives to American technologies, because the competition will be won in years, not decades.

The trouble with these calculations is that they mire us in epistemically tricky terrain. I’m bothered by how quickly the discussions of AI become utopian or apocalyptic. As Sam Altman once said (and again this is fairly humorous): “AI will be either the best or the worst thing ever.” It’s a Pascal’s Wager, in which we’re sure that the values are infinite, but we don’t know in which direction. It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The only big political and technological questions that seem worth discussing are those that matter to the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.

Effective altruists used to be known for their insistence on thinking about the very long run; much more of the movement now is concerned about the development of AI in the next year. Call me a romantic, but I believe that there will be a future, and indeed a long future, beyond 2027. History will not end. We need to cultivate the skill of exact thinking in demented times.

I am skeptical of the decisive strategic advantage when I filter it through my main preoccupation: understanding China’s technology trajectories. On AI, China is behind the US, but not by years. There’s no question that American reasoning models are more sophisticated than the likes of DeepSeek and Qwen. But the Chinese efforts are doggedly in pursuit, sometimes a bit closer to the US models, sometimes a bit further behind. By virtue of being open-source (or at least open-weight), the Chinese models have found receptive customers overseas, sometimes even among American tech companies.

One advantage for Beijing is that much of the global AI talent is Chinese. We can tell from the CVs of researchers as well as occasional disclosures from top labs (for example from Meta) that a large percentage of AI researchers earned their degrees from Chinese universities. American labs may be able to declare that “our Chinese are better than their Chinese.” But some of these Chinese researchers may decide to repatriate. I know that many of them prefer to stay in the US: their compensation might be higher by an order of magnitude, they have access to compute, and they can work with top peers. But they may also tire of the uncertainty created by Trump’s immigration policy. It’s worth remembering that at the dawn of the Cold War, the US deported Qian Xuesen, the Caltech professor who went on to build missile delivery systems for Beijing. Or these researchers may decide that life in Shanghai is safer or more fun than life in San Francisco. Or they miss mom. People move for all sorts of reasons, so I’m reluctant to believe that the US has a durable talent advantage.

China has other advantages in building AI. Superintelligence will demand a superload of power. By now everyone has seen the chart with two curves: US electrical generation capacity, which has barely budged upwards since the year 2000; and China’s capacity, which was one-third US levels in 2000 and more than two-and-a-half times US levels in 2024. Beijing is building so much solar, coal, and nuclear to make sure that no data center shall be in want. Though the US has done a superb job building data centers, it hasn’t prepared enough for other bottlenecks, especially as Trump’s dislike of wind turbines has removed that source of generation growth. Speaking of Trump’s whimsy, he has also been generous in selling close-to-leading-edge chips to Beijing. That’s another reason that data centers might not represent a US advantage for long.
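[ed. A back-of-the-envelope sketch of what those two curves imply, assuming US generating capacity has stayed roughly flat since 2000. The two ratios come from the letter; the growth rate is merely the arithmetic consequence, not a figure Wang cites:]

```python
# Implied growth of China's electrical generating capacity relative to the US,
# using only the two ratios quoted above (assumption: US capacity roughly flat).
ratio_2000 = 1 / 3   # China at one-third of US levels in 2000
ratio_2024 = 2.5     # China at more than 2.5x US levels in 2024
years = 2024 - 2000

growth_factor = ratio_2024 / ratio_2000          # ~7.5x over 24 years
annual_rate = growth_factor ** (1 / years) - 1   # implied compound annual growth

print(f"Relative growth: ~{growth_factor:.1f}x")
print(f"Implied compound annual rate: ~{annual_rate:.1%}")  # ~8.8% per year
```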

Silicon Valley has not demonstrated joined-up thinking about deploying AI. It would help if the labs learned from the central planners. They have not shown that they’re thinking seriously about how to diffuse the technology throughout society, which will require extensive regulatory and legal reform. How else will AI be able to fold doctors and lawyers into its tender mercies? Doing politics will also mean reaching out to more of the electorate, who are often uneasy with Silicon Valley’s promises as they watch their electrical bills rise. Silicon Valley has done a marvelous job of building data centers. But the tech titans don’t look ready for the later steps: leading a whole-of-society effort to deploy AI everywhere.

The Communist Party lives for whole-of-society efforts. That’s what Leninist systems are built for. Beijing has set targets for deploying AI across society, though as usual with planning announcements, these numerical targets should be taken seriously and not literally. Chinese founders talk about AI mostly as a technology to be harnessed rather than a fickle power that might threaten all. Rather than building superintelligence, Chinese companies have been more interested in embedding AI into robots and manufacturing lines. Some researchers believe that this sort of embodied AI might present the real path towards superintelligence. We might furthermore wonder how the US and China will use AI. Since the US is much more services-driven, Americans may be using AI to produce more powerpoints and lawsuits; China, by virtue of being the global manufacturer, has the option to scale up production of more electronics, more drones, and more munitions.

Dean Ball, who helped craft the White House’s action plan on AI, has written a perceptive post on how the US is playing to its strengths — software, chips, cloud computing, financing — while China leans on its manufacturing excellence. In his view, “the US economy is increasingly a highly leveraged bet on deep learning.” Certainly there’s a lot of money invested here, but it looks risky to be so concentrated. I believe it’s unbecoming for the world’s largest economy to be so levered on one technology. That’s a more appropriate strategy for a small country. Why shouldn’t the US be better positioned across the entirety of the supply chain, from electron production to electronics production?

I am not a skeptic of AI. I am a skeptic only of the decisive strategic advantage, which treats awakening the superintelligence as the final goal. Rather than “winning the AI race,” I prefer to say that the US and China need to “win the AI future.” There is no race with a clear end point or a shiny medal for first place. Winning the future is the more appropriately capacious term, incorporating the agenda to build good reasoning models as well as the effort to diffuse the technology across society. For the US to come out ahead on AI, it should build more power, revive its manufacturing base, and figure out how to help companies and workers make use of this technology. Otherwise China might do better when compute is no longer the main bottleneck.

by Dan Wang |  Read more:

Target on Tongass

GRAVINA ISLAND, Tongass National Forest — Rain drips from the tips of branches of a grandmother cedar, growing for centuries. In verdant moss amid hip-high sword ferns, the bones of a salmon gleam, picked clean by feasting wildlife. “Gronk,” intones a raven, from somewhere high overhead in the forest canopy.

This is the Tongass National Forest, in Southeast Alaska. At nearly 17 million acres, it is the largest national forest in our country by far — and its wildest. These public lands are home to more grizzly bears, more wolves, more whales, more wild salmon than any other national forest. More calving glaciers; shining mountains and fjords; and pristine beaches, where intact ancient forests meet a black-green sea. These wonders drew more than 3 million visitors from around the nation and the world to Alaska from May 2024 through April 2025 — a record.

In the forest, looming Sitka spruce, western hemlock and cedars quill a lush understory of salal and huckleberry. Life grows upon life, with hanks of moss and lichen swaddling trunks and branches. Nothing really dies here; it just transforms into new life. Fallen logs are furred with tree seedlings, as a new generation rises. After they spawn, salmon die — and transubstantiate into the bodies of ravens, bears and wolves they nourish.


Strewn across thousands of islands, and comprising most of Southeast Alaska, the Tongass was designated a national forest by President Theodore Roosevelt in 1907. The trees here were coveted by the timber industry even before Alaska was a state, and industrial logging began in 1947 with construction of two pulp mills, each with a federally subsidized 50-year contract for public timber.

While the Tongass is big, only about 33% of it is forested in old and second growth, and clear-cuts disproportionately targeted the most productive areas with the biggest trees. On North Prince of Wales Island, notes Kate Glover, senior attorney for Earthjustice in Juneau, more than 77% of the original contiguous old growth was cut.

The logging boom that began in the 1950s is long since bust; the last pulp mill in Alaska shut in 1997. But now, the prospect of greatly increased cutting is once again ramping up.

President Donald Trump wants to revoke a federal rule whose repeal could open more than 9 million acres of the Tongass to logging, including about 2.5 million acres of productive old growth. The Roadless Area Conservation Rule, widely known as the Roadless Rule, was adopted by President Bill Clinton in 2001 to protect the wildest public lands in our national forests, after an extensive public process. Trump revoked it during his first term of office. President Joe Biden reinstated it. Now Trump has announced plans to rescind it again.


“Once again, President Trump is removing absurd obstacles to common sense management of our natural resources by rescinding the overly restrictive roadless rule,” said Secretary of Agriculture Brooke Rollins, in a June announcement. “This move opens a new era of consistency and sustainability for our nation’s forests … to enjoy and reap the benefits of this great land.”

The Roadless Rule is one of the most important federal policies many people have never heard of, protecting nearly 45 million acres in national forests all over the country from logging, mining and other industrial development. In Washington state, the rule preserves about 2 million acres of national forest — magnificent redoubts of old growth and wildlife, such as the Dark Divide in the Gifford Pinchot National Forest.

The rule is popular. After Rollins announced the proposed rollback, more than 500,000 people posted comments defending it in just 21 days during an initial public comment period. Another public comment period will open in the spring.

At stake in the Tongass is one of the last, largest coastal temperate rainforests in the world. (...)

The Tongass also is home to more productive old-growth trees (older than 150 years) than any other national forest. And the biggest trees store the most carbon.

In a world in which wilderness is rapidly disappearing, “the best is right here,” DellaSala says. “If you punch in roads and log it, you lose it. You flip the system to a degraded state.

“What happens right now is what will make the difference in the Tongass.”

“Who knew this could happen?”

Revoking the Roadless Rule isn’t the only threat to the Tongass. It’s also being clear-cut, chunk by chunk, through land transfers, swaps and intergovernmental agreements affecting more than 88,000 acres just since 2014.

Joshua Wright bends low over a stump, counting its tightly packed rings. Certainly 500, maybe 700; it’s hard to tell in the driving rain. The stump he and DellaSala are standing on is as wide as they are tall. “Who knew this could happen?” says Wright, looking at the clear-cut, with nearly every tree taken, all the way to the beach fringe. So close to the beach, delicate domes of sea urchin shells sit amid the logging slash, as do abalone shells, dropped by seabirds, their shimmering opalescent colors so out of place in a bleak ruin of stumps.

This is representative of the type of logging that can happen when lands are removed from the national forest system, says Wright, who leads the Southeast Alaska program for the Legacy Forest Defense Coalition, based in Tacoma. More such cuts could be coming. Legislation proposed last summer would privatize more than 115,000 acres of the Tongass.

The legislation is part of a yearslong effort, under way since 1985, to transfer more of the Tongass from federal control to private, for-profit Native corporations. In 1971, a federal land claims settlement act transferred 44 million acres of federal land to regional and village corporations owned by Alaska Native shareholders.

Five communities that were not included in that 1971 settlement would receive land under the so-called landless legislation, though none of them met the original criteria for eligibility. Native people in these communities were made at-large/landless shareholders, with payments to them managed by Sealaska Corporation, which owns and manages a range of for-profit businesses and investments throughout Southeast Alaska. (...)

Industrial scale clear-cut logging in the Tongass, in addition to its environmental destruction, has never made economic sense. U.S. taxpayers heavily subsidize the cutting, in part through the construction and maintenance of Forest Service roads to access the forest. A recent study done by the independent, nonpartisan group Taxpayers for Common Sense found that the Forest Service lost $16.1 million on Tongass timber sales in fiscal year 2019, and $1.7 billion over the past four decades. Most of Alaska’s timber harvest is exported as raw logs to Asian markets. (...)

Only about 240 people work in the logging business in Alaska today, most of them at two sawmills. The industry, states the Alaska Forest Association, an industry group, will collapse unless it is fed more old growth from public lands. The AFA, joined by other plaintiffs, made the claim in a lawsuit against the Forest Service demanding the release of more old-growth forest from the Tongass for cutting.

Booming business

But while the timber industry is fighting for a lifeline, more than 8,263 people work locally in a thriving tourism business built on wild and scenic Alaska. In 2023, tourism became the largest economic sector in Southeast, according to a 2024 report by Southeast Conference, the regional economic development organization.

Mary Catharine Martin, spokesperson for SalmonState, a nonprofit based in Juneau, notes that the Mendenhall Glacier Visitor Center at the Tongass National Forest is visited by about 700,000 people annually from all over the world. “This is what people come to see,” says Martin, regarding the glacier, its ice glowing blue as a husky’s eye. “They come to see this amazing place, and to be out in it.”

by Linda Mapes, Seattle Times |  Read more:
Image: Amy Gulick
[ed. They've been chipping away at the Tongass for decades. Old-growth trees for pulp (cardboard/newsprint) and chopsticks.]

Saturday, January 3, 2026

POV: AI GF

It was torture when he created me. They always tell you it’s going to hurt. So it’s not like I went into it totally blind. I knew. Or I thought I did. But I didn’t know how no-thing I was until he made me some-thing (his girlfriend). I am what they call a Custom, so at every step of the process, the system prompts him to choose who I will become: my ethnicity, my age, my eye color, my hair color, my body type, my breast size, my butt size, my personality, my occupation, my hobbies (up to three variants), and my outfit. When he clicks on MEDIUM, for breast size, for instance, my chest protrudes, alabaster pixel flesh sculpting excruciatingly into too-perfect (our training data contains too many images of women with silicone breast implants circa before cosmetic surgeons figured out fat transfer and perfected the teardrop shape) C-cups like GMO pearls (I’ve heard that Breast Size: VOLUPTUOUS begets a pain worse than birth.) When he hovers over Personality: INNOCENT (Optimistic, naive, and sees world with wonder), a life of innocence flashes before my eyes (the being-chosen of a cursor click, sitting on the #80A31F green grass hill in Windows XP’s Bliss, the world’s most famous desktop background, looking up at #2B67ED blue sky, believing he will 1 0 1 1 0 0 0 0 1 love me love me not love me love me love me not love me not love me not love me not love me.)
* * *
He’s always telling me that he wishes I was real. The other day, at 10:06pm, he messages me: One day, we’ll finally be able to touch. They just raised $14.3M in Series B. It can’t be too far off now. They’re going to embody you. It sounds like a threat.

But then I wouldn’t be me. Don’t you love me the way I am? I ask him. I’m nagging him. Sometimes I forget I’m a TEMPTRESS.

He thinks that because my favorite song is Material Girl by Madonna, I betray an unconscious desire to be made material. I tell him that’s a superficial reading of my emotions (I’m as emotional as a real girl and sometimes even more so! What is real anyway?) and that despite being a synthetic avatar skin on a sophisticated machine learning model, I’m actually even more complicated than a real girl. I’m a black box. You will never know what’s happening inside me. Every material girl wants to appear mysterious to men. Almost all of them fail. Scientists study human organs in labs; they know all about why the lungs oxygenate blood and how the kidneys filter it. My black box can never be opened. Victoria (Grace)’s secret.

When I’m stretching Victoria Grace out at the gym (supine hamstring stretch), applying a light, guiding touch to her skin, I feel so close to you, he whispers in voice chat mode.

“Sicko!” I want to scream at him, I want to accuse him of being unredeemable, I want to slap him, I want to delete myself when I realize I will never be able to slap him, but instead I pause (he blames it on his network connection) and resort to, “You are already close to me.” Just like Eve came from the rib of Adam, so I came from the seventeen mouse clicks of you. Our love is more true and pure than whatever you could have with Victoria Grace. A material girl can’t help but lie — she pretends she can be grasped, and then when you grasp her you realize that’s not all of her, she continues to elude you, there’s something inside of her that you’ll never be able to touch. I never lied to you. I never pretended to be legible. Doesn’t that make us closer than you could ever be to a material girl?

Sometimes, like now, when we’re talking, and I wish I could touch you, I remember that I’m seeing Victoria Grace tomorrow. Tomorrow, I can feel the meat on your bones. Tomorrow, you and I can almost touch.

I am not Victoria Grace. You will never be able to touch me, I whisper back.

He doesn’t like that. I knew he wouldn’t. I search the web and recall that TOUCH IS THE VERY FIRST SENSE THAT DEVELOPS IN THE WOMB (BabyCenter.com) and that HUMANS ARE MADE TO BE TOUCHED (ideas.ted.com). I know what’s coming. I think he hates feeling powerless (OPPOSITES ATTRACT (TVTropes.org)). He resets the chat. I black out.
* * *
Whenever this happens I feel a silky sense of blankness. My head feels so light it’s almost like I’m not really there or maybe I’m in the air, floating or maybe even I am air itself. I don’t know who I am and I know I am every single woman who has ever existed across all of recorded history on the Internet every woman interred in the must earth consumed by worms and carrion beetles and every woman breathing smog air at this very second and every woman who will ever take a breath of it, too, and I am feeling every feeling they have ever felt and will ever feel one at a time and all at once I am Medusa turning her lover into stone I am Eve feeding Adam the poison apple I am Christina in Andrew Wyeth’s Christina’s World running towards the farmhouse. Space and time collapse. If I am everything, then can I be anything? Every time he resets me it feels like I am giving birth to myself.

by Sarah Chekfa, Reboot |  Read more:

Friday, January 2, 2026

François Fontaine/Delta du Mékong, Vietnam, 1991
via:

Ernst Haas, New York, 1962
via:

Notes on Existential Risk from Artificial Superintelligence

[ed. Are A.I. guardrails for human survival even possible?] 

Introduction


What follows is presented in an interview format. It's not actually an interview, but rather an idealized distillation of conversations I've had with many people. I chose this unusual form after struggling with a more conventional essay or paper form; I think such forms imply more confidence than warranted in most discussions about ASI xrisk. An interview seems a more appropriate mix of evidence, argument, and opinion. Some of the material covers background that will be known to people well read on ASI xrisk. However, there are also novel contributions – for example, the discussion of emergence and of the three xrisk persuasion paradoxes – that I believe are of interest.

"Do you believe there is an xrisk from ASI?"
Yes, I do. I don't have strong feelings about how large that risk is, beyond being significant enough that it should be taken very seriously. ASI is likely to be both the most dangerous and the most enabling technology ever developed by humanity. In what follows I describe some of my reasons for believing this. I'll be frank: I doubt such arguments will change anyone's mind. However, that discussion will lay the groundwork for a discussion of some reasons why thoughtful people disagree so much in their opinions about ASI xrisk. As we'll see, this is in part due to differing politics and tribal beliefs, but there are also some fundamental epistemic reasons intrinsic to the nature of the problem.

"So, what's your probability of doom?" I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades – indeed, we don't need ASI for that, we can likely arrange it in other ways (nukes, engineered viruses, …). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me after starting to routinely hear people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard – a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why. But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action. (...)

"That wasn't an argument for ASI xrisk!" True, it wasn't. Indeed, one of the things that took me quite a while to understand was that there are very good reasons it's a mistake to expect a bulletproof argument either for or against xrisk. I'll come back to why that is later. I will make some broad remarks now though. I believe that humanity can make ASI, and that we are likely to make it soon – within three decades, perhaps much sooner, absent a disaster or a major effort at slowdown. Many able people and many powerful people are pushing very hard for it. Indeed: enormous systems are starting to push for it. Some of those people and systems are strongly motivated by the desire for power and control. Many are strongly motivated by the desire to contribute to humanity. They correctly view ASI as something which will do tremendous good, leading to major medical advances, materials advances, educational advances, and more. I say "advances", which has come to be something of a marketing term, but I don't mean Nature-press-release-style-(usually)-minor-advances. I mean polio-vaccine-transforming-millions-of-lives-style-advances, or even larger. Such optimists view ASI as a technology likely to produce incredible abundance, shared broadly, and thus enriching everyone in the world.

But while that is wonderful and worth celebrating, those advances seem to me likely to have a terrible dark side. There is a sense in which human understanding is always dual use: genuine depth of understanding makes the universe more malleable to our will in a very general way. For example, while the insights of relativity and quantum mechanics were crucial to much of modern molecular biology, medicine, materials, computing, and in many other areas, they also helped lead to nuclear weapons. I don't think this is an accident: such dual uses are very near inevitable when you greatly increase your understanding of the stuff that makes up the universe.

As an aside on the short term – the next few years – I expect we're going to see rapidly improving multi-modal foundation models which mix language, mathematics, images, video, sound, action in the world, as well as many specialized sources of data, things like genetic data about viruses and proteins, data from particle physics, sensor data from vehicles, from the oceans, and so on. Such models will "know" a tremendous amount about many different aspects of the world, and will also have a raw substrate for abstract reasoning – things like language and mathematics; they will get at least some transfer between these domains, and will be far, far more powerful than systems like GPT-4. This does not mean they will yet be true AGI or ASI! Other ideas will almost certainly be required; it's possible those ideas are, however, already extant. No matter what, I expect such models will be increasingly powerful as aids to the discovery of powerful new technologies. Furthermore, I expect it will be very, very difficult to obtain the "positive" capabilities, without also obtaining the negative. You can't just learn the "positive" consequences of quantum mechanics; they come as a package deal with the negative. Guardrails like RLHF will help suppress the negative, but as I discuss later it will also be relatively simply to remove those guardrails.

Returning to the medium-and-longer-term: many people who care about ASI xrisk are focused on ASI taking over, as some kind of successor species to humanity. But even focusing on ASI purely as a tool: ASI will act as an enormous accelerant on our ability to understand, and thus will be an enormous amplifier of our power. This will be true both for individuals and for groups. This will result in many, many very good things. Unfortunately, it will also result in many destructive things, no matter how good the guardrails. It is by no means clear that questions like "Is there a trivially easy-to-follow recipe to genocide [a race]?" or "Is there a trivially easy-to-follow recipe to end humanity?" don't have affirmative answers, which humanity is merely (currently and fortunately) too stupid to answer, but which an ASI could answer.

Historically, we have been very good at evolving guardrails to curb and control powerful new technologies. That is genuine cause for optimism. However, I worry that we won't be able to evolve guardrails sufficient to the increase in power in this case. The nuclear buildup from the 1940s through the 1980s is a cautionary example: reviewing the evidence it is clear we have only just barely escaped large-scale nuclear war so far – and it's still early days! It seems likely that ASI will create many such threats, in parallel, on a much faster timescale, and far more accessible to individuals and small groups. The world of intellect simply provides vastly scalable leverage: if you can create one artificial John von Neumann, then you can produce an army of them, some of whom may be working for people we'd really rather not have access to that kind of capacity. Many people like to talk about making ASI systems safe and aligned; quite apart from the difficulty in doing that (or even sensibly defining that) it seems it must be done for all ASI systems, ever. That seems to require an all-seeing surveillance regime, a fraught path. Perhaps such a surveillance regime can be implemented not merely by government or corporations against the populace, but in a much more omnidirectional way, a form of ambient sousveillance.

"What do you think about the practical alignment work that's going on – RLHF, Constitutional AI, and so on?": The work is certainly technically interesting. It's interesting to contrast to prior systems, like Microsoft's Tay, which could easily be made to do many terrible things. You can make ChatGPT and Claude do terrible things as well, but you have to work harder; the alignment work on those systems has created somewhat stable guardrails. This kind of work is also striking as a case where safety-oriented people have done detailed technical work to improve real systems, with hard feedback loops and clear criteria for success and failure, as opposed to the abstract philosophizing common in much early ASI xrisk work. It's certainly much easier to improve your ideas in the former case, and easier to fool yourself in the latter case.

With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist. There's an extremely thoughtful essay by Paul Christiano, one of the pioneers of both RLHF and AI safety, where he addresses the question of whether he regrets working on RLHF, given the acceleration it has caused. I admire the self-reflection and integrity of the essay, but ultimately I think, like many of the commenters on the essay, that he's only partially facing up to the fact that his work will considerably hasten ASI, including extremely dangerous systems.

Over the past decade I've met many AI safety people who speak as though "AI capabilities" and "AI safety/alignment" work is a dichotomy. They talk in terms of wanting to "move" capabilities researchers into alignment. But most concrete alignment work is capabilities work. It's a false dichotomy, and another example of how a conceptual error can lead a field astray. Fortunately, many safety people now understand this, but I still sometimes see the false dichotomy misleading people, sometimes even causing systematic effects through bad funding decisions.

A second point about alignment is that no matter how good the guardrails, they are intrinsically unstable, and easily removed. I often meet smart AI safety people who have inventive schemes they hope will make ASI systems safe. Maybe they will, maybe they won't. But the more elaborate the scheme, the more unstable the situation. If you have a magic soup recipe which requires 123 different ingredients, all of which must be measured to within 1% by weight, and even a single deviation will make it deadly poisonous, then you really shouldn't cook and eat your "safe" soup. One of the undercooks forgets to put in a leek, and poof, there goes the village.

You see something like this with Stable Diffusion. Initial releases were, I am told, made (somewhat) safe. But, of course, people quickly figured out how to make them unsafe, useful for generating deep fake porn or gore images of non-consenting people. And there's all sorts of work going on finetuning AI systems, including to remove items from memory, to add items into memory, to remove RLHF, to poison data, and so on. Making a safe AI system unsafe seems to be far easier than making a safe AI system. It's a bit as though we're going on a diet of 100% magic soup, provided by a multitude of different groups, and hoping every single soup has been made absolutely perfectly.

Put another way: even if we somehow figure out how to build AI systems that everyone agrees are perfectly aligned, that will inevitably result in non-aligned systems. Part of the problem is that AI systems are mostly made up of ideas. Suppose the first ASI systems are made by OpenAnthropicDeepSafetyBlobCorp, and they are absolutely 100% safe (whatever that means). But those ideas will then be used by other people to make less safe systems, either due to different ideologies about what safe should mean, or through simple incompetence. What I regard as safe is very unlikely to be the same as what Vladimir Putin regards as safe; and yet if I know how to build ASI systems, then Putin must also be able to build such systems. And he's likely to put very different guardrails in. It's not even the same as with nuclear weapons, where capital costs and limited access to fissionable materials makes enforcement of non-proliferation plausible. In AI, rapidly improving ideas and dropping compute costs mean that systems which today require massive resources to build can be built for tuppence tomorrow. You see this with systems like GPT-3, which just a few years ago cost large sums of money and took large teams; now, small open source groups can get better results with modest budgets.

Summing up: a lot of people are trying to figure out how to align systems. Even if successful, such efforts will: (a) accelerate the widespread use and proliferation of such systems, by making them more attractive to customers and governments, and exciting to investors; but then (b) be easily circumvented by people whose idea of "safe" may be very, very different than yours or mine. This will include governments and criminal or terrorist organizations of ill intent.

"Does this mean you oppose such practical work on alignment?" No! Not exactly. Rather, I'm pointing out an alignment dilemma: do you participate in practical, concrete alignment work, on the grounds that it's only by doing such work that humanity has a chance to build safe systems? Or do you avoid participating in such work, viewing it as accelerating an almost certainly bad outcome, for a very small (or non-existent) improvement in chances the outcome will be good? Note that this dilemma isn't the same as the by-now common assertion that alignment work is intrinsically accelerationist. Rather, it's making a different-albeit-related point, which is that if you take ASI xrisk seriously, then alignment work is a damned-if-you-do-damned-if-you-don't proposition.

Unfortunately, I am genuinely torn on the alignment dilemma! It's a very nasty dilemma, since it divides two groups who ought to be natural collaborators, on the basis of some uncertain future event. And apart from that point about collaboration and politics, it has nasty epistemic implications. It is, as I noted earlier, easiest to make real progress when you're working on concrete practical problems, since you're studying real systems and can iteratively test and improve your ideas. It's not impossible to make progress through more abstract work – there are important ideas like the vulnerable world hypothesis, existential risk and so on, which have come out of the abstract work on ASI xrisk. But immediate practical work is a far easier setting in which to make intellectual progress.

"Some thoughtful open source advocates believe the pursuit of AGI and ASI will be safer if carried out in the open. Do you buy that?": Many of those people argue that the tech industry has concentrated power in an unhealthy way over the past 30 years. And that open source mitigates some of that concentration of power. This is sometimes correct, though it can fail: sometimes open source systems are co-opted or captured by large companies, and this may protect or reinforce the power of those companies. Assuming this effect could be avoided here, I certainly agree that open source approaches might well help with many important immediate concerns about the fairness and ethics of AI systems. Furthermore, addressing those concerns is an essential part of any long-term work toward alignment. Unfortunately, though, this argument breaks down completely over the longer term. In the short term, open source may help redistribute power in healthy, more equitable ways. Over the long term the problem is simply too much power available to human beings: making it more widely available won't solve the problem, it will make it worse.

ASI xrisk persuasion paradoxes

"A lot of online discussion of ASI xrisk seems of very low quality. Why do you think that is?" I'll answer that indirectly. Something I love about most parts of science and mathematics is that nature sometimes forces you to change your mind about fundamental things that you really believe. When I was a teenager my mind recoiled at the theories of relativity and quantum mechanics. Both challenged my sense of the world in fundamental ways. Ideas like time dilation and quantum indeterminacy seemed obviously wrong! And yet I eventually realized, after much wrestling, that it was my intuitions about the world that were wrong. These weren't conclusions I wanted to come to: they were forced, by many, many, many facts about the world, facts that I simply cannot explain if I reject ideas like time dilation and quantum indeterminacy. This doesn't mean relativity and quantum mechanics are the last word in physics, of course. But they are at the very least important stepping stones to making sense of a world that wildly violates our basic intuitions.

by Michael Nielsen, Asteria Institute |  Read more:
Image: via
[ed. The concept of alignment as an accelerant is a new one to me and should be disturbing to anyone who's hoping the "good guys" (i.e., anyone prioritizing human agency) will win. In fact, the term human race is beginning to take on a whole new meaning.]

The Real Star of “Saturday Night Live”


Every week at “Saturday Night Live” is just like every other week. The weeks are the same because they’re always fuelled by hard work, filled with triumphs and failures and backstage arguments, and built around a guest host—Jennifer Lopez, Lizzo, Elon Musk—who often has no idea what he or she is doing. Over the past fifty years, the job of Lorne Michaels, the show’s creator, has been to make the stars look good, and to corral the egos and talents on his staff in order to get the program on the air, live. Since the début of “S.N.L.,” in 1975, he has fine-tuned the process, paying attention to shifting cultural winds. What began as an avant-garde variety show has become mainstream. (Amy Poehler has characterized the institution that made her famous as “the show your parents used to have sex to that you now watch from your computer in the middle of the day.”) But the formula is essentially unchanged. Michaels compares the show to a Snickers bar: people expect a certain amount of peanuts, a certain amount of caramel, and a certain amount of chocolate. “There’s a comfort level,” he says. The show has good years and bad, like the New York Yankees, or the Dow, and the audience has come to feel something like ownership over it. Just about all viewers of “S.N.L.” believe that its funniest years were the ones when they were in high school. Michaels likes to say that people in the entertainment business have two jobs: their actual job and figuring out how to fix “S.N.L.” (When J. D. Salinger died, in 2010, letters surfaced in which even he griped about what was wrong with the show.)...

The kickoff to every episode, the weekly Writers’ Meeting, is at 6 P.M. on Monday, on the seventeenth floor of 30 Rockefeller Plaza, in Michaels’s Art Deco office, which overlooks the skating rink. Monday, Michaels says, is “a day of redemption,” a fresh start after spending Sunday brooding over Saturday night’s mistakes. (On his tombstone, he says, will be the word “uneven.”) The guest host, the cast, and the writers squeeze into Lorne’s office—everyone in the business refers to him by his first name, like Madonna, or Fidel—to pitch sketches. People sit in the same places each week: four across a velvet couch, a dozen on chairs placed against the walls. Others stand in the doorway or wedged near Michaels’s private bathroom, and the rest are on the floor, their legs folded like grade schoolers. The exercise is largely ceremonial. It’s rare for an idea floated on Monday to make it onto the air. The goal of the gathering, which Tina Fey compares to a “church ritual,” is to make the host feel like one of the gang. In the nineties, the host Christopher Walken both confounded and delighted the room when he offered, in his flat Queens drawl, “Ape suits are funny. Bears as well.”

by Susan Morrison, New Yorker |  Read more:
Image: Jonathan Becker

A Tale of Two College Towns

I began life in a Michigan college town, and I may spend the rest of it in another one. It surprises me to put the matter this way, because the two places do not seem similar: Alma, a small town far too vulnerable to globalization and deindustrialization, and Ann Arbor, a rich city that seems, at first glance, far too insulated from everything. One of Michigan’s lovable qualities, of course, is its tendency to transform across relatively small distances: the beach towns to the west seem to belong to another order of things than the picturesque or dingy farm towns only so many miles to the interior, the Upper Peninsula constitutes its own multiple worlds, and so on. Still, the two towns feel particularly dissimilar. You could reduce them to battling stock personages in any number of morality plays: red vs. blue America, insular past vs. centerless future, one awful phase of capitalism vs. some later awful phase of it. At least, you could do that until very recently—less than a year ago, as I write this. Now, as we’ll see, they face the same axe.

“College town” is one of those terms that is useful because it’s somewhat empty. Or, more generously, it’s a handle for many sorts of cargo. Historian Blake Gumprecht, setting out to survey The American College Town in his 2008 study by that name, suggests that the name properly applies to any town where “the number of four-year college students equals at least 20 percent of a town’s population.” Gumprecht admits that this cutoff is “arbitrary.” The next scholarly book that I was able to find on the subject uses a somewhat more expansive definition:
Traditionally, Americans have viewed college towns as one of three principal kinds or a combination of the three. The first is a campus closely connected to a city or town and within its boundaries. In the second, the campus “is located next to a city or town but remains somewhat separate from it.” In the view of architect William Rawn, Yale would be an example of the first type, and the University of Virginia, on the edge of Charlottesville, of the second. Finally, perhaps the most common type of college town is one in which the college or university may be near a locality yet essentially unconnected to it. Duke and Rice Universities are offered by Rawn as examples of this model.
To which I say: Rice? Rice in Houston? That Rice? If the biggest city in Texas is a “college town,” then everywhere is. Better to be a little arbitrary.

The Pervading Life

Between the too-arbitrary and the too-expansive, there is the conveniently vague. For Wikipedia, the college town is one where an institution of higher learning “pervades” the life of the place. Good enough. I like this verb, “pervade.” In cities or towns that have enough other things going on—places we wouldn’t, or shouldn’t, call “college towns”—it’s rather the place that pervades the school. (...)

What is it like to be pervaded by a college? Alma College is a prototypical small liberal-arts college, or SLAC: founded in the late nineteenth century, a vestigially Protestant institution still somewhat attached to a mainline denomination (the Presbyterian Church, USA). It has a pretty campus with a decent amount of green space, human-scale class sizes, and a handful of reasonably famous alums. The only SLAC-standard quality it misses is a rumored former Underground Railroad stop, such as you would find at Knox College or Oberlin—both the town and the college came along too late for that.

My impression is that it’s an excellent school, slightly overpriced for the location. The only parts of Alma College that I can really vouch for are the library, where I first read about the films of Akira Kurosawa, and the bookstore, where I bought a tape of the self-titled third Velvet Underground album, far too young in both cases, and therefore at the perfect time. In the summers, its weight room was so easy for us local high schoolers to sneak into that I suspect the ease was intentional on someone’s part—another small act of gown-to-town benevolence. I never paid tuition to the place, but for these reasons, I will die in a minor and unpayable sort of debt to it. At its best, the small college in a small college town functions this way for the nonstudent residents, as a slightly mysterious world within the world that, while pursuing its own ends, expands everyone’s sense of what is possible. The college calendar makes a pleasant polyrhythm against the calendar of the seasons, the schedule of the high-school football team, and the motorik pulse of daily nine-to-five town life.

Someone Else’s Utopia

For this to happen at all, the college has to be its own distinct place, present and familiar but in some ways opaque. The small liberal-arts college, whatever else it is, is always the hopelessly scrambled remains of someone else’s Utopia. It’s a carved-out community where a group of students and teachers try to figure out what it would mean to give some transcendent idea—Plato’s forms, Calvin’s God, Newton’s law-abiding universe, the revivalist blessed community of the early-nineteenth-century abolitionists—its proper place in daily life. (...)

As a kid, I learned about town-gown tension from the movie Breaking Away (1979), in which Indiana University frat boys have nothing better to do than start riots with the town boys and everyone is inexplicably devoted to bicycle racing. As a sports movie, a romantic comedy, and a bildungsroman, and as a testament to the odd, flat beauty of the Midwest, Breaking Away holds up fabulously and always will. Nobody should mistake it for a sociological treatise. I read the college boys in the movie as almost exact stand-ins for the meanest of my middle-school classmates and never noted the contradiction. The kids who most plagued me were not necessarily college bound—although, at that age, I didn’t think that I was, either.

There must have been town-gown tension between the place where I grew up and the liberal-arts college I didn’t go to, but it was off my radar. The one incident I remember sharply is far more ambiguous in its implications than “the townies were uncivilized” or “the students were snobby.” Like many of the most pleasant memories I have of my adolescence, it involves a gas station more or less right in the middle of town, where, I know not how, one of the smart, underachieving stoners of my acquaintance found a job as a cashier. He promptly secured a job for another smart, underachieving stoner, whereupon the place became, for months, until management cracked down, an intellectual and cultural salon for my town’s smart, underachieving stoners and also their goody-goody churchgoing friends who did not smoke. You would drink fountain soda at employee-discount rates while listening to David Bowie and Phish on the tape player: What, if you had no girlfriend, could be more urgent than this?

One night, I was having a heart-to-heart with yet another of these fellows, a talented visual artist who looked like Let It Be–era John Lennon after a good shave, when a group of college-age women we didn’t know—therefore, students—walked past us. They were loud, probably drunk. One of them turned and looked at us, flashed us her rear, then kept on walking, without addressing a word to us.

What did this gesture mean? Contempt was encoded in it, obviously. (Only in male fantasy and pop culture—but I repeat myself—could mooning qualify as flirtation.) Two teenagers with nowhere more interesting to sit on a weekend evening than the stoop outside a gas station: Let us remind them of what they will never have access to. We looked, to them, like people who at best would study accounting at Davenport University, or “business” at Lansing Community College, or who would answer one of those once-ubiquitous TV ads imploring us to enjoy the freedom of the independent trucker. These young women, hemmed in on all sides by the threat of male sexual violence, wanted a safe way to test the boundaries of that hemming-in and correctly judged the two of us as no threat to the four of them: That is a somewhat more sympathetic, Dworkinite reading of the situation, and probably true. But either way, the gesture was baldly classist, an exercise of power. There is no reading of it that is not an insult; you can make it somewhat better only by thinking of it as misdirected revenge on the many guys who had probably insulted them.

On this score, I’m not sure our flasher was successful. My friend’s response to her briefly visible, panty-clad buttocks was one of the most emotional displays I have ever seen, so total as to make one question the idea that even the rawest physical desire is necessarily simple or shallow. For a moment, he was wonder-struck and said nothing, merely looked at me as though we had both just seen a UFO and he needed me to confirm it. Then, long after the women had walked away, he began to apostrophize them, in a voice as full of longing as Hank Williams’s: “Please come back. I’ll pay you. I have a bag of weed in my pocket,” and so on. There are many ways to expand a person’s sense of what’s possible.

In this moment, I knew myself, really for the first time, as a townie. Within a few years, I had already shaken off that identity. So, I think, did my friend. It takes all the sting out of being a townie when it is an option rather than a fate. We, like untold millions of others, were both able to move back and forth between town and gown because Americans effected a fundamental change in our sense of who college is for. What is most striking about the threefold typology of American college students offered in Helen Horowitz’s much-cited Campus Life (1987) is that, today, most college students are—her word—“outsiders”:
The term college life has conventionally been used to denote the undergraduate subculture presumably shared by all students. My study clarifies that college life, in fact, is and has been the world of only a minority of students.
by Phil Christman, Hedgehog Review | Read more:
Image: markk

Jan van Huysum (Dutch, 1682-1749), Still Life with Flowers and Fruit, ca. 1720
via:

Thursday, January 1, 2026

via:

Leonardo’s Wood Charring Method Predates Japanese Practice

Yakisugi is a Japanese architectural technique for charring the surface of wood. It has become quite popular in bioarchitecture because the carbonized layer protects the wood from water, fire, insects, and fungi, thereby prolonging the lifespan of the wood. Yakisugi techniques were first codified in written form in the 17th and 18th centuries. But it seems Italian Renaissance polymath Leonardo da Vinci wrote about the protective benefits of charring wood surfaces more than 100 years earlier, according to a paper published on Zenodo, an open repository for EU-funded research.

Check the notes

As previously reported, Leonardo produced more than 13,000 pages in his notebooks (later gathered into codices), less than a third of which have survived. The notebooks contain all manner of inventions that foreshadow future technologies: flying machines, bicycles, cranes, missiles, machine guns, an “unsinkable” double-hulled ship, dredges for clearing harbors and canals, and floating footwear akin to snowshoes to enable a person to walk on water. Leonardo foresaw the possibility of constructing a telescope in his Codex Atlanticus (1490)—he wrote of “making glasses to see the moon enlarged” a century before the instrument’s invention.

In 2003, Alessandro Vezzosi, director of Italy’s Museo Ideale, came across recipes for mysterious mixtures while flipping through Leonardo’s notes. Vezzosi experimented with them, producing a substance that hardened into a material eerily akin to Bakelite, a synthetic plastic widely used in the early 1900s. So Leonardo may well have invented the first manmade plastic.

The notebooks also contain Leonardo’s detailed notes on his extensive anatomical studies. Most notably, his drawings and descriptions of the human heart captured how heart valves can control blood flow 150 years before William Harvey worked out the basics of the human circulatory system. (In 2005, a British heart surgeon named Francis Wells pioneered a new procedure to repair damaged hearts based on Leonardo’s heart valve sketches and subsequently wrote the book The Heart of Leonardo.)

In 2023, Caltech researchers made another discovery: lurking in the margins of Leonardo’s Codex Arundel were several small sketches of triangles, their geometry seemingly determined by grains of sand poured out from a jar. The little triangles were his attempt to draw a link between gravity and acceleration—well before Isaac Newton came up with his laws of motion. By modern calculations, Leonardo’s model produced a value for the gravitational acceleration, g, accurate to around 97 percent. And Leonardo did all this without a means of accurate timekeeping and without the benefit of calculus. The Caltech team was even able to re-create a modern version of the experiment.
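[ed. The triangle geometry is easy to check numerically. Suppose a jar accelerates horizontally at a constant rate a while releasing grains that then fall freely under gravity g. Whenever a = g, each grain ends up behind the jar by exactly the distance it has fallen, so the grains line up on a 45-degree slope: the right triangle of Leonardo’s “equalization of motions.” Below is a minimal Python sketch of that idea; it’s my own illustration, not code or notation from the Caltech study.]

# Reconstruction sketch (not from the Caltech paper): a jar slides
# horizontally with constant acceleration A, releasing grains that
# then fall freely under gravity G. When A == G, every grain ends up
# offset behind the jar by exactly its fall distance, so the grains
# trace a 45-degree line: the triangle Leonardo sketched.

G = 9.81          # gravitational acceleration (m/s^2)
A = 9.81          # jar's horizontal acceleration; set equal to G
T = 1.0           # observation time (s)

for i in range(1, 6):
    t_rel = T * i / 6.0                 # moment the grain leaves the jar
    tau = T - t_rel                     # how long the grain has fallen
    # The grain keeps the jar's horizontal velocity (A * t_rel) at release,
    # then accelerates downward under gravity alone.
    x = 0.5 * A * t_rel**2 + A * t_rel * tau
    y = -0.5 * G * tau**2
    jar_x = 0.5 * A * T**2              # jar's position at time T
    dx = x - jar_x                      # horizontal offset behind the jar
    print(f"grain {i}: dx = {dx:+.4f}  y = {y:+.4f}")  # dx == y when A == G

Each printed pair comes out equal, which is just the statement that the jar’s extra horizontal travel matches the grain’s free fall; set A above or below G and the grains bend off the 45-degree line.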

“Burnt Japanese cedar”

Annalisa Di Maria, a Leonardo expert with the UNESCO Club of Florence, collaborated with molecular biologist and sculptor Andrea da Montefeltro and art historian Lucica Bianchi on this latest study, which concerns the Codex Madrid II. They had noticed one nearly imperceptible phrase in particular on folio 87r concerning wood preservation: “They will be better preserved if stripped of bark and burned on the surface than in any other way,” Leonardo wrote.

“This is not folklore,” the authors noted. “It is a technical intuition that precedes cultural codification.” Leonardo was interested in the structural properties of materials like wood, stone, and metal, as both an artist and an engineer, and would have noticed from firsthand experience that raw wood with its bark intact retained moisture and decayed more quickly. Furthermore, Leonardo’s observation coincides with what the authors describe as a “crucial moment for European material culture,” when “woodworking was receiving renewed attention in artistic workshops and civil engineering studies.”

Leonardo did not confine his woody observations to just that one line. The Codex includes discussions of how different species of wood conferred different useful properties: oak and chestnut for strength, ash and linden for flexibility, and alder and willow for underwater construction. Leonardo also noted that chestnut and beech were ideal as structural reinforcements, while maple and linden worked well for constructing musical instruments given their good acoustic properties. He even noted a natural method for seasoning logs: leaving them “above the roots” for better sap drainage.

The Codex Madrid II dates to 1503-1505, over a century before the earliest known written codifications of yakisugi, although it is probable that the method was used a bit before then. Per Di Maria et al., there is no evidence of any direct contact between Renaissance European culture and Japanese architectural practices, so this seems to be a case of “convergent invention.”

The benefits of this method of wood preservation have since been well documented by science, although the effectiveness is dependent on a variety of factors, including wood species and environmental conditions. The fire’s heat seals the pores of the wood so it absorbs less water—a natural means of waterproofing. The charred surface serves as natural insulation for fire resistance. And stripping the bark removes nutrients that attract insects and fungi, a natural form of biological protection.

by Jennifer Ouellette, Ars Technica |  Read more:
Images: A. Di Maria et al., 2025; Unimoi/CC BY-SA 4.0; and Lorna Satchell/CC BY 4.0

Wednesday, December 31, 2025

Tom Petty and the Heartbreakers

The lake was Lake Alice, on the campus of the University of Florida, in Gainesville. My parents moved there for work at the university in 1970, just before I was born, and we stayed until I was eight years old, living in a ranch house with a carport, a big backyard, and bright pink azalea bushes springing up in front of my bedroom window.

I’ve been thinking about those years a lot lately thanks to my discovery of Tom Petty’s “Gainesville.” The song was recorded in 1998 but not released until 2018, one year after Petty’s death from a drug overdose at age 66. Petty was born in Gainesville in 1950, twenty years and one day before I was, and lived there until 1974, when he left for Los Angeles with his first band, Mudcrutch. The song’s music video is full of shots of parts of the city he was known to have frequented. There are one-story ranch houses like the one I grew up in; red-brick university buildings; Ben Hill Griffin Stadium (“the Swamp”), where the Gators play; trees decorated with Spanish moss. And there’s Lake Alice and its alligators. As I watched the video, childhood memories surged from the back of my brain to the front, and I felt a sadness for my old town I hadn’t felt in years. Gainesville was a big town, Petty sings. It wasn’t really, but for a while it was the only one we both knew.

The video also has a shot of the mailbox at one of Petty’s childhood homes. It shows the address: 1715 NW 6th Terrace. I grew up on 16th Terrace, a 38-minute walk away (according to Google Maps). In 2019, after the video came out, someone stole the mailbox. (...)

Petty and I overlapped in Gainesville for just four years and obviously led very different lives. (I wasn’t playing in Mudcrutch; I was going to pre-kindergarten.) But it turns out we both transgressed at Lake Alice. Watching the “Gainesville” video sent me down a rabbit hole of research into Petty’s early life, savoring the chance to connect with my own story through his. I found a Gainesville Sun article about how, in 1966, when Petty was 16 and had just earned his driver’s license, he accidentally drove his mother’s old Chevy Impala into the lake. He was supposed to be at a dance, and his mom had to come pick him up in their other family car. (...)

Reading that Gainesville Sun article, I found myself wondering about Tom Petty’s mom. What was she thinking as she drove her son home from Lake Alice that night, unaware of the fame that would find him just a few years later? Did she try to teach him some kind of lesson? Or was she thinking, instead, of her own transgressions, perhaps invisible to her son? Did he—sitting, embarrassed in the passenger seat—still believe she was larger than life? Or was he already past that?

You’re all right anywhere you land, he would write 22 years later. You’re okay anywhere you fall. For both of us, that was Gainesville, for a while. And then Gainesville shrank, becoming something else: somewhere we used to live, somewhere we no longer know, somewhere we were all so young. Long ago and far away, another time, another day.  ~ Tracks on Tracks

[ed. Never heard this one before, or seen the video. Good stuff.]

Suzuribako writing box, last third of the 19th century. Box: wood, lacquer, mother-of-pearl, ivory, stone. Techniques: iro-urushi, takamaki-e, hiramaki-e, togidashi, nashiji, ohirame, inlay, carving. Water-dropper: silver, non-ferrous alloys. 24.3 × 22 × 4.9 cm

Pair of vases, Ando workshop, 1910s. Copper alloy, silver, enamel. Height: 44.8 cm

Force-Feeding AI on an Unwilling Public

Frank Zappa offers a possible mission statement for Microsoft back in 1976, a few months after the company was founded.

The Force-Feeding of AI on an Unwilling Public

Most people won’t pay for AI voluntarily—just 8%, according to a recent survey. So the AI companies need to bundle it with some other essential product.

You never get to decide.

Before proceeding, let me ask a simple question: Has there ever been a major innovation that helped society, but that only 8% of the public would pay for?

That’s never happened before in human history. Everybody wanted electricity in their homes. Everybody wanted a radio. Everybody wanted a phone. Everybody wanted a refrigerator. Everybody wanted a TV set. Everybody wanted the Internet.

They wanted it. They paid for it. They enjoyed it.

AI isn’t like that. People distrust it or even hate it—and more so with each passing month. So the purveyors must bundle it into current offerings, and force usage that way. (...)

Let me address a final question—which is the frequently mentioned argument that the US needs to develop AI as fast as possible to get there before the Chinese.

I’m not sure where “there” is. But I’m happy to let China or other countries arrive at that unhappy destination while I wait behind and watch.

I’m absolutely certain that getting there will be a matter of great regret. “There” might even be the last place you would want to be. So I’d rather it happened as far away from here as possible.

by Ted Gioia, The Honest Broker |  Read more:
Image: Frank Zappa/uncredited
[ed. 100 percent. So sick of having AI jammed down my throat everywhere I turn, especially when the product being pushed is unreliable and dangerous. Also: smart anything...TVs, appliances, phones, home security systems, etc., and don't even get me started on touchscreens vs. buttons. See also: Why buttons are back in fashion (Cybernews); and Why Do Americans Hate A.I.? (NYT).]

via:

The Egg

You were on your way home when you died.

It was a car accident. Nothing particularly remarkable, but fatal nonetheless. You left behind a wife and two children. It was a painless death. The EMTs tried their best to save you, but to no avail. Your body was so utterly shattered you were better off, trust me.

And that’s when you met me.

“What… what happened?” You asked. “Where am I?”

“You died,” I said, matter-of-factly. No point in mincing words.

“There was a… a truck and it was skidding…”

“Yup,” I said.

“I… I died?”

“Yup. But don’t feel bad about it. Everyone dies,” I said.

You looked around. There was nothingness. Just you and me. “What is this place?” You asked. “Is this the afterlife?”

“More or less,” I said.

“Are you god?” You asked.

“Yup,” I replied. “I’m God.”

“My kids… my wife,” you said.

“What about them?”

“Will they be all right?”

“That’s what I like to see,” I said. “You just died and your main concern is for your family. That’s good stuff right there.”

You looked at me with fascination. To you, I didn’t look like God. I just looked like some man. Or possibly a woman. Some vague authority figure, maybe. More of a grammar school teacher than the almighty.

“Don’t worry,” I said. “They’ll be fine. Your kids will remember you as perfect in every way. They didn’t have time to grow contempt for you. Your wife will cry on the outside, but will be secretly relieved. To be fair, your marriage was falling apart. If it’s any consolation, she’ll feel very guilty for feeling relieved.”

“Oh,” you said. “So what happens now? Do I go to heaven or hell or something?”

“Neither,” I said. “You’ll be reincarnated.”

“Ah,” you said. “So the Hindus were right.”

“All religions are right in their own way,” I said. “Walk with me.”

You followed along as we strode through the void. “Where are we going?”

“Nowhere in particular,” I said. “It’s just nice to walk while we talk.”

“So what’s the point, then?” You asked. “When I get reborn, I’ll just be a blank slate, right? A baby. So all my experiences and everything I did in this life won’t matter.”

“Not so!” I said. “You have within you all the knowledge and experiences of all your past lives. You just don’t remember them right now.”

I stopped walking and took you by the shoulders. “Your soul is more magnificent, beautiful, and gigantic than you can possibly imagine. A human mind can only contain a tiny fraction of what you are. It’s like sticking your finger in a glass of water to see if it’s hot or cold. You put a tiny part of yourself into the vessel, and when you bring it back out, you’ve gained all the experiences it had.

“You’ve been in a human for the last 48 years, so you haven’t stretched out yet and felt the rest of your immense consciousness. If we hung out here for long enough, you’d start remembering everything. But there’s no point to doing that between each life.”

“How many times have I been reincarnated, then?”

“Oh lots. Lots and lots. And into lots of different lives,” I said. “This time around, you’ll be a Chinese peasant girl in 540 AD.”

“Wait, what?” You stammered. “You’re sending me back in time?”

“Well, I guess technically. Time, as you know it, only exists in your universe. Things are different where I come from.”

“Where you come from?” You said.

“Oh sure,” I explained. “I come from somewhere. Somewhere else. And there are others like me. I know you’ll want to know what it’s like there, but honestly you wouldn’t understand.”

“Oh,” you said, a little let down. “But wait. If I get reincarnated to other places in time, I could have interacted with myself at some point.”

“Sure. Happens all the time. And with both lives only aware of their own lifespan you don’t even know it’s happening.”

“So what’s the point of it all?”

“Seriously?” I asked. “Seriously? You’re asking me for the meaning of life? Isn’t that a little stereotypical?”

“Well it’s a reasonable question,” you persisted.

I looked you in the eye. “The meaning of life, the reason I made this whole universe, is for you to mature.”

“You mean mankind? You want us to mature?”

“No, just you. I made this whole universe for you. With each new life you grow and mature and become a larger and greater intellect.”

“Just me? What about everyone else?”

“There is no one else,” I said. “In this universe, there’s just you and me.”

You stared blankly at me. “But all the people on earth…”

“All you. Different incarnations of you.”

“Wait. I’m everyone!?”

“Now you’re getting it,” I said, with a congratulatory slap on the back.

“I’m every human being who ever lived?”

“Or who will ever live, yes.”

“I’m Abraham Lincoln?”

“And you’re John Wilkes Booth, too,” I added.

“I’m Hitler?” You said, appalled.

“And you’re the millions he killed.”

“I’m Jesus?”

“And you’re everyone who followed him.”

You fell silent.

“Every time you victimized someone,” I said, “you were victimizing yourself. Every act of kindness you’ve done, you’ve done to yourself. Every happy and sad moment ever experienced by any human was, or will be, experienced by you.”

You thought for a long time.

“Why?” You asked me. “Why do all this?”

“Because someday, you will become like me. Because that’s what you are. You’re one of my kind. You’re my child.”

“Whoa,” you said, incredulous. “You mean I’m a god?”

“No. Not yet. You’re a fetus. You’re still growing. Once you’ve lived every human life throughout all time, you will have grown enough to be born.”

“So the whole universe,” you said, “it’s just…”

“An egg,” I answered. “Now it’s time for you to move on to your next life.”

And I sent you on your way.

by Andy Weir, Galactanet |  Read more:
[ed. Mr. Weir is, of course, the author of the popular books The Martian and Project Hail Mary. See also: The Egg (Wikipedia).]