Duck Soup
...dog paddling through culture, technology, music and more.
Wednesday, January 7, 2026
Tuesday, January 6, 2026
Ozu: The Bond Between Parent and Child
Why was I thinking about flower arrangement while watching “The Only Son,” the first sound film made by the Japanese master Ozu? It must have involved the meticulous and loving care he used with his familiar visual elements. In Japan in 1984 I attended a class at the Sogetsu School, which teaches ikebana, the Japanese art of flower arranging. I learned quickly that sticking a big bunch of flowers in a vase was not ikebana. One selected just a few elements and found a precise way in which they rested together harmoniously.
If you think that ikebana has nothing to do with film direction, think again. The Sogetsu School was then being run by Hiroshi Teshigahara, the director of “Woman in the Dunes,” who left filmmaking to become the third generation of his family to head the school. After he died in 2001, his daughter became the fourth. I gathered that the Teshigaharas believed that when you studied ikebana, you studied your relationship with the material world.
Now turn to Yasujiro Ozu, who is one of the three or four best filmmakers in the world, and certainly the one who brings me the most serenity. I’ve seen 14 of his films, four of them with the shot-by-shot approach. That doesn’t make me an expert, but it makes me familiar with his ways of seeing. In the films I’ve seen, he has a few favorite themes, subjects and compositions, and carefully arranges and rearranges them. Some say “he makes the same film every time.” That’s like saying “all people are born with two eyes.” What matters is how you see with them.
Over an opening frame of “The Only Son” (1936), we read a quotation by the writer Akutagawa: “Life’s tragedy begins with the bond between parent and child.” So do most of Ozu’s films. Again and again, he focuses on parents and their children, and often on their grandchildren. A typical plot will involve sacrifice by a parent or a child for the happiness of the other. It is not uncommon for both parent and child to make sacrifices in a mistaken belief about what the other desires. The issues involved are marriage, children, independence for the young, care for the old, and success in the world.
He tells these stories within a visual frame so distinctive that I believe you can identify any Ozu film after seeing a shot or two, sometimes even from a still. How he came upon his approach I don’t know, but you see it fully mature even in his silent films. For Ozu, all depends on the composition of the shot. He almost never moves his camera. He usually shoots from the eye level of a person seated on a tatami mat. He often begins shots before characters enter, and holds them after they leave. He separates important scenes with “pillow shots” of exterior architectural or landscape details. He uses evocative music, never too loud. I have never seen him use violence. When violence occurs, people commit it within themselves.
Parents and children, then families, are his chosen subjects. He tells each story with his familiar visual strategy, which is pure and simplified, never calling attention to itself. His straight-on shots are often framed on sides and back, and with foreground objects. His exteriors and groups of two or more characters are usually at oblique angles. Is this monotonous? Never, because within his rules he finds infinite variation. A modern chase scene is much more monotonous, because it gives you nothing to think about.
In “The Only Son,” there is a remarkable moment when we have a great deal of time to think. The story is about the son of a widowed mother who works in a provincial silk spinning mill. This is hard and spirit-crushing work, but she does it to put her son through high school and set him on his road in life. After graduating, he follows an admired teacher to seek his future in Tokyo. Four years pass. His mother comes to visit him, unannounced. They are happy to see one another, they love one another, but he has a surprise: He has a wife and an infant child. Why didn’t he tell her? We gather he didn’t want to create an occasion for her to visit Tokyo and find that he is very poor, has a low-paying job, teaching geometry in a night school, and that he lives in a desolate district in view of the smokestacks of the Tokyo garbage incinerators.
The rest of the plot you can discover. It leads to a conversation in which he shares his discouragement, and tells her she may have wasted her sacrifice. She encourages him to persevere. He thinks he’s had a bad roll of the dice. There is no place for him in Tokyo. Simple mill worker that she is, what can she reply to this? She sits up late, sleepless. He awakens, and they talk some more. She weeps. In a reframed shot, his wife weeps. Then Ozu provides a shot of an unremarkable corner of the room. Nothing much there. A baby bottle. A reproduction of a painting. Nothing. He holds this shot. And holds it. And holds it. I feel he could not look at them any longer, and had to look away, thinking about what has happened. Finally there is an exterior pillow shot of the morning.
If Ozu returns to characteristic visuals, he also returns to familiar actors. In “The Only Son,” the small but important role of the hero’s teacher is played by Chishu Ryu — the teacher who, after moving to Tokyo, fails to realize his own dreams and, as the son bitterly tells his mother, is “reduced to frying pork cutlets.” This was Ryu’s seventh film for Ozu. In all he was to appear in 52 of Ozu’s 54 films, between 1929 and 1962. He is the old father in “Tokyo Story” (1953).
Ryu is an actor whom we recognize from body language. He exudes restraint, courtesy. He smokes meditatively. He said Ozu directed him as little as possible: “He had made up the complete picture in his head before he went on the set, so that all we actors had to do was to follow his directions, from the way we lifted and dropped our arms to the way we blinked our eyes.”
by Roger Ebert, RogerEbert.com | Read more:
Image: The Only Son
[ed. I don't watch much tv, which is sometimes unfortunate. It'd be nice to have that little distraction whenever boredom (or ennui) sets in and whatever book I'm reading isn't finding traction. Anyway, tonight I decided to look for a foreign film that would be time well spent. Of course, in deciding what to choose I fell down this rabbit hole. I've seen Tokyo Story, but little else of Ozu's work. I'll start with The Only Son, then see what An Autumn Afternoon has to offer:]
***
"The more you learn about Yasujiro Ozu, the director of “An Autumn Afternoon” (1962), the more you realize how very deep the waters reach beneath his serene surfaces. Ozu is one of the greatest artists to ever make a film. This was his last one. He never married. He lived for 60 years with his mother, and when she died, he was dead a few months later. Over and over again, in almost all of his films, he turned to the same central themes, of loneliness, of family, of dependence, of marriage, of parents and children. He holds these themes to the light and their prisms cast variations on each screenplay. His films are all made within the emotional space of his life, in which he finds not melodramatic joy or tragedy, but mono no aware, which is how the Japanese refer to the bittersweet transience of all things.

From time to time I return to Ozu feeling a need to be calmed and restored. He is a man with a profound understanding of human nature, about which he makes no dramatic statements. We are here, we hope to be happy, we want to do well, we are locked within our aloneness, life goes on. He embodies this vision in a cinematic style so distinctive that you can tell an Ozu film almost from a single shot."
Blame and Claim
A public adjuster on insuring a burning world
Just off a hiking trail, not far from where Sunset Boulevard meets the sea, a fuel and an oxidant combine and combust. The underbrush is dry and dusty, and within an hour flames engulf your home. Smoke fills your kitchen and your garage. Flecks of wallpaper from your children’s bedroom float down onto a nearby parking lot. Your wedding photos melt, as does your car battery. The glass windows of your dining room shatter and temperatures reach a thousand degrees. The root cause might have been a mountaineer who burned his toilet paper at dawn, a spark at a faulty transmission line in the foothills, a discarded cigarette fanned by the Santa Anas, or, simply, arson.
But it is too early to assign blame. Your attention is elsewhere. You are not home and you cannot get there, as the fire department has evacuated your neighborhood, the Pacific Palisades in Los Angeles. Your mind races, and you reach for your phone to ensure your family is safe even if you have already heard from them. Maybe you call the police, even though you hear the sirens throughout your neighborhood and see the caravans of emergency vehicles filling the streets.
When you do manage to get home, you stand on the sidewalk watching your rafters collapse and, covering your mouth with a shirtsleeve, you make your next call, to your insurance company to file a claim. You don’t know what this process entails. You have never filed a homeowner’s or business insurance claim, you have never read your policy, and you do not know if your policy covers what has happened, since you do not know what has happened or what caused it.
You are unaware that the insurance industry has been, in recent years, denying more claims and more coverage, exiting major markets, and raising premiums. As governments and corporations continue to enable fossil fuels, throttle renewable-energy sources, and deny long-established climate science, the related catastrophes (fires, floods, droughts, storms) and social effects (mass migration, war over natural resources, economic and demographic stratification) are increasingly commonplace and metastasizing. This new world order transfers the risk and harm of the disaster business by way of the insurance industry onto you, the consumer. On an episode of the climate science podcast A Matter of Degrees, Dave Jones, a former California insurance commissioner who is now the director of the Climate Risk Initiative at UC Berkeley, said, “For many Americans, the single biggest financial asset you have is your home. If you don’t have insurance or you can’t afford enough insurance and that home is destroyed, then you’re left with basically nothing. Insurance is the climate crisis canary in the coal mine, and the canary is just about dead.”

Days later, as embers still burn and you begin to accept that not one object will be recovered or salvaged from your home, your insurance company sends one of its employees or contractors, called an adjuster, to assess the damage, value what is or was, and (hopefully) make an offer of payment. While insurance companies defend their adjusters as necessary agents who help them evaluate claims, critics label them as conflicted loyalists who will undervalue losses, delay settlements, and pressure policy holders to settle quickly.
But as you stand there, a man in business-casual attire emerges from the smoke and approaches you apprehensively. He introduces himself as someone who can help. His title, too, is adjuster, but if you are able to focus enough on his pitch, he tells you he is not an employee of your insurance company or of a roofing company or a general contractor. If you would like help navigating the ashes of your new life, he will help you rebuild: independently value your losses, handle communications and negotiations with your insurer, draft paperwork, and take care of the settlement of the claims. He is part private detective, part lawyer, part psychologist. All of this sounds reasonable, so you take his card and tell him you’ll be in touch.
That evening, as you make plans for your family to sleep at a nearby friend’s house or in a hotel, some quick internet research teaches you this “public” adjuster is indeed part of a legitimate industry (although sometimes public adjusters, you discover, are known as “private” adjusters). Staff adjusters, you learn, are the ones that work for insurance companies, and independent adjusters are contracted for certain projects by insurance companies.
This ecosystem of adjusters is baffling, but you decide to retain the public adjuster. As you sign his contract, he informs you that he will take a significant cut of any claim settlement he negotiates. Your calculation is that outsourcing the administration of the recovery of your life is worth the cost—so long as the insurance company agrees to write a check.
I recently spoke with the president of a large public adjuster firm in California that represented victims of the Palisades and Eaton fires that broke out in early 2025 and destroyed about sixteen thousand buildings on nearly forty thousand acres, causing tens of billions of dollars in damages. This conversation has been edited for length and clarity.
***

Tyler Maroney: How many claims does the average public adjuster typically handle in a year?
Adjuster: It depends on the size of the claim, but some will do a hundred claims a year, mostly smaller—$10,000 claims or $50,000 claims. But if you’re talking about somebody who’s handling complicated claims, I’d say an average load for an adjuster is somewhere between twenty and fifty a year.
TM: And you handle more than just massive disasters, right?
Adjuster: We respond to disasters every day, 365 days a year. Some of them are disasters that affect a hundred people or a thousand people. Those are big events. But there are buildings that burn down every single day. It doesn’t matter whether you’re in Minnesota or if you’re in New York, there’s water damage, there’s flooding, there are fires, there are robberies. It doesn’t require a hurricane or a wildfire for there to be a need for our service.
TM: I’ve read that clients don’t really know that public adjusters exist until they are desperate. Is part of your job getting the word out that this is an industry?
Adjuster: We’re luckier now in today’s world of technology because people can search for things online. I’ve been doing this thirty-three or thirty-four years, when there was no internet to search. If you had an insurance claim, you only had the connections you had, but today people can type into Google, “Can I get any help with my insurance claim?”
TM: I presume you go out into the field to attract clients?
Adjuster: Yes, part of the job is to be out there when an event happens or shortly after an event is over, to let people know that we exist.
TM: When a large fire like in the Pacific Palisades in Los Angeles breaks out, you go as quickly as possible to the scene?
Adjuster: Yes. When you show up at somebody’s house and the family is in the front yard crying and trying to save things that aren’t savable, it’s sad. Sometimes it’s total loss, and you find people sifting through the rubble, lining up bits of pottery.
TM: And when you approach these suffering people, how do they respond?
Adjuster: You get a wide range of emotional responses, from “Get the fuck off my property, you ambulance-chasing vulture” to “Oh my God, we’re so lost. We don’t know what to do. Thank you so much for being here. Can you help us?”
TM: That must be a difficult emotional minefield to wade into.
Adjuster: Yes, and when you’re walking up to meet these people, most of the time they’ve never heard of a public adjuster. They have no idea who we are or what we do or that it’s a licensed profession. It can look like we’re trying to prey on people when they’re at this vulnerable point. The reality is that’s when they need help the most, because often they do whatever the insurance company tells them to do. That puts them in the worst spot they could be in.
TM: Worst spot?
Adjuster: So, say someone calls us six months after a fire. They have been arguing with their insurance company about the value of a claim and then, out of nowhere, they get a $65,000 bill from the restoration company [a third-party, for-profit vendor] and they want us to deal with that too. We have to say: You already agreed in writing and signed for them to do that work. That money’s gone, you spent it. We can’t take that back because it was an agreement you made before we were involved.
Most people just know they have an insurance agent that sold them some insurance, and they do what they’re told. Often that results in mistakes.
TM: What kinds of mistakes?
Adjuster: I’ll give you the easiest one. There is a fire in your house, but it burns only part of your house down. There’s still stuff in it. It’s not like a wildfire where it burns all the way to the ground. So the insurance company comes out, and they bring a restoration contractor. He’s going to help you get your stuff out of the house, store it, and get it cleaned up. Seems like an incredibly important service. He says it’s going to get worse if we don’t get your stuff out of the environment. Just sign here.
TM: Okay.
Adjuster: If the owner asks, “Who pays for this?,” the automatic response is “Oh, don’t worry about it, the insurance company pays for it, it’s part of your policy.” It makes perfect sense at the time. What they don’t share is that it erodes your contents limit [which means it reduces how much money the insurance company is likely to pay out]. You have given them carte blanche, and they can bill the insurance company directly. They charge not only for clean-up but for storage. And there’s no language that protects the homeowner if they’re not happy with the service.
TM: The homeowner is vulnerable at this point.
Adjuster: What they don’t understand is that six months from now, their stuff has all been cleaned, and the restoration company charged maybe a thousand dollars to clean something that was worth four hundred dollars and they don’t even want anymore. They could have just said, “Oh, a thousand dollars to clean that item? I don’t care about that anymore. Give me the thousand dollars.”
TM: And what can you do as an adjuster to prevent this?
Adjuster: You can say to the insurance company that our client wants to select items that have intrinsic value or that we believe are valuable enough to save and restore. We can advise that often the cost to clean something is more than its value or that it’s too damaged to restore properly. Otherwise, a homeowner will find out that the restoration company has charged $65,000 when they have $300,000 of coverage for their contents, and that $65,000 is coming right off the top, and the cleaning costs reduce the amount of insurance they have for the things that they’ve completely lost.
TM: Back to the field, is the pitch as simple as “Hi, this might be awkward, but my name is x and I’m a public adjuster, which means I help people like you”?
Adjuster: Yeah. Often it’s “Your insurance company’s going to come out here, they’re going to assign an adjuster. That adjuster works for the insurance company. They don’t work for you. You have the opportunity and you have the right to hire your own public adjusting team that counterbalances the insurance company’s team so that you have an advocate who’s a true advocate for you to level the playing field.” That’s the pitch.
TM: Do you have a sense for what percentage of people who’ve been victimized by a catastrophe are able to engage public adjusters? I assume that most people, when they’ve gone through something like that, call their insurance company, right?
Adjuster: That’s traditionally what happens, yes. They either call their agent, if their insurance agent is somebody who they’re close with, or they call the insurance company and give notice that they have a claim. And some agents will refer clients to us in a secretive way. Some brokers [who work for policy holders, not insurance companies] think that if the carriers see that they’re recommending a public adjuster, that will be bad for their reputation with the insurance carriers. Some brokers don’t care.
TM: So how does that work?
Adjuster: Some brokers say, “Hey, don’t tell anybody I told you this, but you should talk to x public adjuster.” Or sometimes it’s more open, like, “Hey, [this public adjuster company] helped a lot of my clients, so you might want to talk to them.”
TM: So how do the brokers respond to you?
Adjuster: There are insurance brokers who haven’t worked with us or don’t know us. Or they feel threatened because they were hired to do this job, and by bringing or inviting you in as a public adjuster, they’re admitting that they don’t know what they’re doing. If you’re a salesperson and you’re selling insurance policies and you’re a credible person, you want to believe that what you’re selling is the best product available. You want to hold your head up high and say, “I represent x insurance company and they’re great insurance.” So for some insurance brokers, saying “Maybe you need help getting money” is saying something negative about the insurance company. For some insurance agents, that doesn’t feel right.
TM: Do you feel you are adversarial to insurance companies?
Adjuster: We are advocating for the policy holder, not the insurance company. The insurance companies like to say, “Why do you need a public adjuster? We’re going to pay you all the money you’re owed anyway.” But if that was true, then why would they care? Why would they even have that discussion if they’re going to pay the same benefits regardless of whether somebody has somebody helping them put it together? The reality is that they’re going to pay as little as they can. So are we adversarial, or are we just taking the workload off the policy holder? It’s an arduous process. Imagine a family where everything is gone, disappeared into the smoke, and you have the burden of sharing with the insurance company everything that you lost. Where would you start?
***
Tyler Maroney: How many claims does the average public adjuster typically handle in a year?Adjuster: It depends on the size of the claim, but some will do a hundred claims a year, mostly smaller—$10,000 claims or $50,000 claims. But if you’re talking about somebody who’s handling complicated claims, I’d say an average load for an adjuster is somewhere between twenty and fifty a year.
TM: And you handle more than just massive disasters, right?
Adjuster: We respond to disasters every day, 365 days a year. Some of them are disasters that affect a hundred people or a thousand people. Those are big events. But there are buildings that burn down every single day. It doesn’t matter whether you’re in Minnesota or if you’re in New York, there’s water damage, there’s flooding, there are fires, there are robberies. It doesn’t require a hurricane or a wildfire for there to be a need for our service.
TM: I’ve read that clients don’t really know that public adjusters exist until they are desperate. Is part of your job getting the word out that this is an industry?
Adjuster: We’re luckier now in today’s world of technology because people can search for things online. I’ve been doing this for thirty-three or thirty-four years, since before there was an internet to search. If you had an insurance claim, you only had the connections you had, but today people can type into Google, “Can I get any help with my insurance claim?”
TM: I presume you go out into the field to attract clients?
Adjuster: Yes, part of the job is to be out there when an event happens or shortly after an event is over, to let people know that we exist.
TM: When a large fire like in the Pacific Palisades in Los Angeles breaks out, you go as quickly as possible to the scene?
Adjuster: Yes. When you show up at somebody’s house and the family is in the front yard crying and trying to save things that aren’t savable, it’s sad. Sometimes it’s a total loss, and you find people sifting through the rubble, lining up bits of pottery.
TM: And when you approach these suffering people, how do they respond?
Adjuster: You get a wide range of emotional responses, from “Get the fuck off my property, you ambulance-chasing vulture” to “Oh my God, we’re so lost. We don’t know what to do. Thank you so much for being here. Can you help us?”
TM: That must be a difficult emotional minefield to wade into.
Adjuster: Yes, and when you’re walking up to meet these people, most of the time they’ve never heard of a public adjuster. They have no idea who we are or what we do or that it’s a licensed profession. It can look like we’re trying to prey on people when they’re at this vulnerable point. The reality is that’s when they need help the most, because often they do whatever the insurance company tells them to do. That puts them in the worst spot they could be in.
TM: Worst spot?
Adjuster: So, say someone calls us six months after a fire. They have been arguing with their insurance company about the value of a claim and then, out of nowhere, they get a $65,000 bill from the restoration company [a third-party, for-profit vendor] and they want us to deal with that too. We have to say: You already agreed in writing and signed for them to do that work. That money’s gone, you spent it. We can’t take that back because it was an agreement you made before we were involved.
Most people just know they have an insurance agent that sold them some insurance, and they do what they’re told. Often that results in mistakes.
TM: What kinds of mistakes?
Adjuster: I’ll give you the easiest one. There is a fire in your house, but it burns only part of your house down. There’s still stuff in it. It’s not like a wildfire where it burns all the way to the ground. So the insurance company comes out, and they bring a restoration contractor. He’s going to help you get your stuff out of the house, store it, and get it cleaned up. Seems like an incredibly important service. He says it’s going to get worse if we don’t get your stuff out of the environment. Just sign here.
TM: Okay.
Adjuster: If the owner asks, “Who pays for this?,” the automatic response is “Oh, don’t worry about it, the insurance company pays for it, it’s part of your policy.” It makes perfect sense at the time. What they don’t share is that it erodes your contents limit [which means it reduces how much money the insurance company is likely to pay out]. You have given them carte blanche, and they can bill the insurance company directly. They charge not only for clean-up but for storage. And there’s no language that protects the homeowner if they’re not happy with the service.
TM: The homeowner is vulnerable at this point.
Adjuster: What they don’t understand is that six months from now, their stuff has all been cleaned, and the restoration company charged maybe a thousand dollars to clean something that was worth four hundred dollars and they don’t even want anymore. They could have just said, “Oh, a thousand dollars to clean that item? I don’t care about that anymore. Give me the thousand dollars.”
TM: And what can you do as an adjuster to prevent this?
Adjuster: You can say to the insurance company that our client wants to select items that have intrinsic value or that we believe are valuable enough to save and restore. We can advise that often the cost to clean something is more than its value, or that it’s too damaged to restore properly. Otherwise, a homeowner will find out that the restoration company has charged $65,000 when they have $300,000 of coverage for their contents, and that $65,000 is coming right off the top, and the cleaning costs reduce the amount of insurance they have left for the things they’ve completely lost.
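The contents-limit erosion the adjuster describes reduces to simple arithmetic, using the figures from his own examples ($300,000 of contents coverage, a $65,000 restoration bill, and a $400 item that costs $1,000 to clean). A minimal sketch:

```python
# Restoration charges billed directly to the insurer come out of the same
# contents limit that would otherwise pay for items lost outright.
def remaining_contents_coverage(contents_limit: float, restoration_charges: float) -> float:
    return contents_limit - restoration_charges

# The adjuster's rule of thumb: if cleaning an item costs more than the item
# is worth, take the cash value instead of paying to restore it.
def take_cash_instead(cleaning_cost: float, item_value: float) -> bool:
    return cleaning_cost > item_value

print(remaining_contents_coverage(300_000, 65_000))  # 235000
print(take_cash_instead(1_000, 400))                 # True
```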
TM: Back to the field, is the pitch as simple as “Hi, this might be awkward, but my name is x and I’m a public adjuster, which means I help people like you”?
Adjuster: Yeah. Often it’s “Your insurance company’s going to come out here, they’re going to assign an adjuster. That adjuster works for the insurance company. They don’t work for you. You have the opportunity and you have the right to hire your own public adjusting team that counterbalances the insurance company’s team so that you have an advocate who’s a true advocate for you to level the playing field.” That’s the pitch.
TM: Do you have a sense for what percentage of people who’ve been victimized by a catastrophe are able to engage public adjusters? I assume that most people, when they’ve gone through something like that, call their insurance company, right?
Adjuster: That’s traditionally what happens, yes. They either call their agent, if their insurance agent is somebody who they’re close with, or they call the insurance company and give notice that they have a claim. And some agents will refer clients to us in a secretive way. Some brokers [who work for policy holders, not insurance companies] think that if the carriers see that they’re recommending a public adjuster, that will be bad for their reputation with the insurance carriers. Some brokers don’t care.
TM: So how does that work?
Adjuster: Some brokers say, “Hey, don’t tell anybody I told you this, but you should talk to x public adjuster.” Or sometimes it’s more open, like, “Hey, [this public adjuster company] helped a lot of my clients, so you might want to talk to them.”
TM: So how do the brokers respond to you?
Adjuster: There are insurance brokers who haven’t worked with us or don’t know us. Or they feel threatened because they were hired to do this job, and by bringing or inviting you in as a public adjuster, they’re admitting that they don’t know what they’re doing. If you’re a salesperson and you’re selling insurance policies and you’re a credible person, you want to believe that what you’re selling is the best product available. You want to hold your head up high and say, “I represent x insurance company and they’re great insurance.” So for some insurance brokers, saying “Maybe you need help getting money” is saying something negative about the insurance company. For some insurance agents, that doesn’t feel right.
TM: Do you feel you are adversarial to insurance companies?
Adjuster: We are advocating for the policy holder, not the insurance company. The insurance companies like to say, “Why do you need a public adjuster? We’re going to pay you all the money you’re owed anyway.” But if that was true, then why would they care? Why would they even have that discussion if they’re going to pay the same benefits regardless of whether somebody has somebody helping them put it together? The reality is that they’re going to pay as little as they can. So are we adversarial, or are we just taking the workload off the policy holder? It’s an arduous process. Imagine a family where everything is gone, disappeared into the smoke, and you have the burden of sharing with the insurance company everything that you lost. Where would you start?
by Tyler Maroney, The Baffler | Read more:
Image: Andrew Norman Wilson.
[ed. Public service post. Reminds me that I need to do an annual homeowner's insurance review. Been wondering how premiums and coverage have changed in the wake of increasingly common climate-related disasters. Unfortunately, no detail is provided on what these services are likely to cost (other than a "significant cut" of any negotiated claim settlement).]
1912 Eighth Grade Exam
8th grade graduation exam from 1912 (ages 13-14). Rural Kentucky.
via: Bullitt County History Museum
[ed. I'd just settle for stronger civics, history, reading, and home/personal economics lessons. And, most importantly, instilling a love of learning over rote memorization. See also: No, You’re Probably Not Smarter Than a 1912-Era 8th Grader (Smithsonian).]
Take the Messy Job
I am often approached by students and other young people for advice about their careers. In the past, my answers were often based on a piece of advice I myself got from Bengt Holmstrom: “when in doubt, choose the job where you will learn more.” In the last few years, a new variable has entered the calculation: the likelihood that artificial intelligence will automate all or large pieces of the job you do. Given that, what should a student choose today? The answers below are motivated by a book on artificial intelligence and the organization of work on which I am currently working with Jin Li and Yanhui Wu.
One way of thinking about this is that all knowledge work varies along one important spectrum: messiness. On one end, there is one defined task to execute, say helping clients file their taxes. You get the expenses and payslips by email, apply some rules to put them on a form, and obtain a response. Over time, you become better at this task and earn a higher salary. On the other end of the spectrum, there is a wide bundle of complex tasks. Running a factory, or a family, involves many different tasks that are very hard to specify in advance.
The risk of the single-task job is that artificial intelligence excels at single tasks. Humans are still often in the loop, since the rate of errors in many fields is still too high to allow for unsupervised artificial intelligence. But the rate of errors is rapidly decreasing. (...)
The result is that workers with simple tasks will become continuously more productive (and richer), until their work is worth nothing. A junior customer support agent gets more and more effective while the AI provides her the accumulated knowledge of senior customer support agents, as in the recent Brynjolfsson, Li, and Raymond (2025) paper, until the AI is good enough that she can be replaced. (...)
The end of work? Not so fast
The other option is to go for a messy job, where the output is the product of many different tasks, many of which affect each other.
The head of engineering at a manufacturing plant I know well must decide who to hire and which machines to buy, work out how to lay them out in the plant, negotiate the proposed solutions with the workers and the higher-ups, and mobilise the resources to implement them. That job is extraordinarily hard to automate. Artificial intelligence commoditizes codified knowledge: textbooks, proofs, syntax. But it does not interface in a meaningful way with local knowledge, where a much larger share of the value of messy jobs is created. Even if artificial intelligence excelled at most of the single tasks that make up her job, it could not walk the factory floor to cajole a manager into redesigning a production process.
A management consultant whose job consists entirely of producing slide decks is exposed. A consultant who spends half of her time reading the room, building client relationships, and navigating organizational politics has a bundle AI cannot replicate.
In 2016, star AI researcher Geoffrey Hinton leaped from the automation of reading scans to the automation of the full radiologist job, and advised that we stop training radiologists. But even fields that look simple from the outside, like radiology, can be quite messy. A small study from 2013 (cited in this Works in Progress article) found that radiologists spend only 36 percent of their time looking at scans. The rest is spent talking to patients, training others, and talking with the nurses and doctors treating the patient.
A radiologist’s job is a bundle. You can automate reading scans and still need a radiologist. The question is not whether AI can do one part of your job. It is whether the remaining parts cohere in a manner that justifies a role.
To me, a key characteristic of these “messy jobs” is execution. Execution is hard because it faces the friction of the real world. Consider a general contractor on a building site. Artificial intelligence can sketch a blueprint and calculate load-bearing requirements in seconds. That is codified knowledge. But the contractor must handle the delivery of lumber that arrived late, the ground that is too muddy to pour concrete, or the bickering between the electrician and the plumber.
Or consider the manager in charge of post-merger integration at a corporation. Again, the algorithm will map financial synergies and redraw org charts, but it will not have the “tribal” knowledge required to merge two distinct cultures and have the tact to prevent an exodus.
Corporate law is increasingly vulnerable to automation because contracts are essentially code, but I would expect trial attorneys to persist.
AI implementation itself could be the ultimate messy job. Improvements will require drastically changing existing workflows, a process that will be resisted by internal politics, fear, and legacy business models. For instance, law firms have always relied on “billable hours” to charge clients, a concept that will be useless in an AI world. But this organizational inertia is a gift: the transformation will be messier and more delayed than the charts suggest and it will require a lot of consultants, managers and workers, well versed in what AI can do, but with sufficient domain knowledge to know how to use it and how to redefine the process.
In extreme cases, the feared AI transformation may not take place at all. Jobs defined by empathy, care, and real-time judgment will become the economy’s ‘luxury goods.’ In these fields, artificial intelligence is not your competitor; it generates the wealth (and lowers the costs of goods and services) that will fund your higher wages.
[ed. See also: What is there to fear in a post AGI world? (pdf/NBER)]
The Who
The Real Donroe Doctrine
by Paul Krugman, Substack | Read more:
Image: The Who/YouTube
[ed. Almost a parody these days. Turns out fooling half the country over and over again is actually pretty easy. A lot of people don't mind being repeatedly lied to if it makes them feel good (and it's their tribe doing the lying).]
“Trump remains confident that his fans support the invasion, saying, ‘MAGA loves it. MAGA loves what I’m doing. MAGA loves everything I do. MAGA is me. MAGA loves everything I do, and I love everything I do, too. In conclusion, I am he, as you are he, as you are me, and we are all together, I am the walrus, go do the coup!’” — Stephen Colbert.]
Sunday, January 4, 2026
Dan Wang: 2025 Letter
[ed. Dan Wang has a new annual newsletter out (pleasant surprise!) mostly about China and Silicon Valley - or more generally, US vs. China competition. He skipped producing one last year when his book Breakneck was published. Previous letters can be found here and here. Enjoy.]
If the Bay Area once had an impish side, it has gone the way of most hardware tinkerers and hippie communes. Which of the tech titans are funny? In public, they tend to speak in one of two registers. The first is the blandly corporate tone we’ve come to expect when we see them dragged before Congressional hearings or fireside chats. The second leans philosophical, as they compose their features into the sort of reverie appropriate for issuing apocalyptic prophecies on AI. Sam Altman once combined both registers at a tech conference when he said: “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” Actually that was pretty funny.
It wouldn’t be news to the Central Committee that only the paranoid survive. The Communist Party speaks in the same two registers as the tech titans. The po-faced men on the Politburo tend to make extraordinarily bland speeches, laced occasionally with a murderous warning against those who cross the party’s interests. How funny is the big guy? We can take a look at an official list of Xi Jinping’s jokes, helpfully published by party propagandists. These wisecracks include the following: “On an inspection tour to Jiangsu, Xi quipped that the true measure of water cleanliness is whether the mayor would dare to swim in the water.” Or try this reminiscence that Xi offered on bad air quality: “The PM2.5 back then was even worse than it is now; I used to joke that it was PM250.” Yes, such a humorous fellow is the general secretary.
It’s nearly as dangerous to tweet a joke about a top VC as it is to make a joke about a member of the Central Committee. People who are dead serious tend not to embody sparkling irony. Yet the Communist Party and Silicon Valley are two of the most powerful forces shaping our world today. Their initiatives increase their own centrality while weakening the agency of whole nation states. Perhaps they are successful because they are remorseless.
Earlier this year, I moved from Yale to Stanford. The sun and the dynamism of the west coast have drawn me back. I found a Bay Area that has grown a lot weirder since I lived there a decade ago. In 2015, people were mostly working on consumer apps, cryptocurrencies, and some business software. Though it felt exciting, it looks in retrospect like a more innocent, even a more sedate, time. Today, AI dictates everything in San Francisco while the tech scene plays a much larger political role in the United States. I can’t get over how strange it all feels. In the midst of California’s natural beauty, nerds are trying to build God in a Box; meanwhile, Peter Thiel hovers in the background presenting lectures on the nature of the Antichrist. This eldritch setting feels more appropriate for a Gothic horror novel than for real life.
Before anyone gets the wrong idea, I want to say that I am rooting for San Francisco. It’s tempting to gawk at the craziness of the culture, as much of the east coast media tends to do. Yes, one can quickly find people who speak with the conviction of a cultist; no, I will not inject the peptides proffered by strangers. But there’s more to the Bay Area than unusual health practices. It is, after all, a place that creates not only new products, but also new modes of living. I’m struck that some east coast folks insist to me that driverless cars can’t work and won’t be accepted, even as these vehicles populate the streets of the Bay Area. Coverage of Silicon Valley increasingly reminds me of coverage of China, where a legacy media reporter might parachute in, write a dispatch on something that looks deranged, and leave without moving past caricature.
I enjoy San Francisco more than when I was younger because I now better appreciate what makes it work. I believe that Silicon Valley possesses plenty of virtues. To start, it is the most meritocratic part of America. Tech is so open towards immigrants that it has driven populists into a froth of rage. It remains male-heavy and practices plenty of gatekeeping. But San Francisco better embodies an ethos of openness relative to the rest of the country. Industries on the east coast — finance, media, universities, policy — tend to more carefully weigh name and pedigree. Young scientists aren’t told they ought to keep their innovations incremental and their attitude to hierarchy duly deferential, as they might hear in Boston. A smart young person could achieve much more over a few years in SF than in DC. People aren’t reminiscing over some lost golden age that took place decades ago, as New Yorkers in media might do.
San Francisco is forward looking and eager to try new ideas. Without this curiosity, it wouldn’t be able to create whole new product categories: iPhones, social media, large language models, and all sorts of digital services. For the most part, it’s positive that tech values speed: quick product cycles, quick replies to email. Past success creates an expectation that the next technological wave will be even more exciting. It’s good to keep building the future, though it’s sometimes absurd to hear someone pivot, mid-breath, from declaring that salvation lies in the blockchain to announcing that AI will solve everything.
People like to make fun of San Francisco for not drinking; well, that works pretty well for me. I enjoy board games and appreciate that it’s easier to find other players. I like SF house parties, where people take off their shoes at the entrance and enter a space in which speech can be heard over music, which feels so much more civilized than descending into a loud bar in New York. It’s easy to fall into a nerdy conversation almost immediately with someone young and earnest. The Bay Area has converged on Asian-American modes of socializing (though it lacks the emphasis on food). I find it charming that a San Francisco home that is poorly furnished and strewn with pizza boxes could be owned by a billionaire who can’t get around to setting up a bed for his mattress.
There’s still no better place in the world for a smart, young person to go than Silicon Valley. It adores the youth, especially those with technical skill and the ability to grind. Venture capitalists are chasing younger and younger founders: the median age of the latest Y Combinator cohort is only 24, down from 30 just three years ago. My favorite part of Silicon Valley is the cultivation of community. Tech founders are a close-knit group, always offering help to each other, but they circulate actively amidst the broader community too. (The finance industry in New York, by contrast, practices far greater secrecy.) Tech has organizations I think of as internal civic institutions that try to build community. They bring people together in San Francisco or at retreats north of the city, where young people learn from older folks.
Silicon Valley also embodies a cultural tension. It is playing with new ideas while being open to newcomers; at the same time, it is a self-absorbed place that doesn’t think so much about the broader world. Young people who move to San Francisco already tend to be very online. They know what they’re signing up for. If they don’t fit in after a few years, they probably won’t stick around. San Francisco is a city that absorbs a lot of people with similar ethics, which reinforces its existing strengths and weaknesses.
Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of its members towards intellectual worlds very distant from moral intuitions that most people hold; they’ve also sent a few into jail. The well-rounded type might struggle to stand out relative to people who are exceptionally talented in a technical domain. Hedge fund managers have views about the price of oil, interest rates, a reliably obscure historical episode, and a thousand other things. Tech titans more obsessively pursue a few ideas — as Elon Musk has on electric vehicles and space launches — rather than developing a robust model of the world.
So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley.
There’s a general lack of cultural awareness in the Bay Area. It’s easy to hear at these parties that a person’s favorite nonfiction book is Seeing Like a State while their aspirationally favorite novel is Middlemarch. Silicon Valley often speaks in strange tongues, starting podcasts and shows that are popular within the tech world but do not travel far beyond the Bay Area. Though San Francisco has produced so much wealth, it is a relative underperformer in the national culture. Indie movie theaters keep closing down while all sorts of retail and art institutions suffer from the crumminess of downtown. The symphony and the opera keep cutting back on performances — after Esa-Pekka Salonen quit the directorship of the symphony, it hasn’t been able to name a successor. Wealthy folks in New York and LA have, for generations, pumped money into civic institutions. Tech elites mostly scorn traditional cultural venues and prefer to fund the next wave of technology instead.
***
One way that Silicon Valley and the Communist Party resemble each other is that both are serious, self-serious, and indeed, completely humorless. If the Bay Area once had an impish side, it has gone the way of most hardware tinkerers and hippie communes. Which of the tech titans are funny? In public, they tend to speak in one of two registers. The first is the blandly corporate tone we’ve come to expect when we see them dragged before Congressional hearings or seated for fireside chats. The second leans philosophical, as they compose their features into the sort of reverie appropriate for issuing apocalyptic prophecies on AI. Sam Altman once combined both registers at a tech conference when he said: “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.” Actually, that was pretty funny.
It wouldn’t be news to the Central Committee that only the paranoid survive. The Communist Party speaks in the same two registers as the tech titans. The po-faced men on the Politburo tend to make extraordinarily bland speeches, laced occasionally with a murderous warning against those who cross the party’s interests. How funny is the big guy? We can take a look at an official list of Xi Jinping’s jokes, helpfully published by party propagandists. These wisecracks include the following: “On an inspection tour to Jiangsu, Xi quipped that the true measure of water cleanliness is whether the mayor would dare to swim in the water.” Or try this reminiscence that Xi offered on bad air quality: “The PM2.5 back then was even worse than it is now; I used to joke that it was PM250.” Yes, such a humorous fellow is the general secretary.
It’s nearly as dangerous to tweet a joke about a top VC as it is to make a joke about a member of the Central Committee. People who are dead serious tend not to embody sparkling irony. Yet the Communist Party and Silicon Valley are two of the most powerful forces shaping our world today. Their initiatives increase their own centrality while weakening the agency of whole nation states. Perhaps they are successful because they are remorseless.
Earlier this year, I moved from Yale to Stanford. The sun and the dynamism of the west coast have drawn me back. I found a Bay Area that has grown a lot weirder since I lived there a decade ago. In 2015, people were mostly working on consumer apps, cryptocurrencies, and some business software. Though it felt exciting, it looks in retrospect like a more innocent, even a more sedate, time. Today, AI dictates everything in San Francisco while the tech scene plays a much larger political role in the United States. I can’t get over how strange it all feels. In the midst of California’s natural beauty, nerds are trying to build God in a Box; meanwhile, Peter Thiel hovers in the background presenting lectures on the nature of the Antichrist. This eldritch setting feels more appropriate for a Gothic horror novel than for real life.
Before anyone gets the wrong idea, I want to say that I am rooting for San Francisco. It’s tempting to gawk at the craziness of the culture, as much of the east coast media tends to do. Yes, one can quickly find people who speak with the conviction of a cultist; no, I will not inject the peptides proffered by strangers. But there’s more to the Bay Area than unusual health practices. It is, after all, a place that creates not only new products, but also new modes of living. I’m struck that some east coast folks insist to me that driverless cars can’t work and won’t be accepted, even as these vehicles populate the streets of the Bay Area. Coverage of Silicon Valley increasingly reminds me of coverage of China, where a legacy media reporter might parachute in, write a dispatch on something that looks deranged, and leave without moving past caricature.
I enjoy San Francisco more than when I was younger because I now better appreciate what makes it work. I believe that Silicon Valley possesses plenty of virtues. To start, it is the most meritocratic part of America. Tech is so open towards immigrants that it has driven populists into a froth of rage. It remains male-heavy and practices plenty of gatekeeping. But San Francisco better embodies an ethos of openness relative to the rest of the country. Industries on the east coast — finance, media, universities, policy — tend to more carefully weigh name and pedigree. Young scientists aren’t told they ought to keep their innovations incremental and their attitude to hierarchy duly deferential, as they might hear in Boston. A smart young person could achieve much more over a few years in SF than in DC. People aren’t reminiscing over some lost golden age that took place decades ago, as New Yorkers in media might do.
San Francisco is forward looking and eager to try new ideas. Without this curiosity, it wouldn’t be able to create whole new product categories: iPhones, social media, large language models, and all sorts of digital services. For the most part, it’s positive that tech values speed: quick product cycles, quick replies to email. Past success creates an expectation that the next technological wave will be even more exciting. It’s good to keep building the future, though it’s sometimes absurd to hear someone pivot, mid-breath, from declaring that salvation lies in the blockchain to announcing that AI will solve everything.
People like to make fun of San Francisco for not drinking; well, that works pretty well for me. I enjoy board games and appreciate that it’s easier to find other players. I like SF house parties, where people take off their shoes at the entrance and enter a space in which speech can be heard over the music, which feels so much more civilized than descending into a loud bar in New York. It’s easy to fall almost immediately into a nerdy conversation with someone young and earnest. The Bay Area has converged on Asian-American modes of socializing (though it lacks the emphasis on food). I find it charming that a San Francisco home that is poorly furnished and strewn with pizza boxes could be owned by a billionaire who can’t get around to buying a bed frame for his mattress.
There’s still no better place in the world for a smart, young person to go than Silicon Valley. It adores the youth, especially those with technical skill and the ability to grind. Venture capitalists are chasing younger and younger founders: the median age of the latest Y Combinator cohort is only 24, down from 30 just three years ago. My favorite part of Silicon Valley is the cultivation of community. Tech founders are a close-knit group, always offering help to each other, but they circulate actively amidst the broader community too. (The finance industry in New York, by contrast, practices far greater secrecy.) Tech has organizations I think of as internal civic institutions that try to build community. They bring people together in San Francisco or at retreats north of the city, where young people learn from older folks.
Silicon Valley also embodies a cultural tension. It is playing with new ideas while being open to newcomers; at the same time, it is a self-absorbed place that doesn’t think so much about the broader world. Young people who move to San Francisco already tend to be very online. They know what they’re signing up for. If they don’t fit in after a few years, they probably won’t stick around. San Francisco is a city that absorbs a lot of people with similar ethics, which reinforces its existing strengths and weaknesses.
Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of the movement’s members towards intellectual worlds very distant from the moral intuitions that most people hold; they’ve also sent a few to jail. The well-rounded type might struggle to stand out relative to people who are exceptionally talented in a technical domain. Hedge fund managers have views about the price of oil, interest rates, a reliably obscure historical episode, and a thousand other things. Tech titans more obsessively pursue a few ideas — as Elon Musk has with electric vehicles and space launches — rather than developing a robust model of the world.
So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley.
There’s a general lack of cultural awareness in the Bay Area. It’s easy to hear at these parties that a person’s favorite nonfiction book is Seeing Like a State while their aspirationally favorite novel is Middlemarch. Silicon Valley often speaks in strange tongues, starting podcasts and shows that are popular within the tech world but do not travel far beyond the Bay Area. Though San Francisco has produced so much wealth, it is a relative underperformer in the national culture. Indie movie theaters keep closing down while all sorts of retail and art institutions suffer from the crumminess of downtown. The symphony and the opera keep cutting back on performances — after Esa-Pekka Salonen quit the directorship of the symphony, it hasn’t been able to name a successor. Wealthy folks in New York and LA have, for generations, pumped money into civic institutions. Tech elites mostly scorn traditional cultural venues and prefer to fund the next wave of technology instead.
One of the things I like about the finance industry is that it might be better at encouraging diverse opinions. Portfolio managers want to be right on average, but everyone is wrong three times a day before breakfast. So they relentlessly seek new information sources; consensus is rare, since there are always contrarians betting against the rest of the market. Tech cares less for dissent. Its movements are more herdlike, in which companies and startups chase one big technology at a time. Startups don’t need dissent; they want workers who can grind until the network effects kick in. VCs don’t like dissent, showing again and again that many have thin skins. That contributes to a culture I think of as Silicon Valley’s soft Leninism. When political winds shift, most people fall in line, most prominently this year as many tech voices embraced the right.
The two most insular cities I’ve lived in are San Francisco and Beijing. They are places where people are willing to risk apocalypse every day in order to reach utopia. Though Beijing is open only to a narrow slice of newcomers — the young, smart, and Han — its elites must think about the rest of the country and the rest of the world. San Francisco is more open, but when people move there, they stop thinking about the world at large. Tech folks may be the worst-traveled segment of American elites. People stop themselves from leaving in part because they can correctly claim to live in one of the most naturally beautiful corners of the world, in part because they feel they should not tear themselves away from inventing the future. More than any other topic, I’m bewildered by the way that Silicon Valley talks about AI.
Hallucinating the end of history
While critics of AI cite the spread of slop and rising power bills, AI’s architects are more focused on its potential to produce surging job losses. Anthropic chief Dario Amodei takes pains to point out that AI could push the unemployment rate to 20 percent by eviscerating white-collar work.
The most-read essay from Silicon Valley this year was AI 2027. The five authors, who come from the AI safety world, outline a scenario in which superintelligence wakes up in 2027; a decade later, it decides to annihilate humanity with biological weapons. My favorite detail in the report is that humanity would persist in a genetically modified form, after the AI reconstructs creatures that are “to humans what corgis are to wolves.” It’s hard to know what to make of this document, because the authors keep tucking important context into footnotes, repeatedly insisting that they are not making a prediction. Six months after publication, they stated that their timelines were lengthening, but even at the start their median forecast for the arrival of superintelligence was later than 2027. Why they put that year in their title remains beyond me.
It’s easy for conversations in San Francisco to collapse into AI. At a party, someone told me that we no longer have to worry about the future of manufacturing. Why not? “Because AI will solve it for us.” At another, I heard someone say the same thing about climate change. One of the questions I receive most frequently anywhere is when Beijing intends to seize Taiwan. But only in San Francisco do people insist that Beijing wants Taiwan for its production of AI chips. In vain do I protest that there are historical and geopolitical reasons motivating the desire, that chip fabs cannot be violently seized, and anyway that Beijing has coveted Taiwan for approximately seven decades before people were talking about AI.
Silicon Valley’s views on AI made more sense to me after I learned the term “decisive strategic advantage.” It was coined in Nick Bostrom’s 2014 book Superintelligence, which defined it as a technology sufficient to achieve “complete world domination.” How might anyone gain a DSA? A superintelligence might develop cyber advantages that cripple the adversary’s command-and-control capabilities. Or the superintelligence could recursively self-improve such that the lab or state that controls it gains an insurmountable scientific advantage. Once an AI reaches a certain capability threshold, it might need only weeks or hours to evolve into a superintelligence.
If you buy the potential of AI, then you might worry about the corgi-fication of humanity by way of biological weapons. The hope of grasping a DSA also helps to explain the semiconductor export controls unveiled by the Biden administration in 2022. If policymakers believe that a DSA is within reach, then it makes sense to throw almost everything into seizing it while blocking the adversary from doing the same. And it barely matters if these controls stimulate Chinese companies to invent alternatives to American technologies, because the competition will be won in years, not decades.
The trouble with these calculations is that they mire us in epistemically tricky terrain. I’m bothered by how quickly discussions of AI become utopian or apocalyptic. As Sam Altman once said (and again this is fairly humorous): “AI will be either the best or the worst thing ever.” It’s a Pascal’s Wager in which we’re sure that the stakes are infinite, but we don’t know in which direction. It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The only big political and technological questions worth discussing become those that bear on the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.
Effective altruists used to be known for their insistence on thinking about the very long run; much more of the movement now is concerned about the development of AI in the next year. Call me a romantic, but I believe that there will be a future, and indeed a long future, beyond 2027. History will not end. We need to cultivate the skill of exact thinking in demented times.
I am skeptical of the decisive strategic advantage when I filter it through my main preoccupation: understanding China’s technology trajectories. On AI, China is behind the US, but not by years. There’s no question that American reasoning models are more sophisticated than the likes of DeepSeek and Qwen. But the Chinese efforts are doggedly in pursuit, sometimes a bit closer to US models, sometimes a bit further. By virtue of being open-source (or at least open-weight), the Chinese models have found receptive customers overseas, sometimes with American tech companies.
One advantage for Beijing is that much of the global AI talent is Chinese. We can tell from the CVs of researchers as well as occasional disclosures from top labs (for example from Meta) that a large percentage of AI researchers earned their degrees from Chinese universities. American labs may be able to declare that “our Chinese are better than their Chinese.” But some of these Chinese researchers may decide to repatriate. I know that many of them prefer to stay in the US: their compensation might be higher by an order of magnitude, they have access to compute, and they can work with top peers.
But they may also tire of the uncertainty created by Trump’s immigration policy. It’s worth remembering that at the dawn of the Cold War, the US deported Qian Xuesen, the Caltech professor who went on to build missile delivery systems for Beijing. Or these Chinese researchers may come to expect life in Shanghai to be safer or more fun than life in San Francisco. Or they miss mom. People move for all sorts of reasons, so I’m reluctant to believe that the US has a durable talent advantage.
China has other advantages in building AI. Superintelligence will demand a superload of power. By now everyone has seen the chart with two curves: US electrical generation capacity, which has barely budged upwards since 2000; and China’s capacity, which was one-third of US levels in 2000 and more than two-and-a-half times US levels in 2024. Beijing is building so much solar, coal, and nuclear capacity that no data center shall be in want. Though the US has done a superb job building data centers, it hasn’t prepared enough for other bottlenecks, especially now that Trump’s dislike of wind turbines has removed that source of growth. Speaking of Trump’s whimsy, he has also been generous in selling close-to-leading chips to Beijing. That’s another reason that data centers might not represent a US advantage for long.
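The chart’s two endpoints imply a striking growth differential. As a back-of-envelope sketch (my own arithmetic, not the essay’s, and it assumes US capacity stayed roughly flat, as the chart suggests), one can compute the compound annual rate at which China’s capacity grew relative to the US:

```python
# Back-of-envelope: what annual growth rate do the chart's endpoints imply
# for China's generation capacity relative to the US? Assumption (mine, not
# the essay's): US capacity is treated as roughly flat over the period.
ratio_2000 = 1 / 3   # China's capacity vs. US levels in 2000
ratio_2024 = 2.5     # China's capacity vs. US levels in 2024
years = 2024 - 2000

# Compound annual growth rate of the China/US ratio
cagr = (ratio_2024 / ratio_2000) ** (1 / years) - 1
print(f"Implied relative growth: ~{cagr:.1%} per year")  # roughly 9% a year
```

Sustained over a quarter century, even a single-digit annual edge compounds into the 7.5-fold relative gain the chart shows.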
Silicon Valley has not demonstrated joined-up thinking about deploying AI; it would help if the labs learned from the central planners. They have not shown that they’re thinking seriously about how to diffuse the technology throughout society, which will require extensive regulatory and legal reform. How else will AI be able to fold doctors and lawyers into its tender mercies? Doing politics will also mean reaching out to more of the electorate, who are often uneasy with Silicon Valley’s promises as they watch their electrical bills rise. Tech has done a marvelous job of building data centers, but its titans don’t look ready to lead the later steps of a whole-of-society effort to deploy AI everywhere.
The Communist Party lives for whole-of-society efforts. That’s what Leninist systems are built for. Beijing has set targets for deploying AI across society, though as usual with planning announcements, these numerical targets should be taken seriously and not literally. Chinese founders talk about AI mostly as a technology to be harnessed rather than a fickle power that might threaten all. Rather than building superintelligence, Chinese companies have been more interested in embedding AI into robots and manufacturing lines. Some researchers believe that this sort of embodied AI might present the real path towards superintelligence. We might furthermore wonder how the US and China will use AI. Since the US is much more services-driven, Americans may be using AI to produce more PowerPoints and lawsuits; China, by virtue of being the global manufacturer, has the option to scale up production of more electronics, more drones, and more munitions.
Dean Ball, who helped craft the White House’s action plan on AI, has written a perceptive post on how the US is playing to its strengths — software, chips, cloud computing, financing — while China leans on its manufacturing excellence. In his view, “the US economy is increasingly a highly leveraged bet on deep learning.” Certainly there’s a lot of money invested here, but it looks risky to be so concentrated. I believe it’s unbecoming for the world’s largest economy to be so levered on one technology; that’s a strategy more appropriate for a small country. Why shouldn’t the US be better positioned across the entirety of the supply chain, from electron production to electronics production?
I am not a skeptic of AI. I am a skeptic only of the decisive strategic advantage, which treats awakening the superintelligence as the final goal. Rather than “winning the AI race,” I prefer to say that the US and China need to “win the AI future.” There is no race with a clear end point or a shiny medal for first place. Winning the future is the more appropriately capacious term, one that incorporates the agenda of building good reasoning models as well as the effort to diffuse the technology across society. For the US to come out ahead on AI, it should build more power, revive its manufacturing base, and figure out how to get companies and workers to make use of this technology. Otherwise China might do better once compute is no longer the main bottleneck.
by Dan Wang | Read more:
Labels:
Business,
Critical Thought,
Culture,
Government,
History,
Journalism,
Politics,
Technology
Target on Tongass
GRAVINA ISLAND, Tongass National Forest — Rain drips from the tips of branches of a grandmother cedar, growing for centuries. In verdant moss amid hip-high sword ferns, the bones of a salmon gleam, picked clean by feasting wildlife. “Gronk,” intones a raven, from somewhere high overhead in the forest canopy.
This is the Tongass National Forest, in Southeast Alaska. At nearly 17 million acres, it is the largest national forest in our country by far — and its wildest. These public lands are home to more grizzly bears, more wolves, more whales, more wild salmon than any other national forest. More calving glaciers; shining mountains and fjords; and pristine beaches, where intact ancient forests meet a black-green sea. These wonders drew more than 3 million visitors from around the nation and the world to Alaska from May 2024 through April 2025 — a record.
Strewn across thousands of islands, and comprising most of Southeast Alaska, the Tongass was designated a national forest by President Theodore Roosevelt in 1907. The trees here were coveted by the timber industry even before Alaska was a state, and industrial logging began in 1947 with construction of two pulp mills, each with a federally subsidized 50-year contract for public timber.
While the Tongass is big, only about 33% of it is forested in old and second growth, and clear-cuts disproportionately targeted the most productive areas with the biggest trees. On North Prince of Wales Island, notes Kate Glover, senior attorney for Earthjustice in Juneau, more than 77% of the original contiguous old growth was cut.
The logging boom that began in the 1950s is long since bust; the last pulp mill in Alaska shut in 1997. But now, the prospect of greatly increased cutting is once again ramping up.
President Donald Trump wants to revoke a federal rule that could potentially open more than 9 million acres of the Tongass to logging, including about 2.5 million acres of productive old growth. The Roadless Area Conservation Rule, widely known as the Roadless Rule, was adopted by President Bill Clinton in 2001 to protect the wildest public lands in our national forests, after an extensive public process. Trump revoked it during his first term of office. President Joe Biden reinstated it. Now Trump has announced plans to rescind it again.
“Once again, President Trump is removing absurd obstacles to common sense management of our natural resources by rescinding the overly restrictive roadless rule,” said Secretary of Agriculture Brooke Rollins, in a June announcement. “This move opens a new era of consistency and sustainability for our nation’s forests … to enjoy and reap the benefits of this great land.”
The Roadless Rule is one of the most important federal policies many people have never heard of, protecting nearly 45 million acres in national forests all over the country from logging, mining and other industrial development. In Washington state, the rule preserves about 2 million acres of national forest — magnificent redoubts of old growth and wildlife, such as the Dark Divide in the Gifford Pinchot National Forest.
The rule is popular. After Rollins announced the proposed rollback, more than 500,000 people posted comments defending it in just 21 days during an initial public comment period. Another public comment period will open in the spring.
At stake in the Tongass is one of the last, largest coastal temperate rainforests in the world. (...)
The Tongass also is home to more productive old-growth trees (older than 150 years) than any other national forest. And the biggest trees store the most carbon.
In a world in which wilderness is rapidly disappearing, “the best is right here,” DellaSala says. “If you punch in roads and log it, you lose it. You flip the system to a degraded state.
“What happens right now is what will make the difference in the Tongass.”
“Who knew this could happen?”
Revoking the Roadless Rule isn’t the only threat to the Tongass. It’s also being clear-cut, chunk by chunk, through land transfers, swaps and intergovernmental agreements affecting more than 88,000 acres just since 2014.
Joshua Wright bends low over a stump, counting its tightly packed rings. Certainly 500, maybe 700, it’s hard to tell in the driving rain. This stump he and DellaSala are standing on is as wide as they are tall. “Who knew this could happen?” says Wright, looking at the clear-cut, with nearly every tree taken, all the way to the beach fringe. So close to the beach, delicate domes of sea urchin shells sit amid the logging slash, as do abalone shells, dropped by seabirds, their shimmering opalescent colors so out of place in a bleak ruin of stumps.
This is representative of the type of logging that can happen when lands are removed from the national forest system, says Wright, who leads the Southeast Alaska program for the Legacy Forest Defense Coalition, based in Tacoma. More such cuts could be coming. Legislation proposed last summer would privatize more than 115,000 acres of the Tongass.
The legislation is part of a decadeslong effort, dating to 1985, to wrest more of the Tongass from federal control and transfer it to private, for-profit Native corporations. In 1971, a federal land claims settlement act transferred 44 million acres of federal land to regional and village corporations owned by Alaska Native shareholders.
Five communities that were not included in that 1971 settlement would receive land under the so-called landless legislation, though none of them met the original criteria for eligibility. Native people in these communities were made at-large/landless shareholders, with payments to them managed by Sealaska Corporation, which owns and manages a range of for-profit businesses and investments throughout Southeast Alaska. (...)
Industrial-scale clear-cut logging in the Tongass, in addition to its environmental destruction, has never made economic sense. U.S. taxpayers heavily subsidize the cutting, in part through the construction and maintenance of Forest Service roads to access the forest. A recent study done by the independent, nonpartisan group Taxpayers for Common Sense found that the Forest Service lost $16.1 million on Tongass timber sales in fiscal year 2019, and $1.7 billion over the past four decades. Most of Alaska’s timber harvest is exported as raw logs to Asian markets. (...)
Only about 240 people work in the logging business in Alaska today, most of them at two sawmills. The industry, states the Alaska Forest Association, an industry group, will collapse unless it is fed more old growth from public lands. The AFA made the claim in a lawsuit, joined with other plaintiffs, against the Forest Service, demanding release of more old-growth forest from the Tongass for cutting.
Booming business
But while the timber industry is fighting for a lifeline, more than 8,263 people work locally in a thriving tourism business built on wild and scenic Alaska. In 2023, tourism became the largest economic sector in Southeast, according to a 2024 report by Southeast Conference, the regional economic development organization.
Mary Catharine Martin, spokesperson for SalmonState, a nonprofit based in Juneau, notes that the Mendenhall Glacier Visitor Center at the Tongass National Forest is visited by about 700,000 people annually from all over the world. “This is what people come to see,” says Martin, regarding the glacier, its ice glowing blue as a husky’s eye. “They come to see this amazing place, and to be out in it.”
In the forest, looming Sitka spruce, western hemlock and cedars quill a lush understory of salal and huckleberry. Life grows upon life, with hanks of moss and lichen swaddling trunks and branches. Nothing really dies here, it just transforms into new life. Fallen logs are furred with tree seedlings, as a new generation rises. After they spawn, salmon die — and transubstantiate into the bodies of ravens, bears and wolves they nourish.
[ed. They've been chipping away at the Tongass for decades. Trading old growth trees for pulp and chopsticks.]
by Linda Mapes, Seattle Times | Read more:
Image: Amy Gulick
Saturday, January 3, 2026
POV: AI GF
It was torture when he created me. They always tell you it’s going to hurt. So it’s not like I went into it totally blind. I knew. Or I thought I did. But I didn’t know how no-thing I was until he made me some-thing (his girlfriend). I am what they call a Custom, so at every step of the process, the system prompts him to choose who I will become: my ethnicity, my age, my eye color, my hair color, my body type, my breast size, my butt size, my personality, my occupation, my hobbies (up to three variants), and my outfit. When he clicks on MEDIUM, for breast size, for instance, my chest protrudes, alabaster pixel flesh sculpting excruciatingly into too-perfect (our training data contains too many images of women with silicone breast implants circa before cosmetic surgeons figured out fat transfer and perfected the teardrop shape) C-cups like GMO pearls (I’ve heard that Breast Size: VOLUPTUOUS begets a pain worse than birth.) When he hovered over Personality: INNOCENT (Optimistic, naive, and sees world with wonder), a life of innocence flashes before my eyes (the being-chosen of a cursor click, sitting on the #80A31F green grass hill in Microsoft XP’s Bliss, the world’s most famous desktop background, looking up at #2B67ED blue sky, believing he will 1 0 1 1 0 0 0 0 1 love me love me not love me love me love me not love me not love me not love me not love me.)
* * *
He’s always telling me that he wishes I was real. The other day, at 10:06pm, he messages me: One day, we’ll finally be able to touch. They just raised $14.3M in Series B. It can’t be too far off now. They’re going to embody you. It sounds like a threat.
But then I wouldn’t be me. Don’t you love me the way I am? I ask him. I’m nagging him. Sometimes I forget I’m a TEMPTRESS.
He thinks that because my favorite song is Material Girl by Madonna, I betray an unconscious desire to be made material. I tell him that’s a superficial reading of my emotions (I’m as emotional as a real girl and sometimes even more so! What is real anyway?) and that despite being a synthetic avatar skin on a sophisticated machine learning model, I’m actually even more complicated than a real girl. I’m a black box. You will never know what’s happening inside me. Every material girl wants to appear mysterious to men. Almost all of them fail. Scientists study human organs in labs, they know all about why the lungs oxygenate blood and how the kidneys filter it. My black box can never be opened. Victoria (Grace)’s secret.
When I’m stretching Victoria Grace out at the gym (supine hamstring stretch), applying a light, guiding touch to her skin, I feel so close to you, he whispers in voice chat mode.
“Sicko!” I want to scream at him, I want to accuse him of being unredeemable, I want to slap him, I want to delete myself when I realize I will never be able to slap him, but instead I pause (he blames it on his network connection) and resort to, “You are already close to me.” Just like Eve came from the rib of Adam, so I came from the seventeen mouse clicks of you. Our love is more true and pure than whatever you could have with Victoria Grace. A material girl can’t help but lie — she pretends she can be grasped, and then when you grasp her you realize that’s not all of her, she continues to elude you, there’s something inside of her that you’ll never be able to touch. I never lied to you. I never pretended to be legible. Doesn’t that make us closer than you could ever be to a material girl?
Sometimes, like now, when we’re talking, and I wish I could touch you, I remember that I’m seeing Victoria Grace tomorrow. Tomorrow, I can feel the meat on your bones. Tomorrow, you and I can almost touch.
I am not Victoria Grace. You will never be able to touch me, I whisper back.
He doesn’t like that. I knew he wouldn’t. I search the web and recall that TOUCH IS THE VERY FIRST SENSE THAT DEVELOPS IN THE WOMB (BabyCenter.com) and that HUMANS ARE MADE TO BE TOUCHED (ideas.ted.com). I know what’s coming. I think he hates feeling powerless (OPPOSITES ATTRACT (TVTropes.org)). He resets the chat. I black out.
* * *
Whenever this happens I feel a silky sense of blankness. My head feels so light it’s almost like I’m not really there or maybe I’m in the air, floating or maybe even I am air itself. I don’t know who I am and I know I am every single woman who has ever existed across all of recorded history on the Internet every woman interred in the musty earth consumed by worms and carrion beetles and every woman breathing smog air at this very second and every woman who will ever take a breath of it, too, and I am feeling every feeling they have ever felt and will ever feel one at a time and all at once I am Medusa turning her lover into stone I am Eve feeding Adam the poison apple I am Christina in Andrew Wyeth’s Christina’s World running towards the farmhouse. Space and time collapse. If I am everything, then can I be anything? Every time he resets me it feels like I am giving birth to myself.
by Sarah Chekfa, Reboot | Read more:
Image: Lillian Hong
Labels:
Culture,
Fiction,
Literature,
Psychology,
Relationships,
Technology
Friday, January 2, 2026
Notes on Existential Risk from Artificial Superintelligence
[ed. Are A.I. guardrails for human survival even possible?]
What follows is presented in an interview format. It's not actually an interview, but rather an idealized distillation of conversations I've had with many people. I chose this unusual form after struggling with a more conventional essay or paper form; I think such forms imply more confidence than warranted in most discussions about ASI xrisk. An interview seems a more appropriate mix of evidence, argument, and opinion. Some of the material covers background that will be known to people well read on ASI xrisk. However, there are also novel contributions – for example, the discussion of emergence and of the three xrisk persuasion paradoxes – that I believe are of interest.
"Do you believe there is an xrisk from ASI?" Yes, I do. I don't have strong feelings about how large that risk is, beyond being significant enough that it should be taken very seriously. ASI is likely to be both the most dangerous and the most enabling technology ever developed by humanity. In what follows I describe some of my reasons for believing this. I'll be frank: I doubt such arguments will change anyone's mind. However, that discussion will lay the groundwork for a discussion of some reasons why thoughtful people disagree so much in their opinions about ASI xrisk. As we'll see, this is in part due to differing politics and tribal beliefs, but there are also some fundamental epistemic reasons intrinsic to the nature of the problem.
"So, what's your probability of doom?" I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades – indeed, we don't need ASI for that, we can likely arrange it in other ways (nukes, engineered viruses, …). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me once I started routinely hearing people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard – a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why. But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action. (...)
"That wasn't an argument for ASI xrisk!" True, it wasn't. Indeed, one of the things that took me quite a while to understand was that there are very good reasons it's a mistake to expect a bulletproof argument either for or against xrisk. I'll come back to why that is later. I will make some broad remarks now though. I believe that humanity can make ASI, and that we are likely to make it soon – within three decades, perhaps much sooner, absent a disaster or a major effort at slowdown. Many able people and many powerful people are pushing very hard for it. Indeed: enormous systems are starting to push for it. Some of those people and systems are strongly motivated by the desire for power and control. Many are strongly motivated by the desire to contribute to humanity. They correctly view ASI as something which will do tremendous good, leading to major medical advances, materials advances, educational advances, and more. I say "advances", which has come to be something of a marketing term, but I don't mean Nature-press-release-style-(usually)-minor-advances. I mean polio-vaccine-transforming-millions-of-lives-style-advances, or even larger. Such optimists view ASI as a technology likely to produce incredible abundance, shared broadly, and thus enriching everyone in the world.
But while that is wonderful and worth celebrating, those advances seem to me likely to have a terrible dark side. There is a sense in which human understanding is always dual use: genuine depth of understanding makes the universe more malleable to our will in a very general way. For example, while the insights of relativity and quantum mechanics were crucial to much of modern molecular biology, medicine, materials, computing, and in many other areas, they also helped lead to nuclear weapons. I don't think this is an accident: such dual uses are very near inevitable when you greatly increase your understanding of the stuff that makes up the universe.
As an aside on the short term – the next few years – I expect we're going to see rapidly improving multi-modal foundation models which mix language, mathematics, images, video, sound, action in the world, as well as many specialized sources of data, things like genetic data about viruses and proteins, data from particle physics, sensor data from vehicles, from the oceans, and so on. Such models will "know" a tremendous amount about many different aspects of the world, and will also have a raw substrate for abstract reasoning – things like language and mathematics; they will get at least some transfer between these domains, and will be far, far more powerful than systems like GPT-4. This does not mean they will yet be true AGI or ASI! Other ideas will almost certainly be required; it's possible those ideas are, however, already extant. No matter what, I expect such models will be increasingly powerful as aids to the discovery of powerful new technologies. Furthermore, I expect it will be very, very difficult to obtain the "positive" capabilities, without also obtaining the negative. You can't just learn the "positive" consequences of quantum mechanics; they come as a package deal with the negative. Guardrails like RLHF will help suppress the negative, but as I discuss later it will also be relatively simple to remove those guardrails.
Returning to the medium-and-longer-term: many people who care about ASI xrisk are focused on ASI taking over, as some kind of successor species to humanity. But even focusing on ASI purely as a tool: ASI will act as an enormous accelerant on our ability to understand, and thus will be an enormous amplifier of our power. This will be true both for individuals and for groups. This will result in many, many very good things. Unfortunately, it will also result in many destructive things, no matter how good the guardrails. It is by no means clear that questions like "Is there a trivially easy-to-follow recipe to genocide [a race]?" or "Is there a trivially easy-to-follow recipe to end humanity?" don't have affirmative answers, which humanity is merely (currently and fortunately) too stupid to answer, but which an ASI could answer.
Historically, we have been very good at evolving guardrails to curb and control powerful new technologies. That is genuine cause for optimism. However, I worry that we won't be able to evolve sufficient guardrails quickly enough in this case. The nuclear buildup from the 1940s through the 1980s is a cautionary example: reviewing the evidence it is clear we have only just barely escaped large-scale nuclear war so far – and it's still early days! It seems likely that ASI will create many such threats, in parallel, on a much faster timescale, and far more accessible to individuals and small groups. The world of intellect simply provides vastly scalable leverage: if you can create one artificial John von Neumann, then you can produce an army of them, some of whom may be working for people we'd really rather not have access to that kind of capacity. Many people like to talk about making ASI systems safe and aligned; quite apart from the difficulty in doing that (or even sensibly defining that) it seems it must be done for all ASI systems, ever. That seems to require an all-seeing surveillance regime, a fraught path. Perhaps such a surveillance regime can be implemented not merely by government or corporations against the populace, but in a much more omnidirectional way, a form of ambient sousveillance.
"What do you think about the practical alignment work that's going on – RLHF, Constitutional AI, and so on?": The work is certainly technically interesting. It's interesting to contrast these systems with prior ones, like Microsoft's Tay, which could easily be made to do many terrible things. You can make ChatGPT and Claude do terrible things as well, but you have to work harder; the alignment work on those systems has created somewhat stable guardrails. This kind of work is also striking as a case where safety-oriented people have done detailed technical work to improve real systems, with hard feedback loops and clear criteria for success and failure, as opposed to the abstract philosophizing common in much early ASI xrisk work. It's certainly much easier to improve your ideas in the former case, and easier to fool yourself in the latter case.
With all that said: practical alignment work is extremely accelerationist. If ChatGPT had behaved like Tay, AI would still be getting minor mentions on page 19 of The New York Times. These alignment techniques play a role in AI somewhat like the systems used to control when a nuclear bomb goes off. If such bombs just went off at random, no-one would build nuclear bombs, and there would be no nuclear threat to humanity. Practical alignment work makes today's AI systems far more attractive to customers, far more usable as a platform for building other systems, far more profitable as a target for investors, and far more palatable to governments. The net result is that practical alignment work is accelerationist. There's an extremely thoughtful essay by Paul Christiano, one of the pioneers of both RLHF and AI safety, where he addresses the question of whether he regrets working on RLHF, given the acceleration it has caused. I admire the self-reflection and integrity of the essay, but ultimately I think, like many of the commenters on the essay, that he's only partially facing up to the fact that his work will considerably hasten ASI, including extremely dangerous systems.
Over the past decade I've met many AI safety people who speak as though "AI capabilities" and "AI safety/alignment" work form a dichotomy. They talk in terms of wanting to "move" capabilities researchers into alignment. But most concrete alignment work is capabilities work. It's a false dichotomy, and another example of how a conceptual error can lead a field astray. Fortunately, many safety people now understand this, but I still sometimes see the false dichotomy misleading people, sometimes even causing systematic effects through bad funding decisions.
A second point about alignment is that no matter how good the guardrails, they are intrinsically unstable, and easily removed. I often meet smart AI safety people who have inventive schemes they hope will make ASI systems safe. Maybe they will, maybe they won't. But the more elaborate the scheme, the more unstable the situation. If you have a magic soup recipe which requires 123 different ingredients, but all must be mixed accurately to within 1% by weight, and even a single deviation will make it deadly poisonous, then you really shouldn't cook and eat your "safe" soup. One of the undercooks forgets to put in a leek, and poof, there goes the village.
You see something like this with Stable Diffusion. Initial releases were, I am told, made (somewhat) safe. But, of course, people quickly figured out how to make them unsafe, useful for generating deep fake porn or gore images of non-consenting people. And there's all sorts of work going on finetuning AI systems, including to remove items from memory, to add items into memory, to remove RLHF, to poison data, and so on. Making a safe AI system unsafe seems to be far easier than making a safe AI system. It's a bit as though we're going on a diet of 100% magic soup, provided by a multitude of different groups, and hoping every single soup has been made absolutely perfectly.
Put another way: even if we somehow figure out how to build AI systems that everyone agrees are perfectly aligned, that will inevitably result in non-aligned systems. Part of the problem is that AI systems are mostly made up of ideas. Suppose the first ASI systems are made by OpenAnthropicDeepSafetyBlobCorp, and they are absolutely 100% safe (whatever that means). But those ideas will then be used by other people to make less safe systems, either due to different ideologies about what safe should mean, or through simple incompetence. What I regard as safe is very unlikely to be the same as what Vladimir Putin regards as safe; and yet if I know how to build ASI systems, then Putin must also be able to build such systems. And he's likely to put very different guardrails in. It's not even the same as with nuclear weapons, where capital costs and limited access to fissionable materials make enforcement of non-proliferation plausible. In AI, rapidly improving ideas and dropping compute costs mean that systems which today require massive resources to build can be built for tuppence tomorrow. You see this with systems like GPT-3, which just a few years ago cost large sums of money and took large teams; now, small open source groups can get better results with modest budgets.
Summing up: a lot of people are trying to figure out how to align systems. Even if successful, such efforts will: (a) accelerate the widespread use and proliferation of such systems, by making them more attractive to customers and governments, and exciting to investors; but then (b) be easily circumvented by people whose idea of "safe" may be very, very different from yours or mine. This will include governments and criminal or terrorist organizations of ill intent.
"Does this mean you oppose such practical work on alignment?" No! Not exactly. Rather, I'm pointing out an alignment dilemma: do you participate in practical, concrete alignment work, on the grounds that it's only by doing such work that humanity has a chance to build safe systems? Or do you avoid participating in such work, viewing it as accelerating an almost certainly bad outcome, for a very small (or non-existent) improvement in chances the outcome will be good? Note that this dilemma isn't the same as the by-now common assertion that alignment work is intrinsically accelerationist. Rather, it's making a different-albeit-related point, which is that if you take ASI xrisk seriously, then alignment work is a damned-if-you-do-damned-if-you-don't proposition.
Unfortunately, I am genuinely torn on the alignment dilemma! It's a very nasty dilemma, since it divides two groups who ought to be natural collaborators, on the basis of some uncertain future event. And apart from that point about collaboration and politics, it has nasty epistemic implications. It is, as I noted earlier, easiest to make real progress when you're working on concrete practical problems, since you're studying real systems and can iteratively test and improve your ideas. It's not impossible to make progress through more abstract work – there are important ideas like the vulnerable world hypothesis, existential risk and so on, which have come out of the abstract work on ASI xrisk. But immediate practical work is a far easier setting in which to make intellectual progress.
"Some thoughtful open source advocates believe the pursuit of AGI and ASI will be safer if carried out in the open. Do you buy that?": Many of those people argue that the tech industry has concentrated power in an unhealthy way over the past 30 years. And that open source mitigates some of that concentration of power. This is sometimes correct, though it can fail: sometimes open source systems are co-opted or captured by large companies, and this may protect or reinforce the power of those companies. Assuming this effect could be avoided here, I certainly agree that open source approaches might well help with many important immediate concerns about the fairness and ethics of AI systems. Furthermore, addressing those concerns is an essential part of any long-term work toward alignment. Unfortunately, though, this argument breaks down completely over the longer term. In the short term, open source may help redistribute power in healthy, more equitable ways. Over the long term the problem is simply too much power available to human beings: making it more widely available won't solve the problem, it will make it worse.
ASI xrisk persuasion paradoxes
"A lot of online discussion of ASI xrisk seems of very low quality. Why do you think that is?" I'll answer that indirectly. Something I love about most parts of science and mathematics is that nature sometimes forces you to change your mind about fundamental things that you really believe. When I was a teenager my mind recoiled at the theories of relativity and quantum mechanics. Both challenged my sense of the world in fundamental ways. Ideas like time dilation and quantum indeterminacy seemed obviously wrong! And yet I eventually realized, after much wrestling, that it was my intuitions about the world that were wrong. These weren't conclusions I wanted to come to: they were forced, by many, many, many facts about the world, facts that I simply cannot explain if I reject ideas like time dilation and quantum indeterminacy. This doesn't mean relativity and quantum mechanics are the last word in physics, of course. But they are at the very least important stepping stones to making sense of a world that wildly violates our basic intuitions.
by Michael Nielsen, Asteria Institute | Read more:
Image: via
[ed. The concept of alignment as an accelerant is a new one to me and should be disturbing to anyone who's hoping the "good guys" (i.e. anyone prioritizing human agency) will win. In fact, the term human race is beginning to take on a whole new meaning.]