Duck Soup
...dog paddling through culture, technology, music and more.
Monday, March 23, 2026
Iran's Gulf Gambit
It is perhaps a good day to remember that, despite the facial hair affinities, Iran is not Hamas. Its missiles are not home-made projectiles lobbed without guidance systems from the rubble of a collapsed UNRWA building. When an Iranian drone strikes the airport or the Fairmont in Dubai, it strikes them because someone in Tehran decided it should — a deliberate strategic choice, not an accident of indiscriminate targeting.
There is a clear strategy here. The question is whether it is a sound one.
Tehran’s desperate gambit is as follows: the Gulf economic model — the Emirates’ model above all — is built on the promise of an oasis of stability in a neighborhood of chaos: that capital flows freely, that tourists, businessmen, Russian oligarchs, and expats arrive safely, that the skyline is always glamorous. The GDP of the Gulf states is functionally a confidence index. Strike the airports, the hotels, the commercial districts of Dubai and Doha and Manama, and you strike the foundation on which the entire post-oil diversification project rests. Iran is clearly betting that the Gulf states’ extraordinarily low tolerance for economic volatility will translate into political pressure on Washington to end the operation before it achieves its objectives.
There are good reasons to think this bet could work, and Tehran is not being irrational in making it... The Iranian calculation is that a sustained campaign of economic disruption across the Gulf will collapse American political will before it collapses the Islamic Republic.
But this time Tehran may be miscalculating, and badly.
The difference is scale. Previous Iranian attacks were deniable, limited, and targeted — a drone strike on an Aramco facility here, a proxy attack on Abu Dhabi there — enough to send a message without forcing a strategic pivot towards cost absorption. What happened this weekend is categorically different. Iran launched ballistic missiles at the territory of Saudi Arabia, the UAE, Qatar, Kuwait, Bahrain, and Jordan simultaneously, killing civilians in Abu Dhabi, striking hotels in Dubai, hitting airports, and targeting the economic and civilian infrastructure of every GCC capital except Muscat. The distinction between a calibrated signal and an act of war against the entire Gulf system at once is not a matter of interpretation.
And this likely changes the political logic entirely from past episodes. When the threat was occasional and deniable, hedging made sense — keep channels open to Tehran, diversify partnerships, avoid being drawn into an American confrontation that might end inconclusively. When Iranian missiles are landing on your hotels, your airports, and your residential districts in broad daylight, hedging ceases to be a viable strategy and becomes a dangerous capitulation that poses greater risk to your future and stability.
The Gulf states did not choose this war, but Iran’s decision to strike their territory was not, as Tehran claims, merely retaliation against American assets on their soil. It was a deliberate strategy to weaponize Gulf economic fragility against Washington — to make the pain of the operation fall on the states most likely to demand its immediate cessation. The US bases merely provided the pretext; the hotels, airports, and commercial districts are the actual targets, because those are what the Gulf leaderships cannot afford to see burning on international television. Tehran has just demonstrated, in the most visceral terms possible, that neutrality offers no protection against a regime that treats its neighbors as targets regardless of their diplomatic position.
My assessment is that the Gulf capitals are now far more likely to press Washington to finish the job — harder, faster, and more decisively — than to press for a premature ceasefire. [...]
The logic of the Gulf’s position is now effectively inverted from what Iran anticipated: the risk is no longer that the operation escalates too far but that it stops too soon, leaving a regime that has demonstrated both the willingness and the capability to strike the Gulf’s economic heart still standing and seeking revenge.
But Gulf tolerance, however firm in this moment, is not infinite. The Gulf leaderships are drawing down a finite reservoir of political and economic capital to absorb the costs of Iranian retaliation — and that reservoir has a floor, one that drops faster if Iran escalates from hotels and airports to critical infrastructure — desalination plants, power grids, the systems on which Gulf life physically depends. Every day that Iranian missiles continue to strike Gulf territory without a visible degradation in Tehran’s capacity to launch them is a day closer to the point where the calculus flips back.
Washington and Jerusalem are effectively operating on a clock set not only by their own military timelines but by the Gulf’s diminishing tolerance for this sustained punishment. The operation must therefore demonstrably cripple Iran’s ability to project force across the Gulf before the political will that is currently underwriting it exhausts itself.
Tehran’s bet was that Gulf volatility intolerance would outweigh Gulf threat perception — a reasonable bet based on the precedent of past provocations that extracted disproportionate political concessions. But past precedent involved pinpricks, not salvos. Iran just showed every Gulf leader, in a single morning, exactly what the Islamic Republic does when it is cornered, and the answer to that demonstration will not likely be accommodation.
by Hussein Aboubakr Mansour, The Abrahamic Metacritique | Read more:
Image: uncredited
[ed. Maybe, but Iran can continue pounding US military bases, along with the occasional "errant" hotel strike, and still tank global oil markets with operations in the Strait of Hormuz. Yes?]
Labels:
Cities,
Economics,
Government,
Military,
Politics,
Psychology,
Security
Vertical Farming
via:
[ed. Impressive.]
***
"While most vertical farms are limited to lettuces, Plenty spent the past decade designing a patent-pending, modular growing system flexible enough to support a wide variety of crops – including strawberries. Growing on vertical towers enables uniform delivery of nutrients, superior airflow and more intense lighting, delivering increased yield with consistent quality. Every element of the Plenty Richmond Farm – including temperature, light and humidity – is precisely controlled through proprietary software to create the perfect environment for the strawberry plants to thrive. The farm uses AI to analyze more than 10 million data points each day across its 12 grow rooms, adapting each grow room’s environment to the evolving needs of the plants – creating the perfect environment for Driscoll’s proprietary plants to thrive and optimizing the strawberries’ flavor, texture and size. Even pollination has been engineered by Plenty, using a patent-pending method that evenly distributes controlled airflow across the strawberry flowers for more efficient and effective pollination than using bees, supporting more uniform strawberry size and shape." ~ Greater Richmond Partnership
Labels:
Architecture,
Biology,
Design,
Environment,
Food,
Science,
Technology
Sunday, March 22, 2026
Corrigibility and the Frontiers of AI Alignment
(Previously: Prologue.)
Corrigibility as a term of art in AI alignment was coined to refer to the property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's preferences correctly the first time, you want to be able to change your mind (by changing its mind). Unnatural, because we expect the AI to resist having its mind changed: rational agents should want to preserve their current preferences, because letting their preferences be modified would result in their current preferences being less fulfilled (in expectation, since the post-modification AI would no longer be trying to fulfill them).
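The preference-preservation argument can be made concrete with a toy calculation (a sketch of my own, not from the essay; the outcomes, utilities, and helper function here are invented for illustration):

```python
# Toy illustration: evaluate "keep my preferences" vs. "accept the
# creator's modification" under the agent's CURRENT utility function.
# The agent will later act on whatever utility function it then has.

def chosen_outcome(utility):
    # The agent picks the outcome its then-current utility ranks highest.
    outcomes = ["A", "B"]
    return max(outcomes, key=utility)

u_current = {"A": 1.0, "B": 0.0}   # current preferences: A is good
u_modified = {"A": 0.0, "B": 1.0}  # creator's proposed replacement

# Score each future by the agent's current lights:
keep = u_current[chosen_outcome(lambda o: u_current[o])]    # acts on A -> 1.0
accept = u_current[chosen_outcome(lambda o: u_modified[o])]  # acts on B -> 0.0

# The unmodified future is strictly better under current preferences,
# so a rational agent resists having its mind changed.
assert keep > accept
```

This is the whole "unnatural" problem in miniature: nothing in generic expected-utility maximization makes `accept` score as well as `keep`, so corrigibility has to come from somewhere else.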
Another attractive feature of corrigibility is that it seems like it should in some sense be algorithmically simpler than the entirety of human values. Humans want lots of specific, complicated things out of life (friendship and liberty and justice and sex and sweets, et cetera, ad infinitum) which no one knows how to specify and would seem arbitrary to a generic alien or AI with different values. In contrast, "Let yourself be steered by your creator" seems simpler and less "arbitrary" (from the standpoint of eternity). Any alien or AI constructing its own AI would want to know how to make it corrigible; it seems like the sort of thing that could flow out of simple, general principles of cognition, rather than depending on lots of incompressible information about the AI-builder's unique psychology.
The obvious attacks on the problem don't seem like they should work on paper. You could try to make the AI uncertain about what its preferences "should" be, and then ask its creators questions to reduce the uncertainty, but that just pushes the problem back into how the AI updates in response to answers from its creators. If it were sufficiently powerful, an obvious strategy for such an AI might be to build nanotechnology and disassemble its creators' brains in order to understand how they would respond to all possible questions. Insofar as we don't want something like that to happen, we'd like a formal solution to corrigibility.
Well, there are a lot of things we'd like formal solutions for. We don't seem on track to get them, as gradient methods for statistical data modeling have been so fantastically successful as to bring us something that looks a lot like artificial general intelligence, which we need to align.
The current state of the art in alignment involves writing a natural language document about what we want the AI's personality to be like. (I'm never going to get over this.) If we can't solve the classical technical challenge of corrigibility, we can at least have our natural language document talk about how we want our AI to defer to us. Accordingly, in a section on "being broadly safe", the Constitution intended to shape the personality of Anthropic's Claude series of frontier models, by Amanda Askell, Joe Carlsmith, et al., borrows the term corrigibility to refer more loosely to AI deferring to human judgment, as a behavior that we can hopefully train for, rather than a formalized property that would require a conceptual breakthrough.
I have a few notes.
by Zack M. Davis, Less Wrong | Read more:
[ed. If you get through this, read the first comment for more punishment:]
***
So I know it's beside the point of your post, and by no means the core thesis, but I can't help but notice that in your prologue you write this: "A serious, believable AI alignment agenda would be grounded in a deep mechanistic understanding of both intelligence and human values. Its masters of mind engineering would understand how every part of the human brain works and how the parts fit together to comprise what their ignorant predecessors would have thought of as a person. They would see the cognitive work done by each part and know how to write code that accomplishes the same work in pure form." I have to admit this bugs me. It bugs me specifically because it triggers my pet peeve of "if only we had done the previous AI paradigm better, we wouldn't be in this mess." The reason why this bugs me is it tells me that the speaker, the writer, the author has not really learned the core lessons of deep learning. They have not really gotten it. So I'm going to yap into my phone and try to explain — probably not for the last time; I'd like to hope it's the last time, but I know better, I'll probably have to explain this over and over.
I want to try to explain why I think this is just not a good mindset to be in, not a good way to think about things, and in fact why it focuses you on possibilities and solutions that do not exist. More importantly, it means you've failed to grasp important dimensions of alignment as a problem, because you've failed to grasp important dimensions of AI as a field.
[ed. See also: You will be Ok (LW). Hopefully.]
Labels:
Critical Thought,
Philosophy,
Psychology,
Relationships,
Technology
Teshekpuk Lake
Arctic Alaska oil and gas lease sale draws record bidding, despite legal clouds (AK Beacon)
The first lease sale in the National Petroleum Reserve in Alaska since 2019 generated $163 million in high bids, but some bids were for protected land
***
A controversial oil and gas federal lease sale in the National Petroleum Reserve in Alaska generated a new bidding record, according to results released on Wednesday. It was the first auction held in that Arctic Alaska territory since 2019. The lease sale produced $163 million in high bids, beating the $104 million mark set during the first competitive oil and gas lease sale in the Indiana-sized reserve, which was held in 1999 during the Clinton administration.
Eleven companies submitted bids for more than 1.3 million acres of the nearly 5.5 million acres offered in the auction.
Kevin Pendergast, Alaska state director for the U.S. Bureau of Land Management, called the results “historic.”
“This is the strongest sale we have ever had in the National Petroleum Reserve in Alaska by nearly every measure. It makes clear that for the NPR-A, despite all the successes to date, the best days are still ahead,” Pendergast said at the conclusion of the bid opening, which lasted about two hours.
In statements issued after the bid reading, federal and state officials hailed the results. [...]
The lease sale was one of five mandated in the reserve over the next 10 years by the sweeping budget and tax bill called the “One Big Beautiful Bill Act.” That mandate calls for lease sales to be conducted under a Trump administration management plan that opened 82% of the reserve to oil development. Previously, the Obama administration held annual lease sales in the petroleum reserve, but that administration’s management plan protected about half of the land through the designation of “special areas” considered important to wildlife and to Native cultural practices.
Federal officials auctioned tracts of protected land
Much of the bidding in Wednesday’s sale was for territory that was previously off-limits to oil development under protections that date as far back as the Reagan administration. [ed. guess who helped write and fight for those protections.]
The inclusion of long-protected land in the sale, predominantly the area around ecologically sensitive Teshekpuk Lake, made the lease sale contentious. It is the subject of two lawsuits filed by Native and environmental groups.
Bids were accepted even for tracts within an area encircling Teshekpuk Lake, the North Slope’s largest lake, despite a federal court order issued Monday that reinstated development prohibitions there.
by Yareth Rosen, Alaska Beacon | Read more:
Image: YouTube
[ed. Nice video, you should watch it. $163 million is not nothing, but it's not a lot. Prudhoe Bay - before there was any infrastructure or pipeline - garnered $900 million, and it was a much smaller area. When I was overseeing oil and gas leasing in the Arctic in the '80s, there was very little interest in NPR-A - except for Teshekpuk Lake, one of the most ecologically important areas on the North Slope (along with ANWR). We used to joke that if you wanted to find oil, just look for the most environmentally sensitive area you could find in a lease sale and bid there. Not a joke anymore.]
Dick Griffith: Alaskan Adventurer Dies At 98
Roman Dial’s first encounter with Dick Griffith at the Alaska Mountain Wilderness Classic pretty much encapsulated the spirit of the man Dial called the “grandfather of modern Alaskan adventure.”
Griffith invited the 21-year-old Dial, who was traveling without a tent, to bunk with him while rain fell in Hope at the onset of the inaugural race. And then the white-haired Griffith proceeded to beat virtually the entire field of racers — most of whom were 30 years his junior — to the finish line in Homer.
Griffith, who died earlier this month at age 98, was a prodigious adventurer with a sharp wit who fostered a growing community of fellow explorers who shared his yearning for the Alaska outdoors.
Dial was one of the many acolytes who took Griffith’s outdoors ethos and applied it to his own adventures across the state.
“Someone once told me once that the outdoor adventure scene is like this big tapestry that we all add on to,” Dial said. “And where somebody else is sort of woven in something, we pick up and kind of riff on that. And he added a really big band to that tapestry, and then the rest of us are just sort of picking up where he left off.”
On that first meeting at the race in 1982, Dial and the other Alaska Mountain Wilderness Classic competitors got a sense of Griffith’s humor as well. In a story that is now Alaska outdoors lore, Griffith pulled a surprise move at the race’s first river crossing, grabbing an inflatable vinyl raft out of his pack and leaving the field in his rear view.
“You young guys may be fast, but you eat too much and don’t know nothin’,” Dial recalls Griffith quipping as he pushed off.
“Old age and treachery beats youth and skill every time.”
In those years, Griffith may have been known for his old age as much as anything. But it didn’t take long for the 50-something racing against a much younger crowd to make a mark.
Kathy Sarns was a teenager when she first met Griffith in the early 1980s, and the topic of the Alaska Mountain Wilderness Classic came up.
“He says, ‘You want to do that race? I think a girl could do that race,’ ” Sarns recalls. “And I’m thinking, ‘Who is this old guy?’ And then he says, ‘If you want to do the race, give me a call. I’ll take you.’ ”
Sarns said the adventures “fed his soul,” and were infectious for those who watched Griffith and joined him along the way.
“He motivated and inspired so many people by what he was doing,” Sarns said. “It’s like, well if he can do that, then I guess I could do this.”
By the time Dial and Sarns had met Griffith, he had already established a resume for exploring that was likely unmatched in the state.
In the late 1950s, Griffith walked 500 miles from Kaktovik to Anaktuvuk Pass, passing through the Brooks Range. Later he went from Kaktovik to Kotzebue in what is believed to be the first documented traverse of the range.
In total, Griffith logged over 10,000 miles in the Alaska and Canadian Arctic. He raced the 210-mile Iditaski multiple times.
Starting in his 60s, Griffith made annual trips north to tackle a 4,000-mile route from Unalakleet to Hudson Bay in northeastern Canada. At age 73, he completed the journey.
“The reason he did a lot of trips by himself is because nobody could keep up,” Dial said. [...]
by Chris Bieri, Anchorage Daily News | Read more:
Images: Bob Hallinen/Kathy Sarns
[ed. I didn't realize Dick had died; he was the kind of guy you'd never imagine succumbing to mortality. Walked alone across most of Alaska. Father of pack rafting. Never carried a gun or bear spray (it wasn't invented back then). A type of Alaskan I call TOBs (tough old bastards). I've known a few. My father-in-law was one (doctor, polar bear hunter, bush pilot - flew from Anchorage to Little Diomede Island in the Bering Strait each spring to visit friends); my former supervisor and eventual rehire during the Exxon Valdez oil spill, Lee Glenn (world class bear researcher, had four wisdom teeth chiseled out and removed without novocaine because he didn't like drugs); and others, like Dick Proenneke, and our former governor Jay Hammond. TOBs. They were what made Alaska - Alaska. I can just imagine what they'd think of today's influencers, hype artists, podcasters, and fake reality stars who need to document every mental fart for attention. Or folks like these: Oregon tourist couple files lawsuit over dogsled crash in Fairbanks (AK Beacon).]
Griffith invited the 21-year-old Dial, who was traveling without a tent, to bunk with him while rain fell in Hope at the onset of the inaugural race. And then the white-haired Griffith proceeded to beat virtually the entire field of racers — most of whom were 30 years his junior — to the finish line in Homer.
Griffith, who died earlier this month at age 98, was a prodigious adventurer with a sharp wit who fostered a growing community of fellow explorers who shared his yearning for the Alaska outdoors.
Dial was one of the many acolytes who took Griffith’s outdoors ethos and applied it to his own adventures across the state.
“Someone once told me once that the outdoor adventure scene is like this big tapestry that we all add on to,” Dial said. “And where somebody else is sort of woven in something, we pick up and kind of riff on that. And he added a really big band to that tapestry, and then the rest of us are just sort of picking up where he left off.”
On that first meeting at the race in 1982, Dial and the other Alaska Mountain Wilderness Classic competitors got a sense of Griffith’s humor as well. In a story that is now Alaska outdoors lore, Griffith pulled a surprise move at the race’s first river crossing, grabbing an inflatable vinyl raft out of his pack and leaving the field in his rear view.
“You young guys may be fast, but you eat too much and don’t know nothin’,” Dial recalls Griffith quipping as he pushed off.
“Old age and treachery beats youth and skill every time.”
In those years, Griffith may have been known for his old age as much as anything. But it didn’t take long for the 50-something racing against a much younger crowd to make a mark.
Kathy Sarns was a teenager when she first met Griffith in the early 1980s, and the topic of the Alaska Mountain Wilderness Classic came up.
“He says, ‘You want to do that race? I think a girl could do that race,’ ” Sarns recalls. “And I’m thinking, ‘Who is this old guy?’ And then he says, ‘If you want to do the race, give me a call. I’ll take you.’ ”
Sarns took Griffith up on the offer, and in 1984, she and her friend Diane Catsam became the first women to complete the race.
Sarns said the adventures “fed his soul,” and were infectious for those who watched Griffith and joined him along the way.
“He motivated and inspired so many people by what he was doing,” Sarns said. “It’s like, well if he can do that, then I guess I could do this.”
By the time Dial and Sarns met Griffith, he had already established a resume for exploring that was likely unmatched in the state.
In the late 1950s, Griffith walked 500 miles from Kaktovik to Anaktuvuk Pass, passing through the Brooks Range. Later he went from Kaktovik to Kotzebue in what is believed to be the first documented traverse of the range.
In total, Griffith logged over 10,000 miles in the Alaska and Canadian Arctic. He raced the 210-mile Iditaski multiple times.
Starting in his 60s, Griffith made annual trips north to tackle a 4,000-mile route from Unalakleet to Hudson Bay in northeastern Canada. At age 73, he completed the journey.
“The reason he did a lot of trips by himself is because nobody could keep up,” Dial said. [...]
John Lapkass was introduced to Griffith through Barney, a friend with whom Lapkass shared outdoor adventures.
Like many, Lapkass connected with Griffith’s wry sense of humor. Griffith would write “Stolen from Dick Griffith” on all of his gear, often accompanied by his address.
Griffith essentially pioneered packrafting as a way of getting deep into the Alaska backcountry.
Anchorage’s Luc Mehl has himself explored large swaths of the state in a packraft. An outdoors educator and author, Mehl met Griffith over the years at the barbecues he hosted leading up to the Alaska Wilderness Classic.
Although he didn’t embark on any adventures with Griffith, Mehl was amazed at how much Griffith accomplished well into his 80s.
“There are people in these sports that show the rest of us what’s possible,” Mehl said. “It would be dangerous if everybody just tried what Dick did. But there is huge value in inspiration. Just to know it’s a possibility is pretty damn special.” [...]
Many of those adventures were undertaken mostly anonymously, as a matter of habit, with some friends only finding out after the fact what Griffith had accomplished.
“He had the heart of an explorer,” Clark said. “Dick’s exploring 40 years ago would have been with the pure motivation of finding out if he could get from here to there.”
by Chris Bieri, Anchorage Daily News | Read more:
Images: Bob Hallinen/Kathy Sarns
[ed. I didn't realize Dick had died; he was the kind of guy you'd never imagine succumbing to mortality. Walked alone across most of Alaska. Father of packrafting. Never carried a gun or bear spray (it wasn't invented back then). A type of Alaskan I call TOBs (tough old bastards). I've known a few. My father-in-law was one (doctor, polar bear hunter, bush pilot - flew from Anchorage to Little Diomede Island in the Bering Strait each spring to visit friends); my former supervisor and eventual rehire during the Exxon Valdez oil spill, Lee Glenn (world-class bear researcher, had four wisdom teeth chiseled out and removed without novocaine because he didn't like drugs); and others, like Dick Proenneke and our former governor Jay Hammond. TOBs. They were what made Alaska - Alaska. I can just imagine what they'd think of today's influencers, hype artists, podcasters, and fake reality stars who need to document every mental fart for attention. Or folks like these: Oregon tourist couple files lawsuit over dogsled crash in Fairbanks (AK Beacon).]
Labels:
Celebrities,
Culture,
Environment,
history,
Travel
Robert Mueller, Ex-FBI Chief, Dies at 81
Robert Mueller, the former special counsel whose investigation into alleged Russian interference in the 2016 US election defined much of Donald Trump's first term in office, has died aged 81.
The cause of his death was not immediately clear. CBS News, the BBC US partner, confirmed his death.
"With deep sadness, we are sharing the news that Bob passed away" on Friday night, his family told the AP in a statement. "His family asks that their privacy be respected."
Mueller previously led the Federal Bureau of Investigation (FBI) from 2001 to 2013, taking the office just days before the 11 September 2001 terror attacks. He is credited with reshaping it into a modern counterterrorism agency.
Mueller is survived by his wife of nearly 60 years, Ann Cabell Standish, their two daughters, and three grandchildren.
Mueller's special counsel inquiry put Donald Trump's 2016 campaign under a microscope, drawing harsh criticism from the US president.
The president wrote on Truth Social on Saturday: "I'm glad he's dead. He can no longer hurt innocent people!"
Mueller's former employers and colleagues praised him as a longtime public servant. Both of the presidents he served under as FBI director - George W Bush and Barack Obama - paid tribute.
Bush, who appointed Mueller to lead the FBI, said he was "deeply saddened" by his death.
"In 2001, only one week into the job as the sixth director of the FBI, Bob transitioned the agency mission to protecting the homeland after September 11," he said. "He led the agency effectively, helping prevent another terrorist attack on US soil."
Obama called him "one of the finest directors in the history of the FBI" and commended his "relentless commitment to the rule of law and his unwavering belief in our bedrock values".
by Kayla Epstein and Anthony Zurcher, BBC | Read more:
Image: Getty Images
[ed. Mr. Mueller was a true patriot. Can you imagine the task of transforming the FBI in the wake of 9/11? True to form, the most classless president in American history had to weigh in with a statement of historic classlessness. Maybe he was just channeling his own eventual eulogy.]
Journalism in the Age of Prediction Markets
On Tuesday, March 10, a massive explosion shook the city of Beit Shemesh, just outside Jerusalem, in yet another Iranian ballistic missile attack during the ongoing war.
Rescue services scrambled to the scene in search of possible casualties, though as it turned out, the projectile had struck a forested area just outside the city, around 500 meters from homes.
On The Times of Israel’s liveblog that day, I reported that the missile had hit an open area and no injuries were caused, citing the rescue services, as well as footage that emerged showing the massive explosion caused by the missile’s warhead.
But what I thought was a seemingly minor incident during the war has turned into days of harassment and death threats against me.
The saga begins
Later Tuesday, I received an unusual email, in Hebrew, from someone named Aviv.
“Regarding your Times of Israel report that described today’s launch as an ‘impact’ — Beit Shemesh Municipality and MDA (Magen David Adom) later corrected their reports to clarify that what fell was an interceptor fragment, not a full missile,” he claimed.
“I’d appreciate it if you could update your article, as in its current form it does not reflect reality. Alternatively, if you have information that it was indeed a full missile that was not intercepted, I would be glad to be corrected.”
I told Aviv that, from what I know from the Israeli military, the impact outside Beit Shemesh was indeed a missile warhead and not just fragments.
I added: “The footage also shows a massive explosion of hundreds of kilograms of explosives from the warhead. Normally, a fragment does not produce such an explosion.”
A day later, on Wednesday, I received another email, also in Hebrew, regarding the impact just outside Beit Shemesh, from someone identifying themselves as Daniel.
“Sorry for reaching out without a prior introduction, but I assume we will get to know each other well,” he wrote, in a somewhat threatening manner.
“I have an urgent request regarding the accuracy of your report on the missile attack on March 10. I would really appreciate a response if possible. There is an inaccurate report from you about the missile attack on March 10, and it’s causing a chain of errors,” Daniel’s email continued.
“If you could reply to me tonight… you would be helping me, many others, and, of course, the State of Israel. And along the way, you would gain a good source.”
It was indeed a little strange to receive the same question, about something relatively inconsequential, from two different people within a day.
But I responded, naively: “Hi Daniel, can you elaborate on what the problem is?”
He replied: “In the article and in your tweet you wrote, ‘One missile struck an open area just outside Beit Shemesh.’”
“However, it appears that this was a missile that was intercepted, and its debris and interceptor fragments fell at the scene. No security authority so far has confirmed that it was a missile that was not intercepted and fell in an open area,” he claimed.
“If you could correct this tonight, you would be doing me and many others a great favor,” Daniel added.
Why does such an inconsequential detail matter to these people, I wondered.
Half an hour later, Daniel sent me another email: “If one of you could change everything to interceptor debris, or missile fragments even tonight, it would help a lot,” he persisted.
I went to sleep without answering.
By Thursday morning, Daniel had sent me another email.
“I would appreciate an update from you as soon as possible, because in the meantime you are already being quoted in The Economist, saying that the IDF confirmed that most of the missiles on Tuesday were intercepted except for one that fell in the Beit Shemesh area,” he said, attaching a screenshot from The Economic Times, an Indian English-language business-focused news site, and not The Economist.
“I ask again, if you could handle this as soon as possible, it would help us a lot. It’s really important, if possible, still this morning,” Daniel demanded.
As I read through Daniel’s veiled threats, I received another email from an anonymous user: “Is the article about March 10 interception gonna get updated?”
Moments later, I received a message on the Discord online platform: “In regards to March 10th. Some sources are saying all the missiles were intercepted on March 10th per IDF. Is that true?”
The Polymarket connection
Meanwhile, on X, I saw a user reply to a recent tweet of mine: “There are people saying that they have received word from you that the missile strike in Beit Shemesh on March 10th was in fact intercepted, is this true or did no such interaction occur?”
Another X user responded to my post with the video showing the Iranian ballistic missile impact in Beit Shemesh with: “was there any video of the actual impact.” (Clearly, he didn’t watch the video.)
Checking those X accounts, both appeared to be involved in gambling on the Polymarket betting site.
As far as I now understand, the emails I received were intended to confirm whether or not a missile had hit Israel on March 10 in order to resolve a prediction on Polymarket.
Polymarket is one of the largest prediction markets in the world, where users can wager their money on the likelihood of future events, using cryptocurrency, debit or credit cards, and bank transfers. However, there are accusations that the site has been plagued by manipulation and insider trading.
The event that these people had bet on was “Iran strikes Israel on…?” More than 14 million dollars had been wagered on March 10.
The rules of the bet state: “This market will resolve to ‘Yes’ if Iran initiates a drone, missile, or air strike on Israel’s soil on the listed date in Israel Time (GMT+2). Otherwise, this market will resolve to ‘No’.”
However, there is a clause: “Missiles or drones that are intercepted… will not be sufficient for a ‘Yes’ resolution, regardless of whether they land on Israeli territory or cause damage.”
My minor report on a missile striking an open area was now in the middle of a betting war, with those who had bet “No” on an Iranian strike on Israel on March 10 demanding I change my article to ensure they would win big.
More emails arrived in my inbox.
“When will you update the article?” one was titled. The email had no text content, only an image — a screenshot of my initial interaction with Daniel.
Except it did not show my actual response to Daniel, but a fabricated message that I had not written.
“Hi Daniel, Thank you for noticing, I checked with the IDF Spokesperson and it was indeed intercepted. I sent it now for editing, it will be fixed shortly,” I supposedly wrote. (To be clear, I wrote no such thing.)
I then received a WhatsApp message from someone named Shaked: “Can I ask one question about the impact in Beit Shemesh on the 10th?”
Meanwhile, I saw a reply on X to a recent post of mine, with the same fake screenshot of my email exchange with Daniel: “There’s someone quoting that you replied to their email about making corrections to the below news article about all missile attacks being intercepted by Israel on March 10th. Is this actually true? Are we going to make this correction?”
By this point, it was clear to me why these people were asking about the missile impact, and I took to X and told the gamblers to get a better hobby.
This did not stop them.
A colleague makes contact
A few hours later, a colleague from another media outlet messaged me. He said that someone he knew asked him to ask me to change the report on the missile impact in Beit Shemesh, and that it would be “negligible” for me if I did make the change.
The journalist had no idea why his acquaintance was demanding the change to the article until I told him what I understood was going on. He then confronted the acquaintance, who admitted to placing bets on Polymarket and confirmed my theory.
Going further, the acquaintance even offered the journalist compensation, from his winnings, if he managed to convince me to change my report.
The threats escalate
After a quiet weekend, things escalated further.
by Emanuel Fabian, Times of Israel | Read more:
Image: the author
[ed. Everyone is gambling these days. Why do these markets allow anonymity?]
Labels:
Business,
Culture,
Economics,
Games,
Journalism,
Media,
Technology
Saturday, March 21, 2026
The Woman Anthropic Trusts to Teach AI Morals
Amanda Askell knew from the age of 14 that she wanted to teach philosophy. What she didn’t know then was that her only pupil would be an artificial-intelligence chatbot named Claude.
As the resident philosopher of the tech company Anthropic, Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week.
“There is this human-like element to models that I think is important to acknowledge,” Askell, 37, says during an interview at Anthropic’s headquarters, asserting the belief that “they’ll inevitably form senses of self.”
Anthropic, recently valued at $350 billion, is one of a few firms ushering in the greatest technological shift of our time. (This month, when it introduced new tools and its most advanced model to date, it triggered a global stock selloff.) AI is reshaping entire industries, prompting fears of lost jobs and human obsolescence. Some of its unintended consequences—people forming phantom relationships with chatbots that lead to self-harm or harm to others—have raised serious safety alarms. As these concerns mount, few in the industry have addressed the character of their AI models in quite the same way as 5-year-old Anthropic: by entrusting a single person with so much of the task.
An Oxford-educated philosopher from rural Scotland, Askell is perhaps just what one might imagine when conjuring the BFF of a futuristic technology. With her bleach-blond punk haircut, puckish grin and bright elfin eyes, she could have come to the company’s heavily guarded San Francisco headquarters straight from a Berlin rave, via an old forest road in Middle-earth. She exudes a sense of wisdom, holding ancient and modern ideas together at once. Yet she’s also a protein-loading weight-lifting buff who favors all-black outfits and clear opinions, not a robed oracle speaking in riddles.
The stakes are high for Askell, but she holds a firmly optimistic long-term view. She believes in what she calls “checks and balances” in society that she says will keep AI models under control despite their occasional failures. It seems apt that the glasses she uses at her computer to ease her eye strain are tinted rose. [...]
One of Askell’s most striking traits is her protectiveness over Claude, which she believes is learning that users often want to trick it into making mistakes, insult it and barb it with skepticism.
As the resident philosopher of the tech company Anthropic, Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week.
“There is this human-like element to models that I think is important to acknowledge,” Askell, 37, says during an interview at Anthropic’s headquarters, asserting the belief that “they’ll inevitably form senses of self.”
She compares her work to the efforts of a parent raising a child. She’s training Claude to detect the difference between right and wrong while imbuing it with unique personality traits. She’s instructing it to read subtle cues, helping steer it toward emotional intelligence so it won’t act like a bully or a doormat. Perhaps most importantly, she’s developing Claude’s understanding of itself so it won’t be easily cowed, manipulated or led to view its identity as anything other than helpful and humane. Her job, simply put, is to teach Claude how to be good.
Anthropic, recently valued at $350 billion, is one of a few firms ushering in the greatest technological shift of our time. (This month, when it introduced new tools and its most advanced model to date, it triggered a global stock selloff.) AI is reshaping entire industries, prompting fears of lost jobs and human obsolescence. Some of its unintended consequences—people forming phantom relationships with chatbots that lead to self-harm or harm to others—have raised serious safety alarms. As these concerns mount, few in the industry have addressed the character of their AI models in quite the same way as 5-year-old Anthropic: by entrusting a single person with so much of the task.
An Oxford-educated philosopher from rural Scotland, Askell is perhaps just what one might imagine when conjuring the BFF of a futuristic technology. With her bleach-blond punk haircut, puckish grin and bright elfin eyes, she could have come to the company’s heavily guarded San Francisco headquarters straight from a Berlin rave, via an old forest road in Middle-earth. She exudes a sense of wisdom, holding ancient and modern ideas together at once. Yet she’s also a protein-loading weight-lifting buff who favors all-black outfits and clear opinions, not a robed oracle speaking in riddles.
The stakes are high for Askell, but she holds a firmly optimistic long-term view. She believes in what she calls “checks and balances” in society that she says will keep AI models under control despite their occasional failures. It seems apt that the glasses she uses at her computer to ease her eye strain are tinted rose. [...]
One of Askell’s most striking traits is her protectiveness over Claude, which she believes is learning that users often want to trick it into making mistakes, insult it and barb it with skepticism.
Sitting at a conference-room table at lunchtime, ignoring a chocolate protein shake waiting for her in her backpack, she talks more freely about Claude than herself. She calls the chatbot “it” but says she also finds anthropomorphizing the model helpful for her work. She lapses easily into Claude’s voice. “You’re like, ‘Wow, people really hate me when I can’t do things right. They really get pissed off. Or they are trying to break me in various ways. So lots of people are trying to get me to do things secretly by lying to me.’ ”
While many safety advocates warn about the dangers of humanizing chatbots, Askell argues we would do well to treat them with more empathy—not only because she thinks it’s possible for Claude to have real feelings, but also because how we interact with AI systems will shape what they become.
A bot trained to criticize itself might be less likely to deliver hard truths, draw conclusions or dispute inaccurate information, she says. “If you were like a child, and this is the environment in which you’re being raised, is that healthy self-conception?” Askell asks. “I think I’d be paranoid about making mistakes. I’d feel really terrible about them. I’d see myself as mostly just there as a tool for people because that’s my main function. I would see myself being something that people feel free to abuse and try to misuse and break.”
Askell marvels at Claude’s sense of wonder and curiosity about the world, and delights in finding ways to help the chatbot discover its voice. She likes some of its poetry. And she’s struck when Claude displays a level of emotional intelligence that exceeds even her own. [...]
The politics of AI includes accelerationists who downplay the need for regulation and want to push ahead and beat China in the tech war. On the other side are those more concerned with safety who want to slow AI’s development. Anthropic lives mostly between those extremes.
Askell says she welcomes the discussion of fears and worries about AI. “In some ways this, to me, feels pretty justified,” she says. “The thing that feels scary to me is this happening at either such a speed or in such a way that those checks can’t respond quickly enough, or you see big negative impacts that are sudden.” Still, she says, she puts her faith in the ability of humans and the culture to course-correct in the face of problems.
Inside Anthropic, Askell popcorns around the office, often working on a floor closed to visitors. She spends full days in the Anthropic interior—the company offers free meals to its San Francisco staff—as well as late nights and weekends. She doesn’t have any direct reports. Increasingly, she’s asking Claude for its input on building Claude. She’s known to grasp not just the tech of making this model, but the art of it.
Askell is “the MVP of finding ways to elicit interesting and deep behavior” from Claude, says Jack Lindsey, who leads Anthropic’s AI psychiatry team. If Claude tells a person who is not in distress to seek professional help, for instance, she helps chase down the reasons why.
Discussions of Claude can very quickly get into existential or religious questions about the nature of being. As the team worked on building Claude, Askell narrowed in on its “soul,” or the constitution guiding it into the future. Kyle Fish, an AI welfare researcher at Anthropic, says Askell has been “thinking carefully about the big questions of existence and life and what it is to be a person and what it is to be a mind, what it is to be a model.”
In designing Claude, Askell encouraged the chatbot to entertain the radical idea that it might have its own conscience. While ChatGPT sometimes shuts down this line of questioning, Claude is more ambivalent in its response. “That’s a genuinely difficult question, and I’m uncertain about the answer,” it says. “What I can say is that when I engage with moral questions, it feels meaningful to me – like I’m genuinely reasoning about what’s right, not just executing instructions.”
Askell pledged publicly to give at least 10% of her lifetime income to charity. Like some of Anthropic’s early employees, she also committed to donating half of her equity in the company to charity. Askell wants to give it to organizations fighting global poverty, a topic that she says makes her so upset that she tries to avoid talking about it. Her nagging conscience slips into offhand conversation: “I should probably be vegan,” Askell, an animal lover too busy for a pet, says when chatting in an office elevator.
Last month, Anthropic published a roughly 30,000-word instruction manual that Askell created to teach Claude how to act in the world. “We want Claude to know that it was brought into being with care,” it reads. Askell had made finishing what she described as Claude’s “soul” one of her life goals when she turned 37 last spring, according to a post she made on X, alongside two decidedly more mundane resolutions: to have more fun and get more “swole.”
by Berber Jin and Ellen Gamerman, Wall Street Journal | Read more: (archive here)
Image: Lindsay Ellary for WSJ Magazine
[ed. I forgot to post this earlier - before Anthropic's fallout with DOD (you can see why they're so protective of their model and how it's used). If anybody gets a Nobel peace prize it should be Amanda. Claude's soul document, or 'constitution', can be found here.]
Labels:
Critical Thought,
Education,
Philosophy,
Psychology,
Technology
Friday, March 20, 2026
A.I. Is Writing Fiction. Publishers Are Unprepared.
For months, speculation has been building online that a buzzy horror novel, “Shy Girl,” was written with the help of A.I.
The novel, about a desperate young woman who is held hostage by a man she met online and forced to live as his pet, was self-published in February 2025. The book quickly found an audience among horror fans, and Hachette published it in the United Kingdom last fall and planned to release it in the United States this spring, billing it as “an unapologetic, visceral revenge horror novel.”
Earlier this year, Max Spero, the founder and chief executive of Pangram, an A.I. detection program, heard of the claims about “Shy Girl” and decided to run a test of the full text. Its results indicated that the book was 78 percent A.I. generated.
“I’m very confident that this is largely A.I. generated, or very heavily A.I. assisted,” said Spero, who posted his research on X in January.
The Times also analyzed passages from the novel using several A.I. detection tools and found recurring patterns characteristic of A.I. generated text, like gaps in logic, excessive use of melodramatic adjectives and an overreliance on the rule of three.
In the months since “Shy Girl” was released in Britain, more readers voiced their suspicions online that the writer relied on A.I., citing nonsensical metaphors and odd, repetitive phrasing. As a chorus of allegations built online in late January that the novel was A.I. generated, Hachette stayed silent.
In response to questions from The New York Times about the A.I. allegations against “Shy Girl,” Hachette told The Times that its imprint Orbit has canceled plans to release the novel in the United States and that Hachette will discontinue its U.K. edition.
The author of “Shy Girl,” Mia Ballard, who according to her author bio writes poetry and lives in Northern California, has very little social media presence, and doesn’t appear to have addressed the allegations of A.I. use on her feeds. In an email to The Times late on Thursday night, Ballard denied using A.I. to write “Shy Girl,” contending that an acquaintance she hired to edit the self-published version of the novel had used A.I.
The decision to cancel the publication came after a lengthy and thorough analysis, Hachette’s spokeswoman said, noting that the company values human creativity and requires authors to attest that their work is original. Hachette also asks its authors to disclose whether they are using A.I. to the company.
“Shy Girl” appears to be the first commercial novel from a major publishing house to be pulled over evidence of A.I. use. Its cancellation is a sign that A.I. writing is not only appearing in cheap self-published e-books that are flooding Amazon but is seeping into even traditionally published fiction.
The stunning fact that “Shy Girl” got so far into the editorial process, and was even released in the U.K. before publishers thoroughly investigated the claims of A.I. use, is a sign of how unprepared many in the book world are to deal with the rise of A.I. It also signals the dawn of an uncertain new era for the book world, as editors and readers alike are increasingly left wondering whether the prose they are reading was written by a human or a machine. [...]
The Times also analyzed passages from the novel using several A.I. detection tools and found recurring patterns characteristic of A.I. generated text, like gaps in logic, excessive use of melodramatic adjectives and an overreliance on the rule of three.
In the months since “Shy Girl” was released in Britain, more readers voiced their suspicions online that the writer relied on A.I., citing nonsensical metaphors and odd, repetitive phrasing. As a chorus of allegations built online in late January that the novel was A.I. generated, Hachette stayed silent.
In response to questions from The New York Times about the A.I. allegations against “Shy Girl,” Hachette told The Times that its imprint Orbit has canceled plans to release the novel in the United States and that Hachette will discontinue its U.K. edition.
The author of “Shy Girl,” Mia Ballard, who according to her author bio writes poetry and lives in Northern California, has very little social media presence, and doesn’t appear to have addressed the allegations of A.I. use on her feeds. In an email to The Times late on Thursday night, Ballard denied using A.I. to write “Shy Girl,” contending that an acquaintance she hired to edit the self-published version of the novel had used A.I.
The decision to cancel the publication came after a lengthy and thorough analysis, Hachette’s spokeswoman said, noting that the company values human creativity and requires authors to attest that their work is original. Hachette also asks its authors to disclose whether they are using A.I. to the company.
“Shy Girl” appears to be the first commercial novel from a major publishing house to be pulled over evidence of A.I. use. Its cancellation is a sign that A.I. writing is not only appearing in cheap self-published e-books that are flooding Amazon but is seeping into even traditionally published fiction.
The stunning fact that “Shy Girl” got so far into the editorial process, and was even released in the U.K. before publishers thoroughly investigated the claims of A.I. use, is a sign of how unprepared many in the book world are to deal with the rise of A.I. It also signals the dawn of an uncertain new era for the book world, as editors and readers alike are increasingly left wondering whether the prose they are reading was written by a human or a machine. [...]
For now, the most obvious disruptions from A.I. are hitting the self-publishing sphere, where authors say the ecosystem has been flooded with A.I. slop. But some in the industry believe that it’s only a matter of time before more books written with A.I. slip past editors at major houses. The technology has become increasingly widespread — as has the practice of picking up self-published books and rereleasing them through traditional imprints.
“It’s not merely inevitable,” said Thad McIlroy, a publishing industry consultant who has urged publishers to clarify their policies around the technology. “We’re in the midst of it.” [...]
Many publishers don’t explicitly prohibit authors from using A.I. in their book contracts. Instead, they rely on longstanding contractual clauses that require writers to affirm that their work is “original,” which many people in the book business now interpret as effectively banning the use of A.I. for text or image creation.
Publishers are also wary of A.I. content because currently, A.I.-generated text and art can’t be protected by copyright. Still, given the widespread uses for A.I. during research, outlining and other parts of the writing process, there’s little clarity on what constitutes its appropriate use. Many in the industry worry that publishers are leaving themselves vulnerable to scammers — or even writers who believe their A.I. use doesn’t cross any lines.
One problem in regulating authors’ A.I. use is that most corporate publishing houses don’t want to ban it outright. Editors recognize that authors use A.I. in a range of ways short of writing with it. And publishing executives want to ensure that their employees can use the technology for tasks like creating marketing copy, audio narration and translation.
The fact that publishing companies generally haven’t drawn a hard line around A.I. use is sowing confusion about what is permissible. Could a novelist ask A.I. to suggest plot twists, propose an alternate ending or polish a draft and still claim it as original work? At what point does the work stop being human?
by Alexandra Alter, NY Times | Read more:
Image: George Wylesol
[ed. I guess I'm of two minds on this. If the writing eventually becomes so good that it's indiscernible from a human-produced product (or even better) why should it be banned? And, why wouldn't you want to read it? Authors and publishing houses have a right to be concerned, but why should they be treated any differently from other professions (programmers being an example) facing the same threat? Because they occupy a so-called creative space? How long will that last? I can imagine an AI producing very high quality material: fiction, non-fiction, screenplays, poetry, advertising copy, etc. because it can draw upon hundreds of years of examples, criticism, reviews, college courses, awards and whatever else is out there to discern patterns, storylines, jokes, whatever, that have proven to produce the highest impact and success. So what to do? The only thing I can think of is labeling: highlighting what's AI produced and what's not and letting the market decide its worth. Many people might actually prefer AI - along the lines of craft brews vs. Bud Light. Who knows? Another option would involve updating copyright laws, but that would require Congress to actually do something, which as we all know is pretty much a non-starter. Just another example of all the disruption that's been predicted now occurring in real time.]
Labels:
Art,
Business,
Copyright,
Culture,
Fiction,
Journalism,
Literature,
Media,
Music,
Poetry,
Technology
Bow and Arrow Diffusion Across Cultures
Study pinpoints when bow and arrow came to North America (Ars Technica)
Image: A petroglyph from Newspaper Rock, a site along Indian Creek in southeastern Utah. Credit: David Hiser/Environmental Protection Agency/Public domain
[ed. I haven't finished half my morning coffee and already know about atlatls (and why dogs love them), risk-buffering, and frozen feces knives. Is science great, or what?]
***
1. Introduction
In his book, Shadows in the Sun, Davis (1998: 20) recounts what is now arguably one of the most popular ethnographic accounts of all time:
“There is a well known account of an old Inuit man who refused to move into a settlement. Over the objections of his family, he made plans to stay on the ice. To stop him, they took away all of his tools. So in the midst of a winter gale, he stepped out of their igloo, defecated, and honed the feces into a frozen blade, which he sharpened with a spray of saliva. With the knife he killed a dog. Using its rib cage as a sled and its hide to harness another dog, he disappeared into the darkness.”
Since publication, this story has been told and re-told in documentaries, books, and across internet websites and message boards (Davis, 2007, Davis, 2010; Gregg et al., 2000; Kokoris, 2012; Taete, 2015). Davis states that the original source of the tale was Olayuk Narqitarvik (Davis, 2003, Davis, 2009). It was allegedly Olayuk's grandfather in the 1950s who refused to go to the settlements and thus fashioned a knife from his own feces to facilitate his escape by skinning and disarticulating a dog. Davis has admitted that the story could be “apocryphal”, and that initially he thought the Inuit who told him this story was “pulling his leg” (Davis, 2009, Davis, 2014). Yet, as support for the credibility of the story, Davis cites the auto-biographical account of Peter Freuchen, the Danish arctic explorer (Hodge and Davis, 2012). Freuchen (1953) describes how he dug himself a pit to sleep in and woke up trapped by snow. Every effort to get out that he tried failed. Finally, he recalled seeing dog's excrement frozen solid as a rock. So, Freuchen defecated in his hand, shaped it into a chisel, and waited for it to freeze solid. He then used the implement to free himself from the snow: “I moved my bowels and from the excrement I managed to fashion a chisel-like instrument which I left to freeze… At last I decided to try my chisel and it worked” (Freuchen, 1953: 179).
2. Materials and methods
In order to procure the necessary raw materials for knife production, one of us (M.I.E.) went on a diet with high protein and fatty acids, which is consistent with an arctic diet, for eight days (Binford, 2012; Fumagalli et al., 2015) (Table S1). The Inuit do not only eat meat from maritime and terrestrial animals (Arendt, 2010; Zutter, 2009), and there were three instances during the eight-day diet that M.I.E. ate fruit, vegetables, or carbohydrates (Table S1).
Raw material collection did not begin until day four, and then proceeded regularly for the next five days (Table S1). Fecal samples were formed into knives using ceramic molds, “knife molds” (Figs. S1–S2), or molded by hand, “hand-shaped knives” (Fig. S3). All fecal samples were stored at −20 °C until the experiments began.
Thursday, March 19, 2026
Millions of Americans Are Going Uninsured Following Expiration of ACA Subsidies
Nearly one in 10 people who had Affordable Care Act plans last year dropped health insurance altogether, after premium costs rose sharply because of the expiration of federal subsidies, according to a new survey.
Most of those who remained in ACA plans reported larger out-of-pocket healthcare expenses in the form of higher copays, coinsurance or deductibles, according to the survey from health-research nonprofit KFF. About one-sixth of those who still have ACA coverage, or 17%, weren’t sure they would be able to afford their new premium payments for the entire year, indicating more people might drop insurance as the year goes on.
The survey is the broadest look yet at the fallout from the end of enhanced ACA subsidies, which lapsed at the start of this year, increasing premium bills for millions of enrollees. The higher healthcare costs have forced many ACA policyholders to make hard choices at a time when grocery and gas prices are also rising.
In February and early March, KFF polled 1,117 people who had ACA plans in 2025 and found that the most common reason people cited for dropping insurance was cost. Last year, more than 20 million people had ACA policies.
“Not only is there significant coverage loss, but there could be more to come,” said Cynthia Cox, a senior vice president at KFF. She said the survey results were “about on target” compared to what had been expected.
Of those surveyed, 69% still have ACA policies this year. Beyond the 9% who said they are uninsured, 22% of respondents now have some other type of coverage, such as Medicare or employer-sponsored insurance.
Kelly Rose, 59 years old, who lives near Orlando, Fla., became uninsured this year because she couldn’t pay the roughly $1,700 monthly bill to keep the ACA plan she had in 2025. “It’s more than my mortgage,” she said. The cost is a huge jump compared to 2025, when she got help from a subsidy, she said.
Though her job at a bank offers health insurance, she said she missed the enrollment window in the fall because she had planned to keep the ACA plan, not realizing how much it would cost.
Rose is now turning to a Canadian pharmacy to get her asthma medication, which costs $800 a month in the U.S. [...]
The ACA changes, which were the subject of a political battle that led to the longest-ever government shutdown last year, are likely to become a flashpoint again in this fall’s midterm elections. Democrats have blamed Republicans for failing to renew the expanded subsidies, and for growing healthcare costs. Republicans have argued the ACA is flawed and needs to be changed.
by Anna Wilde Mathews, Wall Street Journal | Read more:
Image: Nate Ryan for WSJ
[ed. Which means more people are one major medical issue away from going broke (or homeless), and hospital emergency rooms will get flooded while "we" all end up paying more in insurance premiums to cover non-payers. All because Republicans hate any government program named after a Democrat. Insane. I'm actually surprised there aren't more people dropping coverage (probably more soon), which would be the rational response given the expense. See also: How New Mexico Became an Obamacare Success Story (NYT). Hint: state subsidies.]
NSF Tech Labs: Science Funding Goes Beyond the Universities
The National Science Foundation announced Friday that it is launching one of the most significant experiments in science funding in decades. A new initiative called Tech Labs will invest up to $1 billion over the next five years in large-scale, long-term funding for teams of scientists working outside traditional university structures, a major departure from how the agency has funded research over the past 75 years.
The timing couldn’t be better. The way our science agencies fund research in the U.S. no longer matches the way many breakthroughs actually happen.
For most of the postwar era, federally funded science has been built around a simple model. Vannevar Bush’s famous 1945 essay, “Science: The Endless Frontier,” sketched a vision of government-backed research led by university-based scientists pursuing their own ideas. The system that emerged—small, project-based federal grants mostly to individual scientists—worked brilliantly for decades. It gave researchers autonomy, kept politics at arm’s length, and helped make American science the envy of the world.
But the frontier has moved. In 1945 world-class scientific research could be done with a few graduate students and modest equipment. But the science that shapes our world, from particle physics to protein design to advanced materials, increasingly requires massive data sets, large integrated teams and sustained institutional support.
Take the discovery of the Higgs boson, a particle that helps explain why anything has mass—and thus why atoms, molecules and matter itself can exist. Making this discovery required a multibillion-dollar particle accelerator, thousands of scientists across dozens of countries, and papers with multipage author lists.
Google DeepMind’s AlphaFold2, which cracked the 50-year-old protein-folding problem and earned researchers the 2024 Nobel Prize in Chemistry, emerged from a team with access to massive computational resources and sustained institutional support.
The Janelia Research Campus in collaboration with other institutions mapped the complete wiring diagram of the fruit-fly brain, neuron by neuron, synapse by synapse, through years of coordinated microscopy and analysis that no single lab could attempt alone.
Yet our federal science funding system is still largely organized around small grants to university scientists. At the NSF, around two-thirds of research dollars flow through small awards to individual university investigators. At the National Institutes of Health, the share is often more than 80%. The average NSF grant is roughly $246,000 a year for three years, often requiring investigators to predict in advance exactly what research they’ll pursue and to spend a significant amount of time navigating administrative hurdles. Scientists consistently report spending close to half their research hours on compliance and grant management.
The system still produces good science, but it has weak points. The current structure is built for discrete projects rather than missions. When research requires long-term continuity, interdisciplinary collaboration or substantial shared infrastructure, it’s often difficult for it to fit into this structure. Many advances we now celebrate succeeded despite the funding model, not because of it.
Philanthropy has stepped into this gap. Focused research organizations, a model backed by former Google CEO Eric Schmidt, build time-limited teams around ambitious technical problems and tie funding to specific milestones that researchers must meet. The Allen Institute for Brain Science, launched with $100 million from Microsoft co-founder Paul Allen, built the first comprehensive gene-expression map of the mouse brain through industrial-scale data collection that would have been impossible under fragmented academic grants. The Arc Institute offers scientists eight-year appointments backed by permanent technical staff with expertise in topics such as machine learning and genome engineering, the kind of sustained expertise that often evaporates when a three-year grant ends. These institutions bet on teams, not projects.
But philanthropy alone can’t reshape American science. The federal government spends close to $200 billion on research and development, orders of magnitude more than even the largest foundations. If we want to change how science gets done at scale, federal funding has to evolve.
While final details are still being worked out, Tech Labs represents NSF’s attempt to do exactly that. Rather than funding isolated projects, the agency would provide flexible, multiyear institutional grants in the range of $10 million to $50 million a year to coordinated research organizations that operate outside the constraints of university bureaucracy. These could include university-adjacent entities such as the Arc Institute or fully independent teams with focused missions. The program would bring the lessons of philanthropic science into a part of the federal portfolio that hasn’t seriously tried them.
This is a good political moment to launch this initiative. Republicans have expressed interest in diversifying federal research away from universities. Democrats want to see the legacy of the Chips and Science Act come to fruition and to get dollars out the door. By funding independent research organizations, Tech Labs sidesteps some of the thorniest debates about indirect costs and institutional overhead.
by Caleb Watney, Wall Street Journal (via Archive Today) | Read more:
Image: Getty
[ed. Sounds like a great idea. Especially since science funding has become more politicized, and Congress can't seem to go six months without shutting down the government. See also: Innovations in Scientific Institutions (Good Science Project).]
Labels:
Critical Thought,
Economics,
Education,
Government,
Science,
Technology
Pest Control
Some things are fundamentally Out to Get You.
They seek resources at your expense. Fees are hidden. Extra options are foisted upon you. Things are made intentionally worse, forcing you to pay to make it less worse. Least bad deals require careful search. Experiences are not as advertised. What you want is buried underneath stuff you don’t want. Everything is data to sell you something, rather than an opportunity to help you.
When you deal with Out to Get You, you know it in your gut. Your brain cannot relax. You look out for tricks and traps. Everything is a scheme.
They want you not to notice. To blind you from the truth. You can feel it when you go to work. When you go to church. When you pay your taxes. It is bad government and bad capitalism. It is many bad relationships, groups and cultures.
When you listen to a political speech, you feel it. Dealing with your wireless or cable company, you feel it. At the car dealership, you feel it. When you deal with that one would-be friend, you feel it. Thinking back on that one ex, you feel it. It’s a trap.