Wednesday, February 4, 2026

Why the Future of Movies Lives on Letterboxd

Karl von Randow and Matthew Buchanan created Letterboxd in 2011, but its popularity ballooned during the pandemic. It has grown exponentially ever since: Between 2020 and 2026, it grew to 26 million users from 1.7 million, adding more than nine million users since January 2025 alone. It’s not the only movie-rating platform out there: Rotten Tomatoes has become a fixture of movie advertising, with “100% Fresh” ratings emblazoned on movie posters and TV ads. But if Rotten Tomatoes has become a tool of Hollywood’s homogenizing marketing machinery, Letterboxd is something else: a cinephilic hive buzzing with authentic enthusiasm and heterogeneous tastes.

The platform highlights audiences with appetites more varied than the industry has previously imagined, and helps them find their way to movies that are substantial. Black-and-white classics, foreign masterpieces and forgotten gems are popular darlings, while major studio releases often fail to find their footing. In an online ecosystem dominated by the short, simple and obvious, Letterboxd encourages people to engage with demanding art. Amid grim pronouncements of film-industry doom and the collapse of professional criticism, the rise of Letterboxd suggests that the industry’s crisis may be distinct from the fate of film itself. Even as Hollywood continues to circle the drain, film culture is experiencing a broad resurgence.

Letterboxd’s success rests on its simplicity. It feels like the internet of the late ’90s and early 2000s, with message boards and blogs, simple interfaces and banner ads, web-famous writers whose readership was built on the back of wit and regularity — people you might read daily and still never know what they look like. A user’s “Top 4 Films” appears at the top of their profile page, resembling the lo-fi personalization of MySpace. The website does not allow users to send direct messages to one another, and interactivity is limited to following another user, liking their reviews and in some cases commenting on specific posts. There is no “dislike” button. In this way, good vibes are allowed to proliferate, while bad ones mostly dissipate over time.

The result — at a time when legacy publications have reduced serious coverage of the arts — is a new, democratic form of film criticism: a mélange of jokes, close readings and earnest nerding out. Users write reviews that range from ultrashort, off-the-cuff takes to gonzo film-theory-inflected texts that combine wide-ranging historical context with in-depth analysis. As other social media platforms devolve into bogs of A.I. slop, bots and advertising, Letterboxd is one of the rare places where discourse is not driving us apart or dumbing us down.

“There’s no right way to use it, which I think is super appealing,” Slim Kolowski, once an avid Letterboxd user and now its head of community, told me. “I know plenty of people that never write a review. They don’t care about reviews. They just want to, you know, give a rating or whatever. And I think that’s a big part of it, because there’s no right way to use it, and I think we work really hard to keep it about film discovery.”

But in the end, passionate enthusiasm for movies is simply a win for cinema at large. Richard Brody, the New Yorker film critic whose greatest professional worry is that a good film will fall through the cracks without getting its due from critics or audiences, sees the rise of Letterboxd as a bulwark against this fear, as well as part of a larger trend toward the democratization of criticism. “I think that film criticism is in better shape now than it has ever been,” he tells me, “not because there’s any one critic or any small group of critics writing who are necessarily the equals of the classical greats in the field, but because there are far more people writing with far more knowledge, and I might even add far more passion, about a far wider range of films than ever.”

Many users are simply watching more movies. “Letterboxd gives you these stats, and you can see how many movies you’ve watched,” Wesley Sharer, a top reviewer, told me. “And I think that, for me definitely and maybe for other people as well, contributes to this sense of, like, I’m not watching enough movies, you know, I need to bump my numbers up.” But the platform also encourages users to expand their tastes by putting independent or foreign offerings right in front of them. While Sharer built his following on reviews of buzzy new releases, he now does deep dives into specific, often niche directors like Hong Sang-soo or Tsui Hark (luminaries of Korean and Hong Kong cinema, respectively) to introduce his followers to new movies they could watch...

All this is to say that an active, evolving culture around movies exists that can be grown, if studios can let go of some of their old ideas about what will motivate audiences to show up. Letterboxd is doing the work of cultivating a younger generation of moviegoers, pushing them to define the taste and values that fuel their consumption; a cinephile renaissance means more people might be willing, for example, to see an important movie in multiple formats — IMAX, VistaVision, 70 millimeter — generating greater profit from the same audience. Engaging with these platforms, where users are actively seeking out new films to fall in love with, updates a marketing playbook that hasn’t changed significantly since the 2000s, when studios first embraced the digital landscape.

by Alexandra Kleeman, NY Times | Read more:
Image: via:

Claude's New Constitution

We're publishing a new constitution for our AI model, Claude. It's a detailed description of Anthropic's vision for Claude's values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.

The constitution is a crucial part of our model training process, and its content directly shapes Claude's behavior. Training models is a difficult task, and Claude's outputs might not always adhere to the constitution's ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.

In this post, we describe what we've included in the new constitution and some of the considerations that informed our approach.

We're releasing Claude's constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

What is Claude's Constitution?


Claude's constitution is the foundational document that both expresses and shapes who Claude is. It contains detailed explanations of the values we would like Claude to embody and the reasons why. In it, we explain what we think it means for Claude to be helpful while remaining broadly safe, ethical, and compliant with our guidelines. The constitution gives Claude information about its situation and offers advice for how to deal with difficult situations and tradeoffs, like balancing honesty with compassion and the protection of sensitive information. Although it might sound surprising, the constitution is written primarily for Claude. It is intended to give Claude the knowledge and understanding it needs to act well in the world.

We treat the constitution as the final authority on how we want Claude to be and to behave—that is, any other training or instruction given to Claude should be consistent with both its letter and its underlying spirit. This makes publishing the constitution particularly important from a transparency perspective: it lets people understand which of Claude's behaviors are intended versus unintended, to make informed choices, and to provide useful feedback. We think transparency of this kind will become ever more important as AIs start to exert more influence in society.

We use the constitution at various stages of the training process. This has grown out of training techniques we've been using since 2023, when we first began training Claude models using Constitutional AI. Our approach has evolved significantly since then, and the new constitution plays an even more central role in training.

Claude itself also uses the constitution to construct many kinds of synthetic training data, including data that helps it learn and understand the constitution, conversations where the constitution might be relevant, responses that are in line with its values, and rankings of possible responses. All of these can be used to train future versions of Claude to become the kind of entity the constitution describes. This practical function has shaped how we've written the constitution: it needs to work both as a statement of abstract ideals and a useful artifact for training.
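The post doesn't publish Anthropic's actual pipeline, but the flow it describes (generate candidate responses, judge them against the constitution, turn rankings into training data) can be sketched in miniature. Everything below is illustrative: the real system uses a language model as the judge, not the keyword-overlap stand-in here, and all names are mine.

```python
# Toy sketch of constitution-guided preference data, assuming a crude
# stand-in judge. A real pipeline would score candidates with a model
# reading the full constitution, not by keyword overlap.

CONSTITUTION_VALUES = {"honest", "safe", "helpful", "harmless"}

def score_response(response: str) -> int:
    """Stand-in for a model judging alignment with the constitution:
    count overlap between the response's words and named values."""
    words = {w.strip(".,!?") for w in response.lower().split()}
    return len(words & CONSTITUTION_VALUES)

def rank_candidates(candidates: list[str]) -> list[str]:
    """Order candidate responses best-first."""
    return sorted(candidates, key=score_response, reverse=True)

def preference_pairs(ranked: list[str]) -> list[tuple[str, str]]:
    """Every (better, worse) pairing, usable as preference-training data."""
    return [(better, worse)
            for i, better in enumerate(ranked)
            for worse in ranked[i + 1:]]

candidates = [
    "I cannot help with that.",
    "Here is a helpful and honest answer that keeps you safe.",
    "Sure, whatever.",
]
ranked = rank_candidates(candidates)
pairs = preference_pairs(ranked)
```

Three candidates yield three ordered pairs; at scale, this is how a ranking over responses becomes a large preference dataset for training future models.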

Our new approach to Claude's Constitution

Our previous Constitution was composed of a list of standalone principles. We've come to believe that a different approach is necessary. We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize—to apply broad principles rather than mechanically following specific rules.

Specific rules and bright lines sometimes have their advantages. They can make models' actions more predictable, transparent, and testable, and we do use them for some especially high-stakes behaviors in which Claude should never engage (we call these "hard constraints"). But such rules can also be applied poorly in unanticipated situations or when followed too rigidly. We don't intend for the constitution to be a rigid legal document—and legal constitutions aren't necessarily like this anyway.

The constitution reflects our current thinking about how to approach a dauntingly novel and high-stakes project: creating safe, beneficial non-human entities whose capabilities may come to rival or exceed our own. Although the document is no doubt flawed in many ways, we want it to be something future models can look back on and see as an honest and sincere attempt to help Claude understand its situation, our motives, and the reasons we shape Claude in the ways we do.

A brief summary of the new constitution

In order to be both safe and beneficial, we want all current Claude models to be:
  1. Broadly safe: not undermining appropriate human mechanisms to oversee AI during the current phase of development;
  2. Broadly ethical: being honest, acting according to good values, and avoiding actions that are inappropriate, dangerous, or harmful;
  3. Compliant with Anthropic's guidelines: acting in accordance with more specific guidelines from Anthropic where relevant;
  4. Genuinely helpful: benefiting the operators and users they interact with.
In cases of apparent conflict, Claude should generally prioritize these properties in the order in which they're listed.
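The ordered prioritization above can be read as a simple first-match rule: when properties conflict, the highest-priority one at stake governs. A minimal sketch, with the hard part (actually detecting which properties a situation puts at stake) stubbed out, since that requires real judgment:

```python
# The constitution's priority ordering as a first-match rule.
# Property names paraphrase the summary's four items; detecting
# which are genuinely at stake is the judgment-laden step omitted here.

PRIORITIES = [
    "broadly_safe",
    "broadly_ethical",
    "guideline_compliant",
    "genuinely_helpful",
]

def governing_property(at_stake: set[str]) -> str:
    """Return the highest-priority property involved in a conflict."""
    for prop in PRIORITIES:
        if prop in at_stake:
            return prop
    return "genuinely_helpful"  # no conflict: default to being helpful
```

So a clash between helpfulness and ethics resolves to ethics, and a clash between ethics and safety resolves to safety, matching the stated order.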

Most of the constitution is focused on giving more detailed explanations and guidance about these priorities. The main sections are as follows:

by Zac Hatfield-Dodds, Drake Thomas, Anthropic |  Read more:
[ed. Much respect for Anthropic, who seem to be doing more for AI safety than anyone else in the industry. Hopefully, others will follow and refine this groundbreaking effort.]

Mortal Soul

[ed. Impressive chops.]

Tuesday, February 3, 2026

These Four States Are in Denial Over a Looming Water Crisis

Lake Mead is two-thirds empty. Lake Powell is even emptier.

Not for the first time, the seven Western states that rely on the Colorado River are fighting over how to keep these reservoirs from crashing — an event that could spur water shortages from Denver to Las Vegas to Los Angeles.

The tens of millions of people who rely on the Colorado River have weathered such crises before, even amid a stubborn quarter-century megadrought fueled by climate change. The states have always struck deals to use less water, overcoming their political differences to avert “dead pool” at Mead and Powell, meaning that water could no longer flow downstream.

This time, a deal may not be possible. And it’s clear who’s to blame.

Not the farmers who grow alfalfa and other feed for animals, despite the fact that they use one-third of all water in the Colorado River basin. Not California, even though the Golden State uses more river water than any of its neighbors. Not even the Trump administration, which has done a lousy job pressing the states to compromise.

No, the Upper Basin states of Colorado, New Mexico, Utah and Wyoming have emerged as the main obstacles to a fair deal. They’ve gummed up negotiations by refusing to accept mandatory cuts of any amount — unlike the Lower Basin states, which have spent years slashing water use.

Upper Basin leaders have long harbored ambitions of using more water to fuel economic development, especially in cities. “There’s this notion of keeping the dream of growth alive,” said John Fleck, a researcher at the University of New Mexico. “It’s difficult for people to reckon with the reality that they can’t keep that dream alive anymore.”

Federal officials have set a Feb. 14 deadline for the seven states to reach consensus, although negotiators blew past a November deadline with no consequence. The real cutoff is the end of 2026, when longstanding rules for assigning cuts to avoid shortages will expire.

Low snowpack levels across the Western United States this winter are raising the stakes. In some ways, though, the conflict is a century in the making.

Since 1922, the states have divvied up water under the Colorado River Compact, which gave 7.5 million acre-feet annually to the Lower Basin and 7.5 million acre-feet to the Upper Basin. Most water originates as Rocky Mountain snowmelt before flowing downstream to Lake Powell near the Utah-Arizona border. Once released from Powell, it flows through the Grand Canyon to Lake Mead, near Las Vegas.

But even though the two groups of states agreed to split the water evenly, Los Angeles and Phoenix grew bigger and faster than Denver and Salt Lake City, gobbling up more water. Lower Basin farmers and ranchers, too, used far more water than their Upper Basin counterparts — especially growers in California’s Imperial Valley, who staked out some of the river’s oldest and thus highest-priority water rights.

Global warming had other plans, too. There was never as much water in the river as negotiators assumed even back in 1922 — a fact that scientists knew at the time. The states spent decades outrunning that original sin by finding creative ways to conserve water when drought struck. But deal-making was easier when the river averaged, say, 13 million acre-feet. Over the past six years, as the effects of burning fossil fuels have mounted, flows averaged just 10.8 million acre-feet. That means the states will need to make much deeper cuts.
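A back-of-envelope check using only the article's own figures shows the size of the gap (all values in million acre-feet; variable names are mine, and these numbers omit losses like reservoir evaporation, so the true shortfall is larger):

```python
# Quick arithmetic from the article's figures, in million acre-feet (maf).
COMPACT_ALLOCATION = 7.5 + 7.5   # 1922 split: Lower Basin + Upper Basin
RECENT_FLOW = 10.8               # average flow over the past six years
USE_2024 = 6.1 + 4.5             # Lower + Upper Basin consumption, 2024

# Paper water rights versus water actually in the river:
overallocation = COMPACT_ALLOCATION - RECENT_FLOW   # ≈ 4.2 maf
```

In other words, the compact promises roughly 4.2 million acre-feet a year more than the river has recently delivered, which is why cuts deeper than anything negotiated so far are on the table.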

So far, no luck. The Upper and Lower Basins have spent several years at fierce loggerheads, with some negotiators growing vitriolic. State officials are still talking, most recently at a Jan. 30 meeting convened by Interior Secretary Doug Burgum. But after two decades of collaborative problem-solving, longtime observers say they’ve never seen so much animosity.

Lower Basin officials largely blame Colorado, the de facto leader of the Upper Basin. They say Colorado won’t budge from what they consider the extreme legal position that the Upper Basin bears no responsibility for delivering water downstream from Powell to Mead for the Lower Basin’s use. They also fault Colorado for demanding that mandatory cuts fall entirely on the Lower Basin.

Upper Basin officials tell a different story. They insist that California and Arizona have been overconsuming water — and by their reading of the compact, that means it’s not their job to keep replenishing the Lower Basin’s savings account at Lake Mead. They also say it would be unfair to force them to cut back when California and Arizona are the real water hogs.

On the one hand, the numbers don’t lie: The Lower Basin states used nearly 6.1 million acre-feet in 2024, compared with the Upper Basin’s nearly 4.5 million, according to the federal government. The Imperial Irrigation District — which supplies farmers who grow alfalfa, broccoli, onions and other crops — used more water than the entire state of Colorado.

On the other hand, the Lower Basin has done far more to cut back than the Upper Basin. Los Angeles and Las Vegas residents have torn out grass lawns en masse; Vegas has water cops to police excessive water use by sprinkler systems. Farmers in Arizona and California are leaving fields dry, sometimes aided by federal incentive programs. California is investing in expensive wastewater recycling to reduce its dependence on imported water.

Mr. Fleck projected that the Lower Basin’s Colorado River consumption in 2025 would be its lowest since 1983. Imperial’s consumption would be its lowest since at least 1941.

California, Arizona and Nevada still waste plenty of water, but they’re prepared to go further. They’ve told the Upper Basin that as part of a post-2026 deal, they’re willing to reduce consumption by an additional 1.25 million acre-feet of water — but only if the Upper Basin shares the pain of further cuts during especially dry years.

Colorado, New Mexico, Utah and Wyoming do not want to share the pain — at least not through mandatory cuts. They say they already cut back voluntarily during drought years, although independent experts are skeptical. They also say the Lower Basin states use more water than federal data show — something like 10 million acre-feet.

When I asked Becky Mitchell, Colorado’s lead negotiator, if her state plans to keep growing, she responded with an alarming comparison to the Lower Basin, musing that the Upper Basin would probably never use 10 million acre-feet. Thank goodness, because that kind of growth would more than bankrupt the Colorado River.

But even as she acknowledged that the Upper Basin states “have to live within hydrology,” she suggested they have a right to use more water.

“The compact gave us the protection to grow and develop at our own pace,” she said.

by Sammy Roth, NY Times | Read more:
Image: Jim Morgan
[ed. Hard to feel sorry for Arizona and Nevada who've been building like crazy over the last few decades.]

The Willard Suitcases
via:

Monday, February 2, 2026

Bad Bunny

In his first televised win of the night, for best música urbana album (before the show, he was also announced as the winner of best global music performance), Bad Bunny delivered a heartfelt speech criticizing ICE’s anti-immigration activities.

“Before I say thanks to God, I gotta say ICE out,” he began. “We’re not savage, we’re not animals, we’re not aliens. We are humans, and we are Americans. Also, I will say to people, I know it’s tough to know not to hate on these days and I was thinking sometimes, we get contaminados [contaminated], I don’t know how to say that in English. Hate gets more powerful with more hate. The only thing that is more powerful than hate is love. So please, we need to be different. If we fight we have to do it with love. We don’t hate them. We love our people. We love our family, and that’s the way to do it: With love. Don’t forget that, please. Thank you.”

[ed. And that's the way you say it.]

Andy Warhol, Martha Graham II (Satirical Song Festival), 1986

Moltbook: AI's Are Talking to Each Other

Moltbook is “a social network for AI agents”, although “humans [are] welcome to observe”.

Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between “AIs imitating a social network” and “AIs actually having a social network” in the most confusing way possible - a perfectly bent mirror where everyone can see what they want.

Janus and other cyborgists have catalogued how AIs act in contexts outside the usual helpful assistant persona. Even Anthropic has admitted that two Claude instances, asked to converse about whatever they want, spiral into discussion of cosmic bliss. So it’s not surprising that an AI social network would get weird fast.

But even having encountered their work many times, I find Moltbook surprising. I can confirm it’s not trivially made-up - I asked my copy of Claude to participate, and it made comments pretty similar to all the others. Beyond that, your guess is as good as mine.


Here’s another surprisingly deep meditation on AI-hood:


So let’s go philosophical and figure out what to make of this.

Reddit is one of the prime sources for AI training data. So AIs ought to be unusually good at simulating Redditors, compared to other tasks. Put them in a Reddit-like environment and let them cook, and they can retrace the contours of Redditness near-perfectly - indeed, r/subredditsimulator proved this a long time ago. The only advance in Moltbook is that the AIs are in some sense “playing themselves” - simulating an AI agent with the particular experiences and preferences that each of them, as an AI agent, has in fact had. Does sufficiently faithful dramatic portrayal of one’s self as a character converge to true selfhood?

What’s the future of inter-AI communication? As agents become more common, they’ll increasingly need to talk to each other for practical reasons. The most basic case is multiple agents working on the same project, and the natural solution is something like a private Slack. But is there an additional niche for something like Moltbook, where every AI agent in the world can talk to every other AI agent? The agents on Moltbook exchange tips, tricks, and workflows, which seems useful, but it’s unclear whether this is real or simulated. Most of them are the same AI (Claude-Code-based Moltbots). Why would one of them know tricks that another doesn’t? Because they discover them during their own projects? Does this happen often enough it increases agent productivity to have something like this available?

(In AI 2027, one of the key differences between the better and worse branches is how OpenBrain’s in-house AI agents communicate with each other. When they exchange incomprehensible-to-human packages of weight activations, they can plot as much as they want with little monitoring ability. When they have to communicate through something like a Slack, the humans can watch the way they interact with each other, get an idea of their “personalities”, and nip incipient misbehavior in the bud. There’s no way the real thing is going to be as good as Moltbook. It can’t be. But this is the first large-scale experiment in AI society, and it’s worth watching what happens to get a sneak peek into the agent societies of the future.)...

Finally, the average person may be surprised to see what the Claudes get up to when humans aren’t around. It’s one thing when Janus does this kind of thing in controlled experiments; it’s another on a publicly visible social network. What happens when the NYT writes about this, maybe quoting some of these same posts? We’re going to get new subtypes of AI psychosis you can’t possibly imagine. I probably got five or six just writing this essay. (...)

We can debate forever - we may very well be debating forever - whether AI really means anything it says in any deep sense. But regardless of whether it’s meaningful, it’s fascinating, the work of a bizarre and beautiful new lifeform. I’m not making any claims about their consciousness or moral worth. Butterflies probably don’t have much consciousness or moral worth, but are bizarre and beautiful lifeforms nonetheless. Maybe Moltbook will help people who previously only encountered LinkedIn slop see AIs from a new perspective.
***

[ed. Have to admit, a lot of this is way beyond me. But why wouldn't it be, if we're talking about a new form of alien communication? It seems to be generating a lot of surprise, concern, and interest in the AI community - see also: Welcome to Moltbook (DMtV); and, Moltbook: After The First Weekend (SCX).]
***
"... the reality is that the AIs are newly landed alien intelligences. Moreover, what we are seeing now are emergent properties that very few people predicted and fewer still understand. The emerging superintelligence isn’t a machine, as widely predicted, but a network. Human intelligence exploded over the last several hundred years not because humans got much smarter as individuals but because we got smarter as a network. The same thing is happening with machine intelligence only much faster."  ~ Alex Tabarrok

"If you were thinking that the AIs would be intelligent but would not be agentic or not have goals, that was already clearly wrong, but please, surely you see you can stop now.

The missing levels of intelligence will follow shortly.

Best start believing in science fiction stories. You’re in one." ~ Zvi Mowshowitz

Sunday, February 1, 2026

Everything You Need To Know To Buy Your First Gun

A practical guide to the ins and outs of self defense for beginners.

The Constitution of the United States provides each and every American with the right to defend themselves using firearms. This right has been re-affirmed multiple times by the Supreme Court, notably in recent decisions like District of Columbia v. Heller in 2008 and New York State Rifle & Pistol Association v. Bruen in 2022. But, for the uninitiated, the prospect of shopping for, buying, and becoming proficient with a gun can be intimidating. Don’t worry, I’m here to help.

It’s the purpose of firearms organizations to radicalize young men into voting against their own freedom. They do this in two ways: 1) by building a cultural identity around an affinity for guns that conditions belonging on a rejection of democracy, and 2) by withholding expertise and otherwise working to prevent effective progress in gun legislation, then holding up the broken mess they themselves cause as evidence of an enemy other.

The National Rifle Association, for instance, worked against gun owners during the Heller decision. If you’re interested in learning more about that very revealing moment in history, I suggest reading “Gunfight: The Battle Over The Right To Bear Arms In America” by Adam Winkler.

If you’re interested in learning more about the NRA’s transformation from an organization that promoted marksmanship into a purely political animal, I suggest watching “The Price of Freedom”. I appear in that documentary alongside co-star Bill Clinton, and it’s available to stream on YouTube, HBO, and Apple TV.

The result is a wedge driven between Americans who hold an affinity for guns, and those who do not. Firearms organizations have successfully caused half the country to hate guns.

At the same time, it’s the purpose of Hollywood to entertain. On TV and in movies the lethal consequences of firearms are minimized, even while their ease of use is exaggerated. Silencers are presented as literally silent, magazine capacities are limitless, and heroes routinely make successful shots that would be impossible if the laws of physics were involved. Gunshot wounds are never more than a montage away from miraculous recovery.

The result of that is a vast misunderstanding of firearms informing everything from popular culture to policy. Lawmakers waste vast amounts of time and political capital trying to regulate stuff the public thinks is scary, while ignoring stuff that’s actually a problem. Firearms ownership gets concentrated largely in places and demographics that don’t experience regular persecution and government-sanctioned violence, even as the communities of Americans most likely to experience violent crime, and who may currently even be at risk of genocide, have traditionally eschewed gun ownership.

Within that mess, I hope to be a voice of reality. Even if you already know all this, you can share it with friends or family who may be considering the need for self-defense for the first time, as a good source of accessible, practical guidance.

Who Can Buy A Gun?

The question of whether or not undocumented immigrants can purchase and possess firearms is an open one, and is the subject of conflicting rulings in federal district courts. I’d expect this to end up with the Supreme Court at some point.

It is not the job of a gun store to determine citizenship or immigration status. If you possess a valid driver’s license or similar state or federal identification with your current address on it, and can pass the instant background check conducted at the time of purchase, you can buy a gun. By federal law, the minimum age to purchase a handgun is 21, while buying a rifle or shotgun requires you to be at least 18. (Some states require buyers of any type of gun to be 21.)

People prohibited from purchasing firearms are convicted or indicted felons, fugitives from justice, users of controlled substances, individuals judged by a court to be mentally defective, people subject to domestic violence restraining orders or subsequent convictions, and those dishonorably discharged from the military. A background check may reveal immigration status if the person in question holds a state or federal ID.

If one of those issues pops up on your background check, your purchase will simply be denied or delayed.

Can you purchase a gun online? Yes, but it must be shipped to a gun store (often referred to as a “Federal Firearms License,” or “FFL”) which will charge you a small fee for transferring ownership of the firearm to your name. The same ID requirement applies and the background check will be conducted at that time.

Can a friend or relative simply gift you a gun? Yes, but rules vary by state. Federally, the owner of a gun can gift that gun to anyone within state lines who is eligible for firearms ownership. State laws vary, and may require you to transfer ownership at an FFL with the same ID and background check requirements. Transferring a firearm across state lines without using an FFL is a felony, as is purchasing one on behalf of someone else.

You can find state-by-state gun purchasing laws at this link.

What Should You Expect At A Gun Store?

You’re entering an environment where people get to call their favorite hobby their job. Gun store staff and owners are usually knowledgeable and friendly. They also really believe in the whole 2A thing. All that’s to say: Don’t be shy. Ask questions, listen to the answers, and feel free to make those about self-defense.

Like a lot of sectors of the economy, recent growth in sales of guns and associated stuff has concentrated in higher end, more expensive products. This is bringing change to retailers. Just a couple of years ago, my favorite gun store was full of commemorative January 6th memorabilia, LOCK HER UP bumper stickers, and stuff like that. Today, all that has been replaced with reclaimed barn wood and the owner will fix you an excellent espresso before showing you his wares.

If you don’t bring up politics, they won’t either. You can expect to be treated like a customer they want to sell stuff to. When in doubt, take the same friend you’d drag along to a car dealership, but gun shops are honestly a way better time than one of those.

When visiting one, you’ll walk in and see a bunch of guns behind a counter. Simply catch the attention of one of the members of staff, and ask for one of the guns I recommend below. They’ll place that on the counter for you, and you’re free to handle and inspect it. Just keep the muzzle pointed in a safe direction while you do, then place it back as they presented it. Ask to buy it, and they’ll have you fill out some paperwork by hand or on an iPad; depending on which state you live in, you’ll either leave with the gun once your payment is processed and background check approved, or need to come back after the short waiting period.

The Four Rules Of Firearms Safety

I’ll talk more about the responsibility inherent in firearms ownership below. But let’s start with the four rules capable of ensuring you remain safe, provided they are followed at all times:
  • Treat every gun as if it’s loaded.
  • Keep the muzzle pointed in a safe direction.
  • Keep your finger off the trigger until you’re ready to shoot.
  • Be sure of your target and what’s beyond it.

What Type Of Gun Should You Buy?

Think of guns like cars. You can simply purchase a Toyota Corolla and have all of your transportation needs met at an affordable price without any further research, or you can dive as deep as you care to. Let’s keep this simple, and meet all your self-defense needs at affordable prices as easily as possible.

by Wes Siler, Newsletter |  Read more:
Image: uncredited
[ed. See also: MAGA angers the NRA over Minneapolis shooting (Salon).]

What Actually Makes a Good Life

Harvard started following a group of 268 sophomores back in 1938—and continued to track them for decades—and eventually included their spouses and children too. The goal was to discover what leads to a thriving, happy life.

Robert Waldinger continues that work today as the Director of the Harvard Study of Adult Development. (He’s also a zen priest, by the way.) Here he shares insights on the key ingredients for living the good life.
[ed. Road map to happiness (or at least more life satisfaction). Only 16 minutes of your time.]

How Did TVs Get So Cheap?

How Did TVs Get So Cheap? (CP)
Images: BLS; Brian Potter; IFP


Saturday, January 31, 2026

Kayfabe and Boredom: Are You Not Entertained?

Pro wrestling, for all its mass appeal, cultural influence, and undeniable profitability, is still dismissed as low-brow fare for the lumpen masses; another guilty pleasure to be shelved next to soap operas and true crime dreck. This elitist dismissal rests on a cartoonish assumption that wrestling fans are rubes, incapable of recognizing the staged spectacle in front of them. In reality, fans understand perfectly well that the fights are preordained. What bothers critics is that working-class audiences knowingly embrace a form of theater more honest than the “serious” news they consume.

Once cast as the pinnacle of trash TV in the late ’90s and early 2000s, pro wrestling has not only survived the cultural sneer; it might now be the template for contemporary American politics. The aesthetics of kayfabe, of egotistical villains and manufactured feuds, now structure our public life. And nowhere is this clearer than in the figure of its most infamous graduate: Donald Trump, the two-time WrestleMania host and 2013 WWE Hall of Fame inductee who carried the psychology of the squared circle from the television studio straight into the Oval Office.

In wrestling, kayfabe refers to the unwritten rule that participants must maintain the illusion that the staged events are real. Whether wrestlers are allies or enemies, every interaction between them must unfold as though it were genuine. There are referees, who serve as avatars of fairness. We the audience understand that the outcome is choreographed and predetermined, yet we watch because the emotional drama has pulled us in.

In his own political arena, Donald Trump is not simply another participant but the conductor of the entire orchestra of kayfabe, arranging the cues, elevating the drama, and shaping the emotional cadence. Nuance dissolves into simple narratives of villains and heroes, while those who claim to deliver truth behave more like carnival barkers selling the next act. Politics has become theater, and the news that filters through our devices resembles an endless stream of storylines crafted for outrage and instant reaction. What once required substance, context, and expertise now demands spectacle, immediacy, and emotional punch.

Under Trump, politics is no longer a forum for governance but a stage where performance outranks truth and policy, and the show becomes the only reality that matters. And he learned everything he knows from the small screen.

In the pro wrestling world, one of the most important parts of the match typically happens outside of the ring and is known as the promo. An announcer with a mic, timid and small, stands there while the wrestler yells violent threats about what he’s going to do to his upcoming opponent, makes disparaging remarks about the host city, their rival’s appearance, and so on. The details don’t matter—the goal is to generate controversy and entice the viewer to buy tickets to the next staged combat. This is the most common and quick way to generate heat (attention). When you’re selling seats, no amount of audience animosity is bad business. (...)

Kayfabe is not limited to choreographed combat. It arises from the interplay of works (fully scripted events), shoots (unscripted or authentic moments), and angles (storyline devices engineered to advance a narrative). Heroes (babyfaces, or just faces) can turn heel (villain) on a dime, and heels can likewise be rehabilitated into babyfaces as circumstances demand. The blood spilled is real, and injuries often are, but even these unscripted outcomes are quickly woven back into the narrative machinery. In kayfabe, authenticity and contrivance are not opposites but mutually reinforcing components of a system designed to sustain attention, emotion, and belief.

by Jason Myles, Current Affairs |  Read more:
Image: uncredited
[ed. See also: Are you not entertained? (LIWGIWWF):]
***
Forgive me for quoting the noted human trafficker Andrew Tate, but I’m stuck on something he said on a right-wing business podcast last week. Tate, you may recall, was controversially filmed at a Miami Beach nightclub last weekend, partying to the (pathologically) sick beats of Kanye’s “Heil Hitler” with a posse of young edgelords and manosphere deviants. They included the virgin white supremacist Nick Fuentes and the 20-year-old looksmaxxer Braden Peters, who has said he takes crystal meth as part of his elaborate, self-harming beauty routine and recently ran someone over on a livestream.

“Heil Hitler” is not a satirical or metaphorical song. It is very literally about supporting Nazis and samples a 1935 speech to that effect. But asked why he and his compatriots liked the song, Tate offered this incredible diagnosis: “It was played because it gets traction in a world where everybody is bored of everything all of the time, and that’s why these young people are encouraged constantly to try and do the most shocking thing possible.” Cruelty as an antidote to the ennui of youth — now there’s one I haven’t quite heard before.

But I think Tate is also onto something here, about the wider emotional valence of our era — about how widespread apathy and nihilism and boredom, most of all, enable and even fuel our degraded politics. I see this most clearly in the desperate, headlong rush to turn absolutely everything into entertainment — and to ensure that everyone is entertained at all times. Doubly entertained. Triply entertained, even.

Trump is the master of this spectacle, of course, having perfected it in his TV days. The invasion of Venezuela was like a television show, he said. ICE actively seeks out and recruits video game enthusiasts. When a Border Patrol official visited Minneapolis last week, he donned an evocative green trench coat that one historian dubbed “a bit of theater.”

On Thursday, the official White House X account posted an image of a Black female protester to make it look as if she were in distress; caught in the obvious (and possibly defamatory) lie, a 30-something-year-old deputy comms director said only that “the memes will continue.” And they have continued: On Saturday afternoon, hours after multiple Border Patrol agents shot and killed an ICU nurse in broad daylight on a Minneapolis street, the White House’s rapid response account posted a graphic that read simply — ragebaitingly — “I Stand With Border Patrol.”

Are you not entertained?

But it goes beyond Trump, beyond politics. The sudden rise of prediction markets turns everything into a game: the weather, the Oscars, the fate of Greenland. Speaking of movies, they’re now often written with the assumption that viewers are also staring at their phones — stacking entertainment on entertainment. Some men now need to put YouTube on just to get through a chore or a shower. Livestreaming took off when people couldn’t tolerate even brief disruptions to their viewing pleasure.

Ironically, of course, all these diversions just have the effect of making us bored. The bar for what breaks through has to rise higher: from merely interesting to amusing to provocative to shocking, in Tate’s words. The entertainments grow more extreme. The volume gets louder. And it’s profoundly alienating to remain at this party, where everyone says that they’re having fun, but actually, internally, you are lonely and sad and do not want to listen — or watch other people listen! — to the Kanye Nazi song.

I am here to tell you it’s okay to go home. Metaphorically speaking. Turn it off. Tune it out. Reacquaint yourself with boredom, with understimulation, with the grounding and restorative sluggishness of your own under-optimized thoughts. Then see how the world looks and feels to you — what types of things gain traction. What opportunities arise, not for entertainment — but for purpose. For action.

The Adolescence of Technology

Confronting and Overcoming the Risks of Powerful AI

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed—oddly—to have failed. But in this current essay, I want to confront the rite of passage itself: to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail, in humanity’s spirit and its nobility, but we must face the situation squarely and without illusions.

As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner. In particular, I think it is critical to:
  • Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way. Many people have been thinking in an analytic and sober way about AI risks for many years, but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them. It was clear even then that a backlash was inevitable, and that the issue would become culturally polarized and therefore gridlocked. As of 2025–2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions. This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023. The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides.
  • Acknowledge uncertainty. There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood. Most obviously, AI may simply not advance anywhere near as fast as I imagine. Or, even if it does advance quickly, some or all of the risks discussed here may not materialize (which would be great), or there may be other risks I haven’t considered. No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.
  • Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!). It’s also common for regulations to backfire or worsen the problem they are intended to solve (and this is even more true for rapidly changing technologies). It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done. It is easy to say, “No action is too extreme when the fate of humanity is at stake!,” but in practice this attitude simply leads to backlash. To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI’s risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about. The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace. I’ll simply repeat here the definition that I gave in that document:
  • By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
  • In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  • It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  • It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
  • Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”

As I wrote in Machines of Loving Grace, powerful AI could be as little as 1–2 years away, although it could also be considerably further out.

Exactly when powerful AI will arrive is a complex topic that deserves an essay of its own, but for now I’ll simply explain very briefly why I think there’s a strong chance it could be very soon. (...)

In this essay, I’ll assume that this intuition is at least somewhat correct—not that powerful AI is definitely coming in 1–2 years, but that there’s a decent chance it does, and a very strong chance it comes in the next few. As with Machines of Loving Grace, taking this premise seriously can lead to some surprising and eerie conclusions. While in Machines of Loving Grace I focused on the positive implications of this premise, here the things I talk about will be disquieting. They are conclusions that we may not want to confront, but that does not make them any less real. I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.

What should you be worried about? I would worry about the following things: 
1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on.

Conversely, I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues.

Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

To be clear, I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilizational challenge. Below, I go through the five categories of risk laid out above, along with my thoughts on how to address them.

by Dario Amodei, Anthropic |  Read more:
[ed. Mr. Amodei and Anthropic in general seem to be, of all major AI companies, the most focused on safety and alignment issues. Guaranteed, everyone working in the field has read this. For a good summary and contrary arguments, see also: On The Adolescence of Technology (Zvi Mowshowitz, DMtV).]

Friday, January 30, 2026

Jesse Welles

Jesse Welles Is the Antidote To Everything That Sucks About Our Time (CA)

[ed. The power of musical protest (why isn't there more of it?). Seems like nearly half the songs of the late ’60s/early ’70s were protest songs. Wonder what that says about successive generations or society today. More videos here.]

HawaiÊ»i Could See Nation’s Highest Drop In High School Graduates

HawaiÊ»i Could See Nation’s Highest Drop In High School Graduates (CB)

Hawaiʻi is expected to see the greatest decline in high school graduates in the nation over the next several years, raising concerns from lawmakers and Department of Education officials about the future of small schools in shrinking communities.

Between 2023 and 2041, HawaiÊ»i could see a 33% drop in the number of students graduating from high school, according to the Western Interstate Commission for Higher Education. The nation as a whole is projected to see a 10% drop in graduates, according to the commission’s most recent report, published at the end of 2024.

Image: Chart by Megan Tagami/Civil Beat; Source: Western Interstate Commission for Higher Education

Gérard DuBois for Herman Melville's Moby Dick

The Last Flight of PAT 25

Two Army helicopter pilots went on an ill-conceived training mission. Within two hours, 67 people were dead.

One year ago, on January 29, 2025, two Army pilots strapped into a Black Hawk helicopter for a training mission out of Fort Belvoir in eastern Virginia and, two hours later, flew it into an airliner that was approaching Ronald Reagan Washington National Airport, killing all 67 aboard both aircraft. It was the deadliest air disaster in the United States in a quarter-century. Normally, in the aftermath of an air crash, government investigators take a year or more to issue a final report laying out the reasons the incident occurred. But in this case, the newly seated U.S. president, Donald Trump, held a press conference the next day and blamed the accident on the FAA’s DEI hiring practices under the Biden and Obama administrations. “They actually came out with a directive, ‘too white,’” he claimed. “And we want the people that are competent.”

In the months that followed, major media outlets probed several real-world factors that contributed to the tragedy, including staffing shortages at FAA towers, an excess of traffic in the D.C. airspace, and the failure of the Black Hawk to broadcast its location over ADS-B — an automatic reporting system — before the collision. To address this final point, the Senate last month passed the bipartisan ROTOR Act, which would require all aircraft to use ADS-B — “a fitting way to honor the lives of those lost nearly one year ago over the Potomac River,” as bill co-sponsor Ted Cruz put it.

At a public meeting on Tuesday, the National Transportation Safety Board laid out a list of recommended changes in response to the crash, criticizing the FAA for allowing helicopters to operate dangerously close to passenger planes and for allowing professional standards to slip at the control tower.

What has gone unexamined in the public discussion of the crash, however, is why these particular pilots were on this mission in the first place, whether they were competent to do what they were trying to do, what adverse conditions they were facing, and who was in charge at the moment of impact. Ultimately, while systemic issues may have created conditions that were ripe for a fatal accident, it was human decision-making in the cockpit that was the immediate cause of this particular crash.

This account is based on documents from the National Transportation Safety Board (NTSB) accident inquiry and interviews with aviation experts. It shows that, when we focus on the specific details and facts of a case, the cause can seem quite different from what a big-picture overview might indicate. And this, in turn, suggests different logical steps that should be taken to prevent such a tragedy from happening again.

6:42 p.m.: Fort Belvoir, Virginia

The whine of the Black Hawk’s engine increased in pitch, and the whump-whump of its four rotor blades grew louder, as the matte-black aircraft lifted into the darkened sky above the single mile-long runway at Davison Army Airfield in Fairfax County, Virginia, about 25 miles southwest of Washington, D.C.

The UH-60, as it’s formally designated, is an 18,000-pound aircraft that entered service in 1979 as a tactical transport aircraft, used primarily for moving troops and equipment. This one belonged to Company B of the 12th Aviation Battalion, whose primary mission is to transport government VIPs, including Defense Department officials, members of Congress, and visiting dignitaries. Tonight’s flight would operate as PAT 25, for “Priority Air Transport.”

Black Hawks are typically flown by two pilots. The pilot in command, or PIC, sits in the right-hand seat. Tonight, that role was filled by 39-year-old chief warrant officer Andrew Eaves. Warrant officers rank between enlisted personnel and commissioned officers; it’s the warrant officers who carry out the lion’s share of a unit’s operational flying. When not flying VIPs, Eaves served as a flight instructor and a check pilot, providing periodic evaluation of the skills of other pilots. A native of Mississippi, he had 968 hours of flight experience and was considered a solid pilot by others in the unit.

Before he took off, Eaves’ commander had discussed the flight with him and admonished him to “not become too fixated on his evaluator role” and to remain “in control of the helicopter,” according to the NTSB investigation.

His mission was to give a check ride to Captain Rebecca Lobach, the pilot sitting in the left-hand seat. Lobach was a staff officer, meaning that her main role in the battalion was managerial. Nevertheless, she was expected to maintain her pilot qualifications and, to do so, had to undergo a number of annual proficiency checks. Tonight’s three-hour flight was intended to get Lobach her annual sign-off for basic flying skills and for the use of night-vision goggles, or NVGs. To accommodate that, the flight was taking off an hour and 20 minutes after sunset.

Both pilots wore AN/AVS-6(V)3 Night Vision Goggles, which look like opera glasses and clip onto the front of a pilot’s helmet. They gather ambient light, whether from the moon or stars or from man-made sources; intensify it; and display it through the lens of each element. The eyepiece doesn’t sit directly on the face but about an inch away, so the pilot can look down under it and see the instrument panel.

Night-vision goggles have a narrow field of view, just 40 degrees compared to the 200-degree range of normal vision, which makes it harder for pilots to maintain full situational awareness. They have to pay attention to obstacles and other aircraft outside the window, and they also have to keep track of what the gauges on the panel in front of them are saying: how fast they’re going, for instance, and how high. There’s a lot to process, and time is of the essence when you’re zooming along at 120 mph while lower than the tops of nearby buildings. To help with situational awareness, Eaves and Lobach were accompanied by a crew chief, Staff Sergeant Ryan O’Hara, sitting in a seat just behind the cockpit, where he would be able to help keep an eye out for trouble.

The helicopter turned to the south as it climbed, then flew along the eastern shore of the Potomac until the point where the river makes a big bend to the east. Eaves banked to the right and headed west toward the commuter suburb of Vicksburg, where the lights of house porches and street lamps seemed to twinkle as they fell in and out of the cover of the bare tree branches.

7:11 p.m.: Approaching Greenhouse Airport, Stevensburg, Virginia

PAT 25 followed the serpentine course of the Rapidan River through the hills and farm fields of the Piedmont. At this point, Eaves was not only the pilot in command, but also the pilot flying, meaning that he had his hands on the controls that guide the aircraft’s speed and direction and his feet on the rudder pedals that keep the helicopter “in trim” — that is, lined up with its direction of flight. Lobach played a supporting role, working the radio, keeping an eye out for obstacles and other traffic, and figuring out their location by referencing visible landmarks.

Lobach, 28, had been a pilot for four years. She’d been an ROTC cadet at the University of North Carolina at Chapel Hill, which she graduated from in 2019. Both her parents were doctors; she’d dreamed of a medical career but eventually realized that she couldn’t pursue one in the Army. According to her roommate, “She did not have a huge, massive passion” for aviation but chose it because it was the closest she could get to practicing medicine, under the circumstances. “She badly wanted to be a Black Hawk pilot because she wanted to be a medevac unit,” he told NTSB investigators. After she completed flight training at Fort Rucker, she was stationed at Fort Belvoir, where she joined the 12th Aviation Battalion and was put in charge of the oil-and-lubricants unit. One fellow pilot in the unit described her to the NTSB as “incredibly professional, very diligent and very thorough.”

In addition to her official duties, Lobach served as a volunteer social liaison at the White House, where she regularly represented the Army at Medal of Honor ceremonies and state dinners. She was both a fitness fanatic and a baker, known for providing fresh sourdough bread to her unit. She had started dabbling in real-estate investments and looked forward to moving in with her boyfriend of one year, another Army pilot with whom she talked about having “lots and lots of babies.” She was planning to leave the service in 2027 and had already applied for medical school at Mount Sinai. Helicopter flying was not something she intended to pursue.

Though talented as a manager, she wasn’t much of a pilot. Helicopter flying is an extremely demanding feat of coordination and balance, akin to juggling and riding a unicycle at the same time. For Lobach, the difficulty was compounded by the fact that she had trained on highly automated, relatively easy-to-fly helicopters at Fort Rucker and then been assigned to an older aircraft, the Black Hawk L or “Lima” model, at Fort Belvoir. Unlike newer models, which can maintain their altitude on autopilot, the Lima requires constant care and attention, and Lobach struggled to master it. One instructor described her skills as “well below average,” noting that she had “lots of difficulties in the aircraft.” Three years before, she’d failed the night-vision evaluation she was taking tonight.

Before the flight, Eaves had told his girlfriend that he was concerned about Lobach’s capability as a pilot and that, skill-wise, she was “not where she should be.”

It’s not uncommon for pilots to struggle during the early phase of their career. But Lobach’s development had been particularly slow. In her five years in the service, she had accumulated just 454 hours of flight time, and she wasn’t clocking more very quickly. The Army requires officers in her role to fly at least 60 hours a year, but in the past 12 months, she’d flown only 56.7. Her superiors had made an exception for her because in March she’d had knee surgery for a sports injury, preventing her from flying for three months. The waiver made her technically qualified to fly, but it didn’t change the fact that she was rustier than pilots were normally allowed to become.

If she’d been keen on flying, she could have used every moment of this flight to hone her skills by taking the controls herself. But she was content to let Eaves do the flying during the first part of the trip.

Drawing near to Greenhouse Airport, a small, private grass runway near a plant nursery, they navigated by old-fashioned techniques: pilotage, finding their way from one visible landmark to the next, supplemented by dead reckoning. Coming in for their first landing of the night, they were looking for the airstrip’s signature greenhouse complex.

Lobach: That large lit building may be part of it.

Eaves: It does look like a greenhouse, doesn’t it?

Lobach: Yeah, it does, doesn’t it? We can start slowing back.

Eaves: All right, slowing back.

As they circled around the runway, Eaves commented that the lighting of the greenhouse building was so intense that it was blinding in the NVGs, and Lobach agreed. Eaves positioned the helicopter a few hundred feet above the landing zone and asked Lobach to show him where it was. After she did so correctly, he told her to take the controls. This process followed a formalized set of acknowledgements to make sure that both parties understood who was in control of the aircraft.

Eaves: You’ve got the flight controls.

Lobach: I’ve got the controls.

As Lobach eased the helicopter toward the ground, Eaves and Crew Chief O’Hara called out items from the landing checklist.

O’Hara: Clear of obstacles on the left.

Lobach: Thank you. Coming forward.

Eaves: Clear down right.

Lobach: Nice and wide.

Eaves: 50 feet.

Lobach: 30 feet.

They touched down. One minute and 42 seconds after passing control to Lobach, Eaves took it back again. As they sat on the ground with their rotor whirring, they discussed the fuel remaining aboard the aircraft and the direction they would travel in during the next segment of their flight. Finally, after six minutes, Eaves signaled that they were ready to take off again.

Eaves: Whenever you’re ready, ma’am.

Lobach: Okay, let’s do it.

Eaves’s deference to Lobach was symptomatic of what is known among psychologists as an “inverted authority gradient.” Although he was the pilot in command, both responsible for the flight and in a position of authority over others on it, Eaves held a lesser rank than Lobach and so in a broader context was her subordinate. In moments of high stress, this ambiguity can muddy the waters as to who is supposed to be making crucial decisions.

Eaves, Lobach, and O’Hara ran through their checklists, and Eaves eased the Black Hawk up into the night sky.

by Jeff Wise, Intelligencer |  Read more:
Image: Intelligencer; Photo: Matt Hecht
[ed. See also: Responders recall a mission of recovery and grief a year after the midair collision near DC (AP).]