Sunday, February 1, 2026

Everything You Need To Know To Buy Your First Gun

A practical guide to the ins and outs of self defense for beginners.

The Constitution of the United States provides each and every American with the right to defend themselves using firearms. This right has been re-affirmed multiple times by the Supreme Court, notably in recent decisions like District of Columbia v. Heller in 2008 and New York State Rifle & Pistol Association v. Bruen in 2022. But, for the uninitiated, the prospect of shopping for, buying, and becoming proficient with a gun can be intimidating. Don’t worry, I’m here to help.

It’s the purpose of firearms organizations to radicalize young men into voting against their own freedom. They do this in two ways: 1) by building a cultural identity around an affinity for guns that conditions belonging on a rejection of democracy, and 2) by withholding expertise and otherwise working to prevent effective progress in gun legislation, then holding up the broken mess they themselves cause as evidence of an enemy other.

The National Rifle Association, for instance, worked against gun owners during the Heller decision. If you’re interested in learning more about that very revealing moment in history, I suggest reading “Gunfight: The Battle Over The Right To Bear Arms In America” by Adam Winkler.

If you’re interested in learning more about the NRA’s transformation from an organization that promoted marksmanship into a purely political animal, I suggest watching “The Price of Freedom”. I appear in that documentary alongside co-star Bill Clinton, and it’s available to stream on YouTube, HBO, and Apple TV.

The result is a wedge driven between Americans who hold an affinity for guns, and those who do not. Firearms organizations have successfully caused half the country to hate guns.

At the same time, it’s the purpose of Hollywood to entertain. On TV and in movies the lethal consequences of firearms are minimized, even while their ease of use is exaggerated. Silencers are presented as literally silent, magazine capacities are limitless, and heroes routinely make successful shots that would be impossible if the laws of physics were involved. Gunshot wounds are never more than a montage away from miraculous recovery.

The result of that is a vast misunderstanding of firearms informing everything from popular culture to policy. Lawmakers waste vast amounts of time and political capital trying to regulate stuff the public thinks is scary, while ignoring stuff that’s actually a problem. Firearms ownership gets concentrated largely in places and demographics that don’t experience regular persecution and government-sanctioned violence, even while the communities of Americans most likely to experience violent crime and who may currently even be experiencing risk of genocide traditionally eschew gun ownership.

Within that mess, I hope to be a voice of reality. Even if you already know all this, you can share it with friends or family who may be considering the need for self-defense for the first time, as a good source of accessible, practical guidance.

Who Can Buy A Gun?

The question of whether or not undocumented immigrants can purchase and possess firearms is an open one, and is the subject of conflicting rulings in federal district courts. I’d expect this to end up with the Supreme Court at some point.

It is not the job of a gun store to determine citizenship or immigration status. If you possess a valid driver’s license or similar state or federal identification with your current address on it, and can pass the instant background check conducted at the time of purchase, you can buy a gun. By federal law, the minimum age to purchase a handgun is 21, while buying a rifle or shotgun requires you to be at least 18. (Some states require buyers of any type of gun to be 21.)

People prohibited from purchasing firearms are convicted or indicted felons, fugitives from justice, users of controlled substances, individuals judged by a court to be mentally defective, people subject to domestic violence restraining orders or subsequent convictions, and those dishonorably discharged from the military. A background check may reveal immigration status if the person in question holds a state or federal ID.

If one of those issues pops up on your background check, your purchase will simply be denied or delayed.

Can you purchase a gun online? Yes, but it must be shipped to a gun store (often referred to as a “Federal Firearms License,” or “FFL”) which will charge you a small fee for transferring ownership of the firearm to your name. The same ID requirement applies and the background check will be conducted at that time.

Can a friend or relative simply gift you a gun? Yes, but rules vary by state. Federally, the owner of a gun can gift that gun to anyone within state lines who is eligible for firearms ownership. State laws vary, and may require you to transfer ownership at an FFL with the same ID and background check requirements. Transferring a firearm across state lines without using an FFL is a felony, as is purchasing one on behalf of someone else.

You can find state-by-state gun purchasing laws at this link.

What Should You Expect At A Gun Store?

You’re entering an environment where people get to call their favorite hobby their job. Gun store staff and owners are usually knowledgeable and friendly. They also really believe in the whole 2A thing. All that’s to say: Don’t be shy. Ask questions, listen to the answers, and feel free to make those about self-defense.

Like a lot of sectors of the economy, recent growth in sales of guns and associated stuff has concentrated in higher end, more expensive products. This is bringing change to retailers. Just a couple of years ago, my favorite gun store was full of commemorative January 6th memorabilia, LOCK HER UP bumper stickers, and stuff like that. Today, all that has been replaced with reclaimed barn wood and the owner will fix you an excellent espresso before showing you his wares.

If you don’t bring up politics, they won’t either. You can expect to be treated like a customer they want to sell stuff to. When in doubt, take the same friend you’d drag along to a car dealership, but gun shops are honestly a way better time than one of those.

When visiting one, you’ll walk in and see a bunch of guns behind a counter. Simply catch the attention of one of the members of staff, and ask for one of the guns I recommend below. They’ll place that on the counter for you, and you’re free to handle and inspect it. Just keep the muzzle pointed in a safe direction while you do, then place it back as they presented it. Ask to buy it, and they’ll have you fill out some paperwork by hand or on an iPad, and depending on which state you live in, you’ll either leave with the gun once your payment is processed and background check approved, or need to come back after the short waiting period.

The Four Rules Of Firearms Safety

I’ll talk more about the responsibility inherent in firearms ownership below. But let’s start with the four rules capable of ensuring you remain safe, provided they are followed at all times:
  • Treat every gun as if it’s loaded.
  • Keep the muzzle pointed in a safe direction.
  • Keep your finger off the trigger until you’re ready to shoot.
  • Be sure of your target and what’s beyond it.

What Type Of Gun Should You Buy?

Think of guns like cars. You can simply purchase a Toyota Corolla and have all of your transportation needs met at an affordable price without any need for further research, or you can dive as deep as you care to. Let’s keep this simple, and meet all your self-defense needs at affordable prices as easily as possible.

by Wes Siler, Newsletter |  Read more:
Image: uncredited

What Actually Makes a Good Life

Harvard started following a group of 268 sophomores back in 1938, continued to track them for decades, and eventually included their spouses and children too. The goal was to discover what leads to a thriving, happy life.

Robert Waldinger continues that work today as the Director of the Harvard Study of Adult Development. (He’s also a Zen priest, by the way.) Here he shares insights on the key ingredients for living the good life.
[ed. Road map to happiness (or at least more life satisfaction). Only 16 minutes of your time.]

How Did TVs Get So Cheap?

How Did TVs Get So Cheap? (CP)
Images: BLS; Brian Potter; IFP

Saturday, January 31, 2026

Kayfabe and Boredom: Are You Not Entertained?

Pro wrestling, for all its mass appeal, cultural influence, and undeniable profitability, is still dismissed as low-brow fare for the lumpen masses; another guilty pleasure to be shelved next to soap operas and true crime dreck. This elitist dismissal rests on a cartoonish assumption that wrestling fans are rubes, incapable of recognizing the staged spectacle in front of them. In reality, fans understand perfectly well that the fights are preordained. What bothers critics is that working-class audiences knowingly embrace a form of theater more honest than the “serious” news they consume.

Once cast as the pinnacle of trash TV in the late ’90s and early 2000s, pro wrestling has not only survived the cultural sneer; it might now be the template for contemporary American politics. The aesthetics of kayfabe, of egotistical villains and manufactured feuds, now structure our public life. And nowhere is this clearer than in the figure of its most infamous graduate: Donald Trump, the two-time WrestleMania host and 2013 WWE Hall of Fame inductee who carried the psychology of the squared circle from the television studio straight into the Oval Office.

In wrestling, kayfabe refers to the unwritten rule that participants must maintain a charade of truthfulness. Whether you are allies or enemies, every association between wrestlers must unfold realistically. There are referees, who serve as avatars of fairness. We the audience understand that the outcome is choreographed and predetermined, yet we watch because the emotional drama has pulled us in.

In his own political arena, Donald Trump is not simply another participant but the conductor of the entire orchestra of kayfabe, arranging the cues, elevating the drama, and shaping the emotional cadence. Nuance dissolves into simple narratives of villains and heroes, while those who claim to deliver truth behave more like carnival barkers selling the next act. Politics has become theater, and the news that filters through our devices resembles an endless stream of storylines crafted for outrage and instant reaction. What once required substance, context, and expertise now demands spectacle, immediacy, and emotional punch.

Under Trump, politics is no longer a forum for governance but a stage where performance outranks truth and policy, and the show becomes the only reality that matters. And he learned everything he knows from the small screen.

In the pro wrestling world, one of the most important parts of the match typically happens outside of the ring and is known as the promo. An announcer with a mic, timid and small, stands there while the wrestler yells violent threats about what he’s going to do to his upcoming opponent, makes disparaging remarks about the host city, their rival’s appearance, and so on. The details don’t matter—the goal is to generate controversy and entice the viewer to buy tickets to the next staged combat. This is the quickest and most common way to generate heat (attention). When you’re selling seats, no amount of audience animosity is bad business. (...)

Kayfabe is not limited to choreographed combat. It arises from the interplay of works (fully scripted events), shoots (unscripted or authentic moments), and angles (storyline devices engineered to advance a narrative). Heroes (babyfaces, or just faces) can turn heel (villain) at the drop of a hat, and heels can likewise be rehabilitated into babyfaces as circumstances demand. The blood spilled is real, and the injuries often are, but even these unscripted outcomes are quickly woven back into the narrative machinery. In kayfabe, authenticity and contrivance are not opposites but mutually reinforcing components of a system designed to sustain attention, emotion, and belief.

by Jason Myles, Current Affairs |  Read more:
Image: uncredited
[ed. See also: Are you not entertained? (LIWGIWWF):]
***
Forgive me for quoting the noted human trafficker Andrew Tate, but I’m stuck on something he said on a right-wing business podcast last week. Tate, you may recall, was controversially filmed at a Miami Beach nightclub last weekend, partying to the (pathologically) sick beats of Kanye’s “Heil Hitler” with a posse of young edgelords and manosphere deviants. They included the virgin white supremacist Nick Fuentes and the 20-year-old looksmaxxer Braden Peters, who has said he takes crystal meth as part of his elaborate, self-harming beauty routine and recently ran someone over on a livestream.

“Heil Hitler” is not a satirical or metaphorical song. It is very literally about supporting Nazis and samples a 1935 speech to that effect. But asked why he and his compatriots liked the song, Tate offered this incredible diagnosis: “It was played because it gets traction in a world where everybody is bored of everything all of the time, and that’s why these young people are encouraged constantly to try and do the most shocking thing possible.” Cruelty as an antidote to the ennui of youth — now there’s one I haven’t quite heard before.

But I think Tate is also onto something here, about the wider emotional valence of our era — about how widespread apathy and nihilism and boredom, most of all, enable and even fuel our degraded politics. I see this most clearly in the desperate, headlong rush to turn absolutely everything into entertainment — and to ensure that everyone is entertained at all times. Doubly entertained. Triply entertained, even.

Trump is the master of this spectacle, of course, having perfected it in his TV days. The invasion of Venezuela was like a television show, he said. ICE actively seeks out and recruits video game enthusiasts. When a Border Patrol official visited Minneapolis last week, he donned an evocative green trench coat that one historian dubbed “a bit of theater.”

On Thursday, the official White House X account posted an image of a Black female protester to make it look as if she were in distress; caught in the obvious (and possibly defamatory) lie, a 30-something-year-old deputy comms director said only that “the memes will continue.” And they have continued: On Saturday afternoon, hours after multiple Border Patrol agents shot and killed an ICU nurse in broad daylight on a Minneapolis street, the White House’s rapid response account posted a graphic that read simply — ragebaitingly — “I Stand With Border Patrol.”

Are you not entertained?

But it goes beyond Trump, beyond politics. The sudden rise of prediction markets turns everything into a game: the weather, the Oscars, the fate of Greenland. Speaking of movies, they’re now often written with the assumption that viewers are also staring at their phones — stacking entertainment on entertainment. Some men now need to put YouTube on just to get through a chore or a shower. Livestreaming took off when people couldn’t tolerate even brief disruptions to their viewing pleasure.

Ironically, of course, all these diversions just have the effect of making us bored. The bar for what breaks through has to rise higher: from merely interesting to amusing to provocative to shocking, in Tate’s words. The entertainments grow more extreme. The volume gets louder. And it’s profoundly alienating to remain at this party, where everyone says that they’re having fun, but actually, internally, you are lonely and sad and do not want to listen — or watch other people listen! — to the Kanye Nazi song.

I am here to tell you it’s okay to go home. Metaphorically speaking. Turn it off. Tune it out. Reacquaint yourself with boredom, with understimulation, with the grounding and restorative sluggishness of your own under-optimized thoughts. Then see how the world looks and feels to you — what types of things gain traction. What opportunities arise, not for entertainment — but for purpose. For action.

The Adolescence of Technology

Confronting and Overcoming the Risks of Powerful AI

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed—oddly—to have failed. But in this current essay, I want to confront the rite of passage itself: to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail, in humanity’s spirit and its nobility, but we must face the situation squarely and without illusions.

As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner. In particular, I think it is critical to:
  • Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way. Many people have been thinking in an analytic and sober way about AI risks for many years, but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them. It was clear even then that a backlash was inevitable, and that the issue would become culturally polarized and therefore gridlocked. As of 2025–2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions. This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023. The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides.
  • Acknowledge uncertainty. There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood. Most obviously, AI may simply not advance anywhere near as fast as I imagine. Or, even if it does advance quickly, some or all of the risks discussed here may not materialize (which would be great), or there may be other risks I haven’t considered. No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.
  • Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!). It’s also common for regulations to backfire or worsen the problem they are intended to solve (and this is even more true for rapidly changing technologies). It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done. It is easy to say, “No action is too extreme when the fate of humanity is at stake!,” but in practice this attitude simply leads to backlash. To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI’s risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about. The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace. I’ll simply repeat here the definition that I gave in that document:
  • By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
  • In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  • It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  • It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
  • Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”

As I wrote in Machines of Loving Grace, powerful AI could be as little as 1–2 years away, although it could also be considerably further out.

Exactly when powerful AI will arrive is a complex topic that deserves an essay of its own, but for now I’ll simply explain very briefly why I think there’s a strong chance it could be very soon. (...)

In this essay, I’ll assume that this intuition is at least somewhat correct—not that powerful AI is definitely coming in 1–2 years, but that there’s a decent chance it does, and a very strong chance it comes in the next few. As with Machines of Loving Grace, taking this premise seriously can lead to some surprising and eerie conclusions. While in Machines of Loving Grace I focused on the positive implications of this premise, here the things I talk about will be disquieting. They are conclusions that we may not want to confront, but that does not make them any less real. I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.

What should you be worried about? I would worry about the following things: 
1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on.

Conversely, I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues.

Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

To be clear, I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilizational challenge. Below, I go through the five categories of risk laid out above, along with my thoughts on how to address them.

by Dario Amodei, Anthropic |  Read more:
[ed. Mr. Amodei and Anthropic in general seem to be, of all major AI companies, the most focused on safety and alignment issues. Guaranteed, everyone working in the field has read this. For a good summary and contrary arguments, see also: On The Adolescence of Technology (Zvi Mowshowitz, DMtV).]

Friday, January 30, 2026

Jesse Welles

Jesse Welles Is the Antidote To Everything That Sucks About Our Time (CA)

[ed. The power of musical protest (why isn't there more of it?). Seems like nearly half the songs of the late-60s/early 70s were protest songs. Wonder what that says about successive generations or society today. More videos here.]

Hawaiʻi Could See Nation’s Highest Drop In High School Graduates

Hawaiʻi Could See Nation’s Highest Drop In High School Graduates (CB)

Hawaiʻi is expected to see the greatest decline in high school graduates in the nation over the next several years, raising concerns from lawmakers and Department of Education officials about the future of small schools in shrinking communities.

Between 2023 and 2041, Hawaiʻi could see a 33% drop in the number of students graduating from high school, according to the Western Interstate Commission for Higher Education. The nation as a whole is projected to see a 10% drop in graduates, according to the commission’s most recent report, published at the end of 2024.

Image: Chart: Megan Tagami/Civil Beat; Source: Western Interstate Commission for Higher Education

Gérard DuBois for Moby Dick by Herman Melville

The Last Flight of PAT 25

Two Army helicopter pilots went on an ill-conceived training mission. Within two hours, 67 people were dead.

One year ago, on January 29, 2025, two Army pilots strapped into a Black Hawk helicopter for a training mission out of Fort Belvoir in eastern Virginia and, two hours later, flew it into an airliner that was approaching Ronald Reagan Washington National Airport, killing all 67 aboard both aircraft. It was the deadliest air disaster in the United States in a quarter-century. Normally, in the aftermath of an air crash, government investigators take a year or more to issue a final report laying out the reasons the incident occurred. But in this case, the newly seated U.S. president, Donald Trump, held a press conference the next day and blamed the accident on the FAA’s DEI under the Biden and Obama administrations. “They actually came out with a directive, ‘too white,’” he claimed. “And we want the people that are competent.”

In the months that followed, major media outlets probed several real-world factors that contributed to the tragedy, including staffing shortages at FAA towers, an excess of traffic in the D.C. airspace, and the failure of the Black Hawk to broadcast its location over ADS-B — an automatic reporting system — before the collision. To address this final point, the Senate last month passed the bipartisan ROTOR Act, which would require all aircraft to use ADS-B — “a fitting way to honor the lives of those lost nearly one year ago over the Potomac River,” as bill co-sponsor Ted Cruz put it.

At a public meeting on Tuesday, the National Transportation Safety Board laid out a list of recommended changes in response to the crash, criticizing the FAA for allowing helicopters to operate dangerously close to passenger planes and for allowing professional standards to slip at the control tower.

What has gone unexamined in the public discussion of the crash, however, is why these particular pilots were on this mission in the first place, whether they were competent to do what they were trying to do, what adverse conditions they were facing, and who was in charge at the moment of impact. Ultimately, while systemic issues may have created conditions that were ripe for a fatal accident, it was human decision-making in the cockpit that was the immediate cause of this particular crash.

This account is based on documents from the National Transportation Safety Board (NTSB) accident inquiry and interviews with aviation experts. It shows that, when we focus on the specific details and facts of a case, the cause can seem quite different from what a big-picture overview might indicate. And this, in turn, suggests different logical steps that should be taken to prevent such a tragedy from happening again.

6:42 p.m.: Fort Belvoir, Virginia

The whine of the Black Hawk’s engine increased in pitch, and the whump-whump of its four rotor blades grew louder, as the matte-black aircraft lifted into the darkened sky above the single mile-long runway at Davison Army Airfield in Fairfax County, Virginia, about 25 miles southwest of Washington, D.C.

The UH-60, as it’s formally designated, is an 18,000-pound aircraft that entered service in 1979 as a tactical transport aircraft, used primarily for moving troops and equipment. This one belonged to Company B of the 12th Aviation Battalion, whose primary mission is to transport government VIPs, including Defense Department officials, members of Congress, and visiting dignitaries. Tonight’s flight would operate as PAT 25, for “Priority Air Transport.”

Black Hawks are typically flown by two pilots. The pilot in command, or PIC, sits in the right-hand seat. Tonight, that role was filled by 39-year-old chief warrant officer Andrew Eaves. Warrant officers rank between enlisted personnel and commissioned officers; it’s the warrant officers who carry out the lion’s share of a unit’s operational flying. When not flying VIPs, Eaves served as a flight instructor and a check pilot, providing periodic evaluation of the skills of other pilots. A native of Mississippi, he had 968 hours of flight experience and was considered a solid pilot by others in the unit.

Before he took off, Eaves’ commander had discussed the flight with him and admonished him to “not become too fixated on his evaluator role” and to remain “in control of the helicopter,” according to the NTSB investigation.

His mission was to give a check ride to Captain Rebecca Lobach, the pilot sitting in the left-hand seat. Lobach was a staff officer, meaning that her main role in the battalion was managerial. Nevertheless, she was expected to maintain her pilot qualifications and, to do so, had to undergo a number of annual proficiency checks. Tonight’s three-hour flight was intended to get Lobach her annual sign-off for basic flying skills and for the use of night-vision goggles, or NVGs. To accommodate that, the flight was taking off an hour and 20 minutes after sunset.

Both pilots wore AN/AVS-6(V)3 Night Vision Goggles, which look like opera glasses and clip onto the front of a pilot’s helmet. They gather ambient light, whether from the moon or stars or from man-made sources; intensify it; and display it through the lens of each element. The eyepiece doesn’t sit directly on the face but about an inch away, so the pilot can look down under it and see the instrument panel.

Night-vision goggles have a narrow field of view, just 40 degrees compared to the 200-degree range of normal vision, which makes it harder for pilots to maintain full situational awareness. They have to pay attention to obstacles and other aircraft outside the window, and they also have to keep track of what the gauges on the panel in front of them are saying: how fast they’re going, for instance, and how high. There’s a lot to process, and time is of the essence when you’re zooming along at 120 mph while lower than the tops of nearby buildings. To help with situational awareness, Eaves and Lobach were accompanied by a crew chief, Staff Sergeant Ryan O’Hara, sitting in a seat just behind the cockpit, where he would be able to help keep an eye out for trouble.

The helicopter turned to the south as it climbed, then flew along the eastern shore of the Potomac until the point where the river makes a big bend to the east. Eaves banked to the right and headed west toward the commuter suburb of Vicksburg, where the lights of house porches and street lamps seemed to twinkle as they fell in and out of the cover of the bare tree branches.

7:11 p.m.: Approaching Greenhouse Airport, Stevensburg, Virginia

PAT 25 followed the serpentine course of the Rapidan River through the hills and farm fields of the Piedmont. At this point, Eaves was not only the pilot in command, but also the pilot flying, meaning that he had his hands on the controls that guide the aircraft’s speed and direction and his feet on the rudder pedals that keep the helicopter “in trim” — that is, lined up with its direction of flight. Lobach played a supporting role, working the radio, keeping an eye out for obstacles and other traffic, and figuring out their location by referencing visible landmarks.

Lobach, 28, had been a pilot for four years. She’d been an ROTC cadet at the University of North Carolina at Chapel Hill, which she graduated from in 2019. Both her parents were doctors; she’d dreamed of a medical career but eventually realized that she couldn’t pursue one in the Army. According to her roommate, “She did not have a huge, massive passion” for aviation but chose it because it was the closest she could get to practicing medicine, under the circumstances. “She badly wanted to be a Black Hawk pilot because she wanted to be a medevac unit,” he told NTSB investigators. After she completed flight training at Fort Rucker, she was stationed at Fort Belvoir, where she joined the 12th Aviation Battalion and was put in charge of the oil-and-lubricants unit. One fellow pilot in the unit described her to the NTSB as “incredibly professional, very diligent and very thorough.”

In addition to her official duties, Lobach served as a volunteer social liaison at the White House, where she regularly represented the Army at Medal of Honor ceremonies and state dinners. She was both a fitness fanatic and a baker, known for providing fresh sourdough bread to her unit. She had started dabbling in real-estate investments and looked forward to moving in with her boyfriend of one year, another Army pilot with whom she talked about having “lots and lots of babies.” She was planning to leave the service in 2027 and had already applied for medical school at Mount Sinai. Helicopter flying was not something she intended to pursue.

Though talented as a manager, she wasn’t much of a pilot. Helicopter flying is an extremely demanding feat of coordination and balance, akin to juggling and riding a unicycle at the same time. For Lobach, the difficulty was compounded by the fact that she had trained on highly automated, relatively easy-to-fly helicopters at Fort Rucker and then been assigned to an older aircraft, the Black Hawk L or “Lima” model, at Fort Belvoir. Unlike newer models, which can maintain their altitude on autopilot, the Lima requires constant care and attention, and Lobach struggled to master it. One instructor described her skills as “well below average,” noting that she had “lots of difficulties in the aircraft.” Three years before, she’d failed the night-vision evaluation she was taking tonight.

Before the flight, Eaves had told his girlfriend that he was concerned about Lobach’s capability as a pilot and that, skill-wise, she was “not where she should be.”

It’s not uncommon for pilots to struggle during the early phase of their career. But Lobach’s development had been particularly slow. In her five years in the service, she had accumulated just 454 hours of flight time, and she wasn’t clocking more very quickly. The Army requires officers in her role to fly at least 60 hours a year, but in the past 12 months, she’d flown only 56.7. Her superiors had made an exception for her because in March she’d had knee surgery for a sports injury, preventing her from flying for three months. The waiver made her technically qualified to fly, but it didn’t change the fact that she was rustier than pilots were normally allowed to become.

If she’d been keen on flying, she could have used every moment of this flight to hone her skills by taking the controls herself. But she was content to let Eaves do the flying during the first part of the trip.

Drawing near to Greenhouse Airport, a small, private grass runway near a plant nursery, they navigated via an old-fashioned technique called pilotage, using landmarks and dead reckoning to find their way from point to point. Coming in for their first landing of the night, they were looking for the airstrip’s signature greenhouse complex.

Lobach: That large lit building may be part of it.

Eaves: It does look like a greenhouse, doesn’t it?

Lobach: Yeah, it does, doesn’t it? We can start slowing back.

Eaves: All right, slowing back.

As they circled around the runway, Eaves commented that the lighting of the greenhouse building was so intense that it was blinding in the NVGs, and Lobach agreed. Eaves positioned the helicopter a few hundred feet above the landing zone and asked Lobach to show him where it was. After she did so correctly, he told her to take the controls. This process followed a formalized set of acknowledgements to make sure that both parties understood who was in control of the aircraft.

Eaves: You’ve got the flight controls.

Lobach: I’ve got the controls.

As Lobach eased the helicopter toward the ground, Eaves and Crew Chief O’Hara called out items from the landing checklist.

O’Hara: Clear of obstacles on the left.

Lobach: Thank you. Coming forward.

Eaves: Clear down right.

Lobach: Nice and wide.

Eaves: 50 feet.

Lobach: 30 feet.

They touched down. One minute and 42 seconds after passing control to Lobach, Eaves took it back again. As they sat on the ground with their rotor whirring, they discussed the fuel remaining aboard the aircraft and the direction they would travel in during the next segment of their flight. Finally, after six minutes, Eaves signaled that they were ready to take off again.

Eaves: Whenever you’re ready, ma’am.

Lobach: Okay, let’s do it.

Eaves’s deference to Lobach was symptomatic of what is known among psychologists as an “inverted authority gradient.” Although he was the pilot in command, both responsible for the flight and in a position of authority over others on it, Eaves held a lesser rank than Lobach and so in a broader context was her subordinate. In moments of high stress, this ambiguity can muddy the waters as to who is supposed to be making crucial decisions.

Eaves, Lobach, and O’Hara ran through their checklists, and Eaves eased the Black Hawk up into the night sky.

by Jeff Wise, Intelligencer |  Read more:
Image: Intelligencer; Photo: Matt Hecht
[ed. See also: Responders recall a mission of recovery and grief a year after the midair collision near DC (AP).]

Here Come the Beetles

The nearly 100-year-old Wailua Municipal Golf Course is home to more than 580 coconut trees. It’s also one of Kaua‘i’s most visible sites for coconut rhinoceros beetle damage.

Located makai of Kūhiō Highway, trees that would normally have full, verdant leaves are dull and have V-shaped cuts in their fronds. Some are bare and look more like matchsticks.

It’s not for lack of trying to mitigate the invasive pest. The trees’ crowns have been sprayed with a pesticide twice, and the trunks were injected twice with a systemic pesticide for longer term protection.

The Kaua‘i Department of Parks & Recreation maintains that even though the trees still look damaged, the treatments are working. Staff have collected 1,679 fallen, dead adult beetles over the last three years.

The most recent treatment, a systemic pesticide that travels through the trees’ vascular systems, was done in January 2025. While crown sprays kill the beetle on contact, systemic pesticides require the beetles to feed from the trees to die. The bugs eat the trees’ hearts — where new fronds develop — so it can take months for foliage damage to appear.
 
“The general public sees these trees that are damaged and thinks, ‘Oh my goodness they’re getting whacked,’ but in actuality, we need them to get whacked to kill (the beetles),” said Patrick Porter, county parks director.

But with the beetles continuing to spread around the island, the county is increasingly turning its attention to green waste, mulch piles and other breeding sites, where beetles spend four to six months growing from eggs to adults. A single adult female beetle can lay up to 140 eggs in her lifetime.

“The reality is if you don’t go after the larvae and you don’t go after your mulch cycle, you’re just pissing in the wind,” said Kaua‘i County Council member Fern Holland. “Because there are just going to be hundreds and hundreds of them hatching all the time, and you can’t go after all of them.” (...)

Last May, the County Council allocated $100,000 for invasive species and another $100,000 for CRB. It was the first time the county designated funds specifically to address the beetle.

Niki Kunioka-Volz, economic development specialist with the Kauaʻi Office of Economic Development, said none of that funding has been spent yet. They’re considering using it to help get the breeding site at the Wailua golf course under control, such as by purchasing an air curtain burner, a fan-powered incinerator of sorts to dispose of green waste. The burner could also be a tool for the broader community. (...)

In 2024, the county received $200,000 from the state Department of Agriculture. That money was used for a CRB outreach campaign, training CRB detection dogs and distributing deterrent materials. State funding was also expected to help the county purchase a curtain burner, but that plan fell through.

Earlier this month, state legislators threatened to cut invasive species funding from the newly expanded Hawai‘i Department of Agriculture and Biosecurity over its slow progress in curbing threats such as coconut rhinoceros beetles.

“I’d like to see the pressure put on them to release the funds to the counties,” Holland said.

by Noelle Fujii-Oride, Honolulu Civil Beat | Read more:
Image: Kevin Fujii/David Croxford/Civil Beat
[ed. Tough, ugly, able to leap sleeping bureaucrats in a single bound. See also: As Palm-Killing Beetles Spread On Big Island, State Action Is Slow (CB):]
***
It has been nearly two years since the first coconut rhinoceros beetle was discovered on Hawaiʻi island. And yet, despite ongoing concern by residents, the state is moving slowly in devising its response.

Seven months ago, the state’s Department of Agriculture and Biosecurity said it would begin working to stop the spread of CRB, within and beyond North Kona. But a meeting of the agency’s board Tuesday marked the first concrete step to do so by regulators. Now, as agriculture department staff move to streamline and resolve apparent issues in the proposed regulations, it will likely take until March for the board to consider implementing them.

Many of the attendees at Tuesday’s meeting, including residents of other islands, said that the state is lagging on its pledge to regulate the movement of agricultural materials while the destructive pest is spreading and killing both the island’s coconut palms and its endangered, endemic loulu palms.

The First Two Years

Before making landfall on Hawaiʻi island in 2023, the beetles spent almost a decade in apparent confinement on Oʻahu.

At first they appeared to be isolated to Waikoloa. Then, in March of last year, larvae and beetles were discovered at Kona International Airport and the state-owned, 179-acre Keāhole Agriculture Park, before spreading further.

In response, the county implemented a voluntary order to discourage the movement of potentially-infested live plants, mulch and green waste, and other landscaping materials such as compost from the area in June 2025. The order was described as “a precursor to a mandatory compliance structure” to be implemented by the state, according to a press release from the time. (...)

The board spent about an hour considering the petition and hearing testimony. And while many who testified made recommendations about actual protocol that might be put into place, the board merely voted to move forward in the process. So it’s not yet clear whether it will adopt the Big Island petitioner’s proposed rules or create its own.

If You Want That Tattoo Erased It’s Going to Hurt and It’s Going to Cost You

Colin Farrell’s had it done — many times. So have Angelina Jolie and Megan Fox. Heck, even Bart Simpson did.

Whether it’s Marilyn Monroe’s face, Billy Bob Thornton’s name, a sultry rose or even Bart’s partially inscribed homage to his mother, some tattoos simply have to go for one reason or many others.

But the process of taking them off is longer, much more costly and — ouch — far more painful than getting them put on, according to professionals in the industry.

Also, due to health reasons, some souls who braved the ink needle should be wary of the laser when having their body art erased or covered up.

Tattoos have been around for centuries

The oldest known tattoos were found on remains of a Neolithic man who lived in the Italian Alps around 3,000 B.C. Many mummies from ancient Egypt also have tattoos, as do remains from cultures around the world.

Tattoo removal likely is almost as old as the practice of inking and included scraping the skin to get the pigments off or out.

A more “civilized” method evolved in the 1960s when Leon Goldman, a University of Cincinnati dermatologist, used “hot vapor bursts” from a laser on tattoos and the skin that bore them.

Many choose tattoos to honor someone

A 2023 survey by the Pew Research Center determined that 32% of adults in the United States have tattoos. About 22% have more than one, according to the survey.

Honoring or remembering someone or something accounts for the biggest reason Americans get their first tattoo. About 24% in the survey regret getting them.

Tracy Herrmann, 54, of Plymouth, Michigan, just west of Detroit, has eight tattoos and is in the process of getting four phrases, including “One step at a time,” “Surrender,” and “Through it all,” removed from her feet and arms.

She started inking up about six years ago and says she doesn’t regret getting tattoos.

“Maybe a different choice, maybe,” Herrmann said following her fourth tattoo removal session at Chroma Tattoo Studio & Laser Tattoo Removal in Brighton, Michigan.

“There was a period in my life that I felt I needed some extra reminder,” Herrmann said. “I thought I would just embrace the period in my life, so that helped and then just to surrender and give it over to God. So, half of them were really, really pivotal to getting me over a hump in my life.”

Boredom among reasons to remove tats

Herrmann says the four getting lasered are part of her past and that’s where she wants them to stay.

“Now, I just want to move forward and go back to the original skin I was born with,” she said. “But the other four I’m going to keep. They still mean a lot to me, but they’re more hidden.”

Reasons for getting a tattoo removed are as varied and personal as the reasons for getting them in the first place, says Ryan Wright, a registered nurse and owner of Ink Blasters Precision Laser Tattoo Removal in Livonia, Michigan.

“A lot of people, when they get a new tattoo that makes some of their old tattoos look bad, they get (the older tattoos) removed or reworked,” Wright said.

Chroma owner Jaime Howard says boredom plays a role, too.

“They got a tattoo off a whim and they’re like ‘hey, I’m really bored with this. I don’t want this anymore,’” Howard said. “It’s not about hating their tattoo, it’s about change for yourself.”

Like snapping a ‘rubber band’ on your skin

Howard and Wright, like many who perform laser removals, use something called a Q-switched, or quality-switched, laser. It concentrates the light energy into intense short bursts or pulses.

“It’s very painful. Nine out of 10,” Wright said. “It kind of feels like a rubber band being snapped on your skin with hot bacon grease.”

Howard has had some of her tattoos removed and admits the procedure is painful.

But “you get through it,” she said. “A couple of days later you’re still feeling the sunburn, but it’s OK. If you want it bad enough, you’ll take it off because that’s what you want.”

Light energy from the laser breaks the ink into particles small enough to be absorbed by the body and later excreted as waste.

It’s not a “one and done,” Wright said. Tattoo removal can take eight to 12 treatments or more. A new tattoo can go over the old one once the skin has had time to sufficiently heal.

Howard consulted with Herrmann as her fourth session at Chroma began. They spoke about the previous session and how far along they were with the ink removal. Both then donned dark sunglasses to protect their eyes from the brightness of the laser. Herrmann winced. Seconds later, it was done. But she still has more sessions ahead.

“Oh gosh, it’s a 10 when you’re getting it done,” Herrmann said of the pain. “It’s pretty intense. It’s doable. I know price is sometimes an issue, but it’s worth it.”

Removal can be costly

Howard says the minimum she charges is $100 per session. Wright says that on a typical day he does about a dozen treatments and that cost depends on the square-inch size of the tattoo.

“The cost is really the technology in the laser,” Wright said. “It’s not like a time thing. Most treatments are under a minute. You’re paying for the technology and the person who knows how to use the technology. You can damage the skin if you don’t know what you’re doing.”

by Corey Williams, AP |  Read more:
Image: the author

Thursday, January 29, 2026

What is College For in the Age of AI?

When I left for college in the fall of 1991, the internet era was just beginning. By sophomore year, I received my first email address. By junior year, the first commercial web browser was released. The summer after graduation, I worked as a reporter at the Arizona Republic covering the internet’s rise in our everyday lives, writing about the opening of internet cafés and businesses launching their first websites. I was part of an in-between class of graduates who went off to college just before a new technology transformed what would define our careers.

So when Alina McMahon, a recent University of Pittsburgh graduate, described her job search to me, I immediately recognized her predicament. McMahon began college before AI was a thing. Three and a half years later, she graduated into a world where it was suddenly everywhere. McMahon majored in marketing, with a minor in film and media studies. “I was trying to do the stable option,” she said of her business degree. She followed the standard advice given to all undergraduates hoping for a job after college: Network and intern. Her first “coffee chat” with a Pitt alumnus came freshman year; she landed three internships, including one in Los Angeles at Paramount in media planning. There she compiled competitor updates and helped calculate metrics for which billboard advertisements the company would buy.

But when she started to apply for full-time jobs, all she heard back — on the rare occasions she heard anything — was that roles were being cut, either because of AI or outsourcing. Before pausing her job search recently, McMahon had applied to roughly 150 jobs. “I know those are kind of rookie numbers in this environment,” she said jokingly. “It’s very discouraging.”

McMahon’s frustrations are pretty typical among job seekers freshly out of college. There were 15 percent fewer entry-level and internship job postings in 2025 than the year before, according to Handshake, a job-search platform popular with college students; meanwhile, applications per posting rose 26 percent. The unemployment rate for new college graduates was 5.7 percent in December, more than a full percentage point above the national average and higher even than what high-school graduates face.

How much AI is to blame for the fragile entry-level job market is unclear. Several research studies show AI is hitting young college-educated workers disproportionately, but broader economic forces are part of the story, too. As Christine Cruzvergara, Handshake’s chief education-strategy officer, told me, AI isn’t “taking” jobs so much as employers are “choosing” to replace parts of jobs with automation rather than redesign roles around workers. “They’re replacing people instead of enabling their workforce,” she said.

The fact that Gen-Z college interns and recent graduates are the first workers being affected by AI is surprising. Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools. But a study from Stanford’s Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren’t feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, “learn tricks of the trade that maybe never get written down,” which allow them to better compete with AI than those new to a field who lack such “tacit knowledge.” For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.

For employers, AI also complicates an already delicate calculus around hiring new talent. College interns and recent college graduates require — as they always have — time and resources to train. “It’s real easy to say ‘college students are expensive,’” Simon Kho told me in an interview. “Not from a salary standpoint, but from the investment we have to make.” Until recently, Kho ran early career programs at Raymond James Financial, where it took roughly 18 months for new college hires to pay off in terms of productivity. And then? “They get fidgety,” he added, and look for other jobs. “So you can see the challenges from an HR standpoint: ‘Where are we getting value? Will AI solve this for us?’”

Weeks after Stanford’s study was released, another by two researchers at Harvard University also found that less experienced employees were more affected by AI. And it revealed that where junior employees went to college influenced whether they stayed employed. Graduates from elite and lower-tier institutions fared better than those from mid-tier colleges, who experienced the steepest drop in employment. The study didn’t spell out why, but when I asked one of the authors, Seyed Mahdi Hosseini Maasoum, he offered a theory: Elite graduates may have stronger skills; lower-tier graduates may be cheaper. “Mid-tier graduates end up somewhat in between — they’re relatively costly to hire but not as skilled as graduates of the very prestigious universities — so they are hit the hardest,” Maasoum wrote to me.

Just three years after ChatGPT’s release, the speed of AI’s disruption of the early career job market is even catching the attention of observers at the highest level of the economy. In September, Fed chair Jerome Powell flagged the “particular focus on young people coming out of college” when asked about AI’s effects on the labor market. Brynjolfsson told me that if current trends hold, the impact of AI will be “quite a bit more noticeable” by the time the next graduating class hits the job market this spring. Employers already see it coming: In a recent survey by the National Association of Colleges and Employers, nearly half of 200 employers rated the outlook for the class of 2026 as poor or fair, the most pessimistic outlook since the first year of the pandemic.

The upheaval in the early career job market has caught higher education flat-footed. Colleges have long had an uneasy relationship with their unofficial role as vocational pipelines. When generative AI burst onto campuses in 2022, many administrators and faculty saw it primarily as a threat to learning — the world’s greatest cheating tool. Professors resurrected blue books for in-classroom exams and demanded that AI tools added to software be blocked in their classes.

Only now are colleges realizing that the implications of AI are much greater and are already outrunning their institutional ability to respond. As schools struggle to update their curricula and classroom policies, they also confront a deeper problem: the suddenly enormous gap between what they say a degree is for and what the labor market now demands. In that mismatch, students are left to absorb the risk. Alina McMahon and millions of other Gen-Zers like her are caught in a muddled in-between moment: colleges only just beginning to think about how to adapt and redefine their mission in the post-AI world, and a job market that’s changing much, much faster.

What feels like a sudden, unexpected dilemma for Gen-Z graduates has only been made worse by several structural changes across higher education over the past decade.

by Jeffrey Selingo, Intelligencer | Read more:
Image: Intelligencer; Photos: Getty

Frito Pie

Not quite nachos, and not quite pie, this comforting casserole is a cheesy and crunchy delight that is thought to have roots in both Texas and New Mexico. In its most classic (and some might say best) form, a small bag of Fritos corn chips is split down the middle, placed in a paper boat and piled high with chili, topped with cheese, diced onion, pickled jalapeños, sour cream and pico de gallo, then eaten with a plastic fork. (It is often called a “walking taco,” because it’s eaten on-the-go, at sporting events and fairs.) This version is adapted to feed a crowd: The Fritos, Cheddar and chili — made with ground beef, pinto beans, taco seasoning and enchilada sauce — are layered in a casserole dish, baked, then topped with a frenzy of fun toppings. For maximum crunch, save a cup of Fritos for topping as you eat.

Ingredients

Yield: 6 to 8 servings
1 tablespoon olive or vegetable oil
1 pound ground beef, preferably 20-percent fat
1 medium yellow onion, diced
1 (1-ounce) packet taco seasoning (or 3 tablespoons of a homemade taco seasoning)
2 (15-ounce) cans pinto beans, drained and rinsed
1 (19-ounce) can red enchilada sauce (or 2½ cups of homemade enchilada sauce)
2 (9-ounce) packages or 1 (18-ounce) package Fritos, 1 cup reserved for serving (8 to 10 cups)
8 ounces shredded Cheddar (about 2 cups)
Diced white onion, sliced scallions, pickled jalapeños, sour cream or pico de gallo, or a combination, for serving (optional)

Preparation 

Step 1: Heat the oven to 400 degrees. Coat a 9-by-13-inch baking dish with cooking spray.

Step 2: In a large Dutch oven or heavy-bottomed skillet, heat the oil over medium-high. Add the beef and onion, breaking up the meat with a wooden spoon. Cook, stirring occasionally, until the meat is browned and the onion is translucent, 8 to 10 minutes. Lower the heat if the meat is browning too quickly.

Step 3: Sprinkle the taco seasoning over the meat mixture and pour in ¾ cup of water; mix well. Bring to a simmer and cook until the liquid thickens and coats the pan, scraping up any browned bits, 2 to 3 minutes. Add the beans and enchilada sauce, stirring until combined. Bring to a simmer and cook for 5 minutes.

Step 4: Assemble the pie: Sprinkle half of the Fritos in the prepared baking dish, followed by half of the Cheddar. Cover with all of the meat filling. Finally, add the remaining Fritos (minus the reserved cup) and Cheddar.

Step 5: Bake until the cheese is melted and bubbly, 7 to 10 minutes. Rest for 5 minutes, then add the desired toppings to the casserole, or spoon into individual bowls and have eaters top as they please. Add reserved Fritos for more crunch, if desired.

by Kia Damon, NY Times |  Read more:
Image: Christopher Testani for The New York Times. Food Stylist: Simon Andrews.
[ed. Forgot about these. Should be great for Seattle's upcoming Super Bowl win.] 

Anne Zahalka - The Mathematician

Wednesday, January 28, 2026

Greg Girard - Hong Kong Cafe, Vancouver, Canada, 1975

On the Falsehoods of a Frictionless Relationship

To love is to be human. Or is it? As human-chatbot relationships become more common, the Times Opinion culture editor Nadja Spiegelman talks to the psychotherapist Esther Perel about what really defines human connection, and what we’re seeking when we look to satisfy our emotional needs on our phones.

Spiegelman: ...I’m curious about how you feel, in general, about people building relationships with A.I. Are these relationships potentially healthy? Is there a possibility for a relationship with an A.I. to be healthy?

Perel: Maybe before we answer it as a yes or no, healthy or unhealthy, I’ve been trying to think to myself: depending on how you define relationships, that will color your answer about what it means when it’s between a human and A.I.

But first, we need to define what goes on in relationships or what goes on in love. The majority of the time when we talk about love in A.I. or intimacy in A.I., we talk about it as feelings. But love is more than feelings.

Love is an encounter. It is an encounter that involves ethical demands, responsibility, and that is embodied. That embodiment means that there is physical contact, gestures, rhythms, gaze, frottement (friction). There’s a whole range of physical experiences that are part of this relationship.

Can we fall in love with ideas? Yes. Do we fall in love with pets? Absolutely. Do children fall in love with teddy bears? Of course. We can fall in love and we can have feelings for all kinds of things.

That doesn’t mean that it is a relationship that we can call love. It is an encounter with uncertainty. A.I. takes care of that. Just about all the major pieces that enter relationships, the algorithm is trying to eliminate — otherness, uncertainty, suffering, the potential for breakup, ambiguity. The things that demand effort.

Whereas the love model that people idealize with A.I. is a model that is pliant: agreements and effortless pleasure and easy feelings.

Spiegelman: I think that’s so interesting — and exactly also where I was hoping this conversation would go — that in thinking about whether or not we can love A.I., we have to think about what it means to love. In the same way we ask ourselves if A.I. is conscious, we have to ask ourselves what it means to be conscious.

These questions bring up so much about what is fundamentally human about us, not just the question of what can or cannot be replicated.

Perel: For example, I heard this very interesting conversation about A.I. as a spiritual mediator of faith. We turn to A.I. with existential questions: Shall I try to prolong the life of my mother? Shall I stop the machines? What is the purpose of my life? How do I feel about death?

This is extraordinary. We are no longer turning to faith healers, but we are turning to these machines for answers. But they have no moral culpability. They have no responsibility for their answer.

If I’m a teacher and you ask me a question, I have a responsibility in what you do with the answer to your question. I’m implicated.

A.I. is not implicated. And from that moment on, it eliminates the ethical dimension of a relationship. When people talk about relationships these days, they emphasize empathy, courage, vulnerability, probably more than anything else. They rarely use the words accountability and responsibility and ethics. That adds a whole other dimension to relationships that is a lot more mature than the more regressive states of “What do you offer me?”

Spiegelman: I don’t disagree with you, but I’m going to play devil’s advocate. I would say that the people who create these chatbots very intentionally try to build in ethics — at least insofar as they have guardrails around trying to make sure that the people who are becoming intimately reliant on this technology aren’t harmed by it.

That’s a sense of ethics that comes not from the A.I. itself, but from its programmers — that guides people away from conversations that might be racist or homophobic, that tries to guide people toward healthy solutions in their lives. Does that not count if it’s programmed in?

Perel: I think the “programming in” is the last thing to be programmed.

I think that if you make this machine speak with people in other parts of the world, you will begin to see how biased they are. It’s one thing we should really remember. This is a business product.

When you say you have fallen in love with A.I., you have fallen in love with a business product. That business product is not here to just teach you how to fall in love and how to develop deeper feelings of love and then how to transmit them and transport them onto other people as a mediator, a transitional object.

Children play with their little stuffed animal and then they bring their learning from that relationship onto humans. The business model is meant to keep you there. Not to have you go elsewhere. It’s not meant to create an encounter with other people.

So, you can tell me about guardrails around the darkest corners of this. But fundamentally, you are in love with a business product whose intentions and incentives are to keep you interacting only with them — except they forget everything and you have to reset them.

Then you suddenly realize that they don’t have a shared memory with you, that the shared experience is programmed. Then, of course, you can buy the next subscription and then the memory will be longer. But you are having an intimate relationship with a business product.

We have to remember that. It helps.

Spiegelman: That’s so interesting.

Perel: That’s the guardrail...

Spiegelman: Yeah. This is so crucial, the fact that A.I. is a business product. They’re being marketed as something that’s going to replace the labor force, but what they’re incredibly good at isn’t necessarily problem-solving in a way that can replace someone’s job yet.

Instead, they’re forming these very intense, deep human connections with people, which doesn’t even necessarily seem like what they were first designed to do — but just happens to be something that they’re incredibly good at. Given all these people who say they’re falling in love with them, do you think that these companions highlight our human yearning? Are we learning something about our desires for validation, for presence, for being understood? Or are they reshaping those yearnings for us in ways that we don’t understand yet?

Perel: Both. You asked me if I use A.I. — it’s a phenomenal tool. I think people begin to have a discussion when they ask: How does A.I. help us think more deeply about what is essentially human? In that way, I look at the relationship between people and the bot, but also how the bot is changing our expectations of relationships between people.

That is the most important piece, because the frictionless relationship that you have with the bot is fundamentally changing something in what we can tolerate in terms of experimentation, experience with the unknown, tolerance of uncertainty, conflict management — stuff that is part of relationships.

There is a clear sense that people are turning to A.I. with questions of love — or quests of love, more importantly — longings for love and intimacy, either because it’s an alternative to what they actually would want with a human being or because they bring to it a false vision of an idealized relationship — an idealized intimacy that is frictionless, that is effortless, that is kind, loving and reparative for many people...

Then you go and you meet a human being, and that person is not nearly as unconditional. That person has their own needs, their own longings, their own yearnings, their own objections, and you have zero preparation for that.

So, does A.I. inform us about what we are seeking? Yes. Does A.I. amplify the lack of what we are seeking? Yes. And does A.I. sometimes actually meet the need? All of it.

But it is a subjective experience, the fact that you feel certain things. That’s the next question: Because you feel it, does that make it real and true?

We have always understood phenomenology as, “It is my subjective experience, and that’s what makes it true.” But that doesn’t mean it is true.

We are so quick to want to say, because I feel close and loved and intimate, that it is love. And that is a question. (...)

Spiegelman: This is one of your fundamental ideas that has been so meaningful for me in my own life: That desire is a function of not knowing, of tolerating mystery in the other, that there has to be separation between yourself and the other to really feel eros and love. And it seems like what you’re saying is that with an A.I., there just simply isn’t the otherness.

Perel: Well, it’s also that mystery is often perceived as a bug, rather than as a feature.

by Esther Perel and Nadja Spiegelman, NY Times | Read more:
Video: Cartoontopia/Futurama via