Saturday, January 31, 2026

Kayfabe and Boredom: Why Extreme Content Sells

Pro wrestling, for all its mass appeal, cultural influence, and undeniable profitability, is still dismissed as low-brow fare for the lumpen masses; another guilty pleasure to be shelved next to soap operas and true crime dreck. This elitist dismissal rests on a cartoonish assumption that wrestling fans are rubes, incapable of recognizing the staged spectacle in front of them. In reality, fans understand perfectly well that the fights are preordained. What bothers critics is that working-class audiences knowingly embrace a form of theater more honest than the “serious” news they consume.

Once cast as the pinnacle of trash TV in the late ’90s and early 2000s, pro wrestling has not only survived the cultural sneer; it might now be the template for contemporary American politics. The aesthetics of kayfabe, of egotistical villains and manufactured feuds, now structure our public life. And nowhere is this clearer than in the figure of its most infamous graduate: Donald Trump, the two-time WrestleMania host and 2013 WWE Hall of Fame inductee who carried the psychology of the squared circle from the television studio straight into the Oval Office.

In wrestling, kayfabe refers to the unwritten rule that participants must maintain a charade of truthfulness. Whether you are allies or enemies, every association between wrestlers must unfold realistically. There are referees, who serve as avatars of fairness. We the audience understand that the outcome is choreographed and predetermined, yet we watch because the emotional drama has pulled us in.

In his own political arena, Donald Trump is not simply another participant but the conductor of the entire orchestra of kayfabe, arranging the cues, elevating the drama, and shaping the emotional cadence. Nuance dissolves into simple narratives of villains and heroes, while those who claim to deliver truth behave more like carnival barkers selling the next act. Politics has become theater, and the news that filters through our devices resembles an endless stream of storylines crafted for outrage and instant reaction. What once required substance, context, and expertise now demands spectacle, immediacy, and emotional punch.

Under Trump, politics is no longer a forum for governance but a stage where performance outranks truth and policy, and the show becomes the only reality that matters. And he learned everything he knows from the small screen.

In the pro wrestling world, one of the most important parts of the match typically happens outside of the ring and is known as the promo. An announcer with a mic, timid and small, stands there while the wrestler yells violent threats about what he’s going to do to his upcoming opponent and makes disparaging remarks about the host city, his rival’s appearance, and so on. The details don’t matter—the goal is to generate controversy and entice the viewer to buy tickets to the next staged combat. This is the most common and quickest way to generate heat (attention). When you’re selling seats, no amount of audience animosity is bad business. (...)

Kayfabe is not limited to choreographed combat. It arises from the interplay of works (fully scripted events), shoots (unscripted or authentic moments), and angles (storyline devices engineered to advance a narrative). Heroes (babyfaces, or just faces) can at the drop of a hat turn heel (villain), and heels can likewise be rehabilitated into babyfaces as circumstances demand. The blood spilled is real, and the injuries often are too, but even these unscripted outcomes are quickly woven back into the narrative machinery. In kayfabe, authenticity and contrivance are not opposites but mutually reinforcing components of a system designed to sustain attention, emotion, and belief.

by Jason Myles, Current Affairs |  Read more:
Image: uncredited
[ed. See also: Are you not entertained? (LIWGIWWF):]
***
Forgive me for quoting the noted human trafficker Andrew Tate, but I’m stuck on something he said on a right-wing business podcast last week. Tate, you may recall, was controversially filmed at a Miami Beach nightclub last weekend, partying to the (pathologically) sick beats of Kanye’s “Heil Hitler” with a posse of young edgelords and manosphere deviants. They included the virgin white supremacist Nick Fuentes and the 20-year-old looksmaxxer Braden Peters, who has said he takes crystal meth as part of his elaborate, self-harming beauty routine and recently ran someone over on a livestream.

“Heil Hitler” is not a satirical or metaphorical song. It is very literally about supporting Nazis and samples a 1935 speech to that effect. But asked why he and his compatriots liked the song, Tate offered this incredible diagnosis: “It was played because it gets traction in a world where everybody is bored of everything all of the time, and that’s why these young people are encouraged constantly to try and do the most shocking thing possible.” Cruelty as an antidote to the ennui of youth — now there’s one I haven’t quite heard before.

But I think Tate is also onto something here, about the wider emotional valence of our era — about how widespread apathy and nihilism and boredom, most of all, enable and even fuel our degraded politics. I see this most clearly in the desperate, headlong rush to turn absolutely everything into entertainment — and to ensure that everyone is entertained at all times. Doubly entertained. Triply entertained, even.

Trump is the master of this spectacle, of course, having perfected it in his TV days. The invasion of Venezuela was like a television show, he said. ICE actively seeks out and recruits video game enthusiasts. When a Border Patrol official visited Minneapolis last week, he donned an evocative green trench coat that one historian dubbed “a bit of theater.”

On Thursday, the official White House X account posted an image of a Black female protester to make it look as if she were in distress; caught in the obvious (and possibly defamatory) lie, a 30-something-year-old deputy comms director said only that “the memes will continue.” And they have continued: On Saturday afternoon, hours after multiple Border Patrol agents shot and killed an ICU nurse in broad daylight on a Minneapolis street, the White House’s rapid response account posted a graphic that read simply — ragebaitingly — “I Stand With Border Patrol.”

Are you not entertained?

But it goes beyond Trump, beyond politics. The sudden rise of prediction markets turns everything into a game: the weather, the Oscars, the fate of Greenland. Speaking of movies, they’re now often written with the assumption that viewers are also staring at their phones — stacking entertainment on entertainment. Some men now need to put YouTube on just to get through a chore or a shower. Livestreaming took off when people couldn’t tolerate even brief disruptions to their viewing pleasure.

Ironically, of course, all these diversions just have the effect of making us bored. The bar for what breaks through has to rise higher: from merely interesting to amusing to provocative to shocking, in Tate’s words. The entertainments grow more extreme. The volume gets louder. And it’s profoundly alienating to remain at this party, where everyone says that they’re having fun, but actually, internally, you are lonely and sad and do not want to listen — or watch other people listen! — to the Kanye Nazi song.

I am here to tell you it’s okay to go home. Metaphorically speaking. Turn it off. Tune it out. Reacquaint yourself with boredom, with understimulation, with the grounding and restorative sluggishness of your own under-optimized thoughts. Then see how the world looks and feels to you — what types of things gain traction. What opportunities arise, not for entertainment — but for purpose. For action.

The Adolescence of Technology

Confronting and Overcoming the Risks of Powerful AI

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed—oddly—to have failed. But in this current essay, I want to confront the rite of passage itself: to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail, in humanity’s spirit and its nobility, but we must face the situation squarely and without illusions.

As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner. In particular, I think it is critical to:
  • Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way. Many people have been thinking in an analytic and sober way about AI risks for many years, but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them. It was clear even then that a backlash was inevitable, and that the issue would become culturally polarized and therefore gridlocked. As of 2025–2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions. This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023. The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides.
  • Acknowledge uncertainty. There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood. Most obviously, AI may simply not advance anywhere near as fast as I imagine. Or, even if it does advance quickly, some or all of the risks discussed here may not materialize (which would be great), or there may be other risks I haven’t considered. No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.
  • Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!). It’s also common for regulations to backfire or worsen the problem they are intended to solve (and this is even more true for rapidly changing technologies). It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done. It is easy to say, “No action is too extreme when the fate of humanity is at stake!,” but in practice this attitude simply leads to backlash. To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI’s risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about. The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace. I’ll simply repeat here the definition that I gave in that document:
  • By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
  • In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  • It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  • It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
  • Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”

As I wrote in Machines of Loving Grace, powerful AI could be as little as 1–2 years away, although it could also be considerably further out.

Exactly when powerful AI will arrive is a complex topic that deserves an essay of its own, but for now I’ll simply explain very briefly why I think there’s a strong chance it could be very soon. (...)

In this essay, I’ll assume that this intuition is at least somewhat correct—not that powerful AI is definitely coming in 1–2 years, but that there’s a decent chance it does, and a very strong chance it comes in the next few. As with Machines of Loving Grace, taking this premise seriously can lead to some surprising and eerie conclusions. While in Machines of Loving Grace I focused on the positive implications of this premise, here the things I talk about will be disquieting. They are conclusions that we may not want to confront, but that does not make them any less real. I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.

What should you be worried about? I would worry about the following things: 
1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on.

Conversely, I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues.

Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

To be clear, I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilizational challenge. Below, I go through the five categories of risk laid out above, along with my thoughts on how to address them.

by Dario Amodei, Anthropic |  Read more:
[ed. Mr. Amodei and Anthropic in general seem to be, of all major AI companies, the most focused on safety and alignment issues. Guaranteed, everyone working in the field has read this. For a good summary and contrary arguments, see also: On The Adolescence of Technology (Zvi Mowshowitz, DMtV).]

Friday, January 30, 2026

Jesse Welles

Jesse Welles Is the Antidote To Everything That Sucks About Our Time (CA)

[ed. The power of musical protest (why isn't there more of it?). Seems like nearly half the songs of the late ’60s/early ’70s were protest songs. Wonder what that says about successive generations or society today. More videos here.]

Hawaiʻi Could See Nation’s Highest Drop In High School Graduates

Hawaiʻi Could See Nation’s Highest Drop In High School Graduates (CB)

Hawaiʻi is expected to see the greatest decline in high school graduates in the nation over the next several years, raising concerns from lawmakers and Department of Education officials about the future of small schools in shrinking communities.

Between 2023 and 2041, Hawaiʻi could see a 33% drop in the number of students graduating from high school, according to the Western Interstate Commission for Higher Education. The nation as a whole is projected to see a 10% drop in graduates, according to the commission’s most recent report, published at the end of 2024.

Image: Chart by Megan Tagami/Civil Beat. Source: Western Interstate Commission for Higher Education

Gérard DuBois for Herman Melville’s Moby Dick

The Last Flight of PAT 25

Two Army helicopter pilots went on an ill-conceived training mission. Within two hours, 67 people were dead.

One year ago, on January 29, 2025, two Army pilots strapped into a Black Hawk helicopter for a training mission out of Fort Belvoir in eastern Virginia and, two hours later, flew it into an airliner that was approaching Ronald Reagan Washington National Airport, killing all 67 aboard both aircraft. It was the deadliest air disaster in the United States in a quarter-century. Normally, in the aftermath of an air crash, government investigators take a year or more to issue a final report laying out the reasons the incident occurred. But in this case, the newly seated U.S. president, Donald Trump, held a press conference the next day and blamed the accident on DEI hiring at the FAA under the Biden and Obama administrations. “They actually came out with a directive, ‘too white,’” he claimed. “And we want the people that are competent.”

In the months that followed, major media outlets probed several real-world factors that contributed to the tragedy, including staffing shortages at FAA towers, an excess of traffic in the D.C. airspace, and the failure of the Black Hawk to broadcast its location over ADS-B — an automatic reporting system — before the collision. To address this final point, the Senate last month passed the bipartisan ROTOR Act, which would require all aircraft to use ADS-B — “a fitting way to honor the lives of those lost nearly one year ago over the Potomac River,” as bill co-sponsor Ted Cruz put it.

At a public meeting on Tuesday, the National Transportation Safety Board laid out a list of recommended changes in response to the crash, criticizing the FAA for allowing helicopters to operate dangerously close to passenger planes and for allowing professional standards to slip at the control tower.

What has gone unexamined in the public discussion of the crash, however, is why these particular pilots were on this mission in the first place, whether they were competent to do what they were trying to do, what adverse conditions they were facing, and who was in charge at the moment of impact. Ultimately, while systemic issues may have created conditions that were ripe for a fatal accident, it was human decision-making in the cockpit that was the immediate cause of this particular crash.

This account is based on documents from the National Transportation Safety Board (NTSB) accident inquiry and interviews with aviation experts. It shows that, when we focus on the specific details and facts of a case, the cause can seem quite different from what a big-picture overview might indicate. And this, in turn, suggests different logical steps that should be taken to prevent such a tragedy from happening again.

6:42 p.m.: Fort Belvoir, Virginia

The whine of the Black Hawk’s engine increased in pitch, and the whump-whump of its four rotor blades grew louder, as the matte-black aircraft lifted into the darkened sky above the single mile-long runway at Davison Army Airfield in Fairfax County, Virginia, about 25 miles southwest of Washington, D.C.

The UH-60, as it’s formally designated, is an 18,000-pound aircraft that entered service in 1979 as a tactical transport, used primarily for moving troops and equipment. This one belonged to Company B of the 12th Aviation Battalion, whose primary mission is to transport government VIPs, including Defense Department officials, members of Congress, and visiting dignitaries. Tonight’s flight would operate as PAT 25, for “Priority Air Transport.”

Black Hawks are typically flown by two pilots. The pilot in command, or PIC, sits in the right-hand seat. Tonight, that role was filled by 39-year-old chief warrant officer Andrew Eaves. Warrant officers rank between enlisted personnel and commissioned officers; it’s the warrant officers who carry out the lion’s share of a unit’s operational flying. When not flying VIPs, Eaves served as a flight instructor and a check pilot, providing periodic evaluation of the skills of other pilots. A native of Mississippi, he had 968 hours of flight experience and was considered a solid pilot by others in the unit.

Before he took off, Eaves’ commander had discussed the flight with him and admonished him to “not become too fixated on his evaluator role” and to remain “in control of the helicopter,” according to the NTSB investigation.

His mission was to give a check ride to Captain Rebecca Lobach, the pilot sitting in the left-hand seat. Lobach was a staff officer, meaning that her main role in the battalion was managerial. Nevertheless, she was expected to maintain her pilot qualifications and, to do so, had to undergo a number of annual proficiency checks. Tonight’s three-hour flight was intended to get Lobach her annual sign-off for basic flying skills and for the use of night-vision goggles, or NVGs. To accommodate that, the flight was taking off an hour and 20 minutes after sunset.

Both pilots wore AN/AVS-6(V)3 Night Vision Goggles, which look like opera glasses and clip onto the front of a pilot’s helmet. They gather ambient light, whether from the moon or stars or from man-made sources; intensify it; and display it through the lens of each element. The eyepiece doesn’t sit directly on the face but about an inch away, so the pilot can look down under it and see the instrument panel.

Night-vision goggles have a narrow field of view, just 40 degrees compared to the 200-degree range of normal vision, which makes it harder for pilots to maintain full situational awareness. They have to pay attention to obstacles and other aircraft outside the window, and they also have to keep track of what the gauges on the panel in front of them are saying: how fast they’re going, for instance, and how high. There’s a lot to process, and time is of the essence when you’re zooming along at 120 mph while lower than the tops of nearby buildings. To help with situational awareness, Eaves and Lobach were accompanied by a crew chief, Staff Sergeant Ryan O’Hara, sitting in a seat just behind the cockpit, where he would be able to help keep an eye out for trouble.

The helicopter turned to the south as it climbed, then flew along the eastern shore of the Potomac until the point where the river makes a big bend to the east. Eaves banked to the right and headed west toward the commuter suburb of Vicksburg, where the lights of house porches and street lamps seemed to twinkle as they fell in and out of the cover of the bare tree branches.

7:11 p.m.: Approaching Greenhouse Airport, Stevensburg, Virginia

PAT 25 followed the serpentine course of the Rapidan River through the hills and farm fields of the Piedmont. At this point, Eaves was not only the pilot in command, but also the pilot flying, meaning that he had his hands on the controls that guide the aircraft’s speed and direction and his feet on the rudder pedals that keep the helicopter “in trim” — that is, lined up with its direction of flight. Lobach played a supporting role, working the radio, keeping an eye out for obstacles and other traffic, and figuring out their location by referencing visible landmarks.

Lobach, 28, had been a pilot for four years. She’d been an ROTC cadet at the University of North Carolina at Chapel Hill, which she graduated from in 2019. Both her parents were doctors; she’d dreamed of a medical career but eventually realized that she couldn’t pursue one in the Army. According to her roommate, “She did not have a huge, massive passion” for aviation but chose it because it was the closest she could get to practicing medicine, under the circumstances. “She badly wanted to be a Black Hawk pilot because she wanted to be a medevac unit,” he told NTSB investigators. After she completed flight training at Fort Rucker, she was stationed at Fort Belvoir, where she joined the 12th Aviation Battalion and was put in charge of the oil-and-lubricants unit. One fellow pilot in the unit described her to the NTSB as “incredibly professional, very diligent and very thorough.”

In addition to her official duties, Lobach served as a volunteer social liaison at the White House, where she regularly represented the Army at Medal of Honor ceremonies and state dinners. She was both a fitness fanatic and a baker, known for providing fresh sourdough bread to her unit. She had started dabbling in real-estate investments and looked forward to moving in with her boyfriend of one year, another Army pilot with whom she talked about having “lots and lots of babies.” She was planning to leave the service in 2027 and had already applied for medical school at Mount Sinai. Helicopter flying was not something she intended to pursue.

Though talented as a manager, she wasn’t much of a pilot. Helicopter flying is an extremely demanding feat of coordination and balance, akin to juggling and riding a unicycle at the same time. For Lobach, the difficulty was compounded by the fact that she had trained on highly automated, relatively easy-to-fly helicopters at Fort Rucker and then been assigned to an older aircraft, the Black Hawk L or “Lima” model, at Fort Belvoir. Unlike newer models, which can maintain their altitude on autopilot, the Lima requires constant care and attention, and Lobach struggled to master it. One instructor described her skills as “well below average,” noting that she had “lots of difficulties in the aircraft.” Three years before, she’d failed the night-vision evaluation she was taking tonight.

Before the flight, Eaves had told his girlfriend that he was concerned about Lobach’s capability as a pilot and that, skill-wise, she was “not where she should be.”

It’s not uncommon for pilots to struggle during the early phase of their career. But Lobach’s development had been particularly slow. In her five years in the service, she had accumulated just 454 hours of flight time, and she wasn’t clocking more very quickly. The Army requires officers in her role to fly at least 60 hours a year, but in the past 12 months, she’d flown only 56.7. Her superiors had made an exception for her because in March she’d had knee surgery for a sports injury, preventing her from flying for three months. The waiver made her technically qualified to fly, but it didn’t change the fact that she was rustier than pilots were normally allowed to become.

If she’d been keen on flying, she could have used every moment of this flight to hone her skills by taking the controls herself. But she was content to let Eaves do the flying during the first part of the trip.

Drawing near to Greenhouse Airport, a small, private grass runway near a plant nursery, they navigated via an old-fashioned technique called pilotage, using landmarks and dead reckoning to find their way from point to point. Coming in for their first landing of the night, they were looking for the airstrip’s signature greenhouse complex.

Lobach: That large lit building may be part of it.

Eaves: It does look like a greenhouse, doesn’t it?

Lobach: Yeah, it does, doesn’t it? We can start slowing back.

Eaves: All right, slowing back.

As they circled around the runway, Eaves commented that the lighting of the greenhouse building was so intense that it was blinding in the NVGs, and Lobach agreed. Eaves positioned the helicopter a few hundred feet above the landing zone and asked Lobach to show him where it was. After she did so correctly, he told her to take the controls. This process followed a formalized set of acknowledgements to make sure that both parties understood who was in control of the aircraft.

Eaves: You’ve got the flight controls.

Lobach: I’ve got the controls.

As Lobach eased the helicopter toward the ground, Eaves and Crew Chief O’Hara called out items from the landing checklist.

O’Hara: Clear of obstacles on the left.

Lobach: Thank you. Coming forward.

Eaves: Clear down right.

Lobach: Nice and wide.

Eaves: 50 feet.

Lobach: 30 feet.

They touched down. One minute and 42 seconds after passing control to Lobach, Eaves took it back again. As they sat on the ground with their rotor whirring, they discussed the fuel remaining aboard the aircraft and the direction they would travel in during the next segment of their flight. Finally, after six minutes, Eaves signaled that they were ready to take off again.

Eaves: Whenever you’re ready, ma’am.

Lobach: Okay, let’s do it.

Eaves’s deference to Lobach was symptomatic of what is known among psychologists as an “inverted authority gradient.” Although he was the pilot in command, both responsible for the flight and in a position of authority over others on it, Eaves held a lesser rank than Lobach and so in a broader context was her subordinate. In moments of high stress, this ambiguity can muddy the waters as to who is supposed to be making crucial decisions.

Eaves, Lobach, and O’Hara ran through their checklists, and Eaves eased the Black Hawk up into the night sky.

by Jeff Wise, Intelligencer |  Read more:
Image: Intelligencer; Photo: Matt Hecht
[ed. See also: Responders recall a mission of recovery and grief a year after the midair collision near DC (AP).]

Here Come the Beetles

The nearly 100-year-old Wailua Municipal Golf Course is home to more than 580 coconut trees. It’s also one of Kaua‘i’s most visible sites for coconut rhinoceros beetle damage.

At the course, located makai of Kūhiō Highway, trees that would normally have full, verdant leaves are dull and have V-shaped cuts in their fronds. Some are bare and look more like matchsticks.

It’s not for lack of trying to mitigate the invasive pest. The trees’ crowns have been sprayed with a pesticide twice, and the trunks were injected twice with a systemic pesticide for longer-term protection.

The Kaua‘i Department of Parks & Recreation maintains that even though the trees still look damaged, the treatments are working. Staff have collected 1,679 fallen, dead adult beetles over the last three years.

The most recent treatment, a systemic pesticide that travels through the trees’ vascular systems, was done in January 2025. While crown sprays kill the beetle on contact, systemic pesticides require the beetles to feed from the trees to die. The bugs eat the trees’ hearts — where new fronds develop — so it can take months for foliage damage to appear.
 
“The general public sees these trees that are damaged and thinks, ‘Oh my goodness they’re getting whacked,’ but in actuality, we need them to get whacked to kill (the beetles),” said Patrick Porter, county parks director.

But with the beetles continuing to spread around the island, the county is increasingly turning its attention to green waste, mulch piles and other breeding sites, where beetles spend four to six months growing from eggs to adults. A single adult female beetle can lay up to 140 eggs in her lifetime.

“The reality is if you don’t go after the larvae and you don’t go after your mulch cycle, you’re just pissing in the wind,” said Kaua‘i County Council member Fern Holland. “Because there are just going to be hundreds and hundreds of them hatching all the time, and you can’t go after all of them.” (...)

Last May, the County Council allocated $100,000 for invasive species and another $100,000 for CRB. It was the first time the county designated funds specifically to address the beetle.

Niki Kunioka-Volz, economic development specialist with the Kaua‘i Office of Economic Development, said none of that funding has been spent yet. They’re considering using it to help get the breeding site at the Wailua golf course under control, such as by purchasing an air curtain burner, a fan-powered incinerator of sorts to dispose of green waste. The burner could also be a tool for the broader community. (...)

In 2024, the county received $200,000 from the state Department of Agriculture. That money was used for a CRB outreach campaign, training CRB detection dogs and distributing deterrent materials. State funding was also expected to help the county purchase a curtain burner, but that plan fell through.

Earlier this month, state legislators threatened to cut invasive species funding from the newly expanded Hawai‘i Department of Agriculture and Biosecurity over its slow progress in curbing threats such as coconut rhinoceros beetles.

“I’d like to see the pressure put on them to release the funds to the counties,” Holland said.

by Noelle Fujii-Oride, Honolulu Civil Beat | Read more:
Image: Kevin Fujii/David Croxford/Civil Beat
[ed. Tough, ugly, able to leap sleeping bureaucrats in a single bound. See also: As Palm-Killing Beetles Spread On Big Island, State Action Is Slow (CB):]
***
It has been nearly two years since the first coconut rhinoceros beetle was discovered on Hawaiʻi island. And yet, despite ongoing concern by residents, the state is moving slowly in devising its response.

Seven months ago, the state’s Department of Agriculture and Biosecurity said it would begin working to stop the spread of CRB, within and beyond North Kona. But a meeting of the agency’s board Tuesday marked the first concrete step to do so by regulators. Now, as agriculture department staff move to streamline and resolve apparent issues in the proposed regulations, it will likely take until March for the board to consider implementing them.

Many of the attendees at Tuesday’s meeting, including residents of other islands, said that the state is lagging on its pledge to regulate the movement of agricultural materials while the destructive pest is spreading and killing both the island’s coconut palms and its endangered, endemic loulu palms.

The First Two Years

Before making landfall on Hawaiʻi island in 2023, the beetles spent almost a decade in apparent confinement on Oʻahu.

At first they appeared to be isolated to Waikoloa. Then, in March of last year, larvae and beetles were discovered at Kona International Airport and the state-owned, 179-acre Keāhole Agriculture Park, before spreading further.

In response, the county implemented a voluntary order to discourage the movement of potentially-infested live plants, mulch and green waste, and other landscaping materials such as compost from the area in June 2025. The order was described as “a precursor to a mandatory compliance structure” to be implemented by the state, according to a press release from the time. (...)

The board spent about an hour considering the petition and hearing testimony. And while many who testified made recommendations about actual protocol that might be put into place, the board merely voted to move forward in the process. So it’s not yet clear whether it will adopt the Big Island petitioner’s proposed rules or create its own.

If You Want That Tattoo Erased It’s Going to Hurt and It’s Going to Cost You

Colin Farrell’s had it done — many times. So have Angelina Jolie and Megan Fox. Heck, even Bart Simpson did.

Whether it’s Marilyn Monroe’s face, Billy Bob Thornton’s name, a sultry rose or even Bart’s partially inscribed homage to his mother, some tattoos simply have to go, for one reason or another.

But the process of taking them off is longer, much more costly and, ouch, far more painful than getting them put on, according to professionals in the industry.

Also, due to health reasons, some souls who braved the ink needle should be wary of the laser when having their body art erased or covered up.

Tattoos have been around for centuries

The oldest known tattoos were found on remains of a Neolithic man who lived in the Italian Alps around 3,000 B.C. Many mummies from ancient Egypt also have tattoos, as do remains from cultures around the world.

Tattoo removal likely is almost as old as the practice of inking and included scraping the skin to get the pigments off or out.

A more “civilized” method evolved in the 1960s when Leon Goldman, a University of Cincinnati dermatologist, used “hot vapor bursts” from a laser on tattoos and the skin that bore them.

Many choose tattoos to honor someone

A 2023 survey by the Pew Research Center determined that 32% of adults in the United States have tattoos. About 22% have more than one, according to the survey.

Honoring or remembering someone or something accounts for the biggest reason Americans get their first tattoo. About 24% in the survey regret getting them.

Tracy Herrmann, 54, of Plymouth, Michigan, just west of Detroit, has eight tattoos and is in the process of getting four phrases, including “One step at a time,” “Surrender,” and “Through it all,” removed from her feet and arms.

She started inking up about six years ago and says she doesn’t regret getting tattoos.

“Maybe a different choice, maybe,” Herrmann said following her fourth tattoo removal session at Chroma Tattoo Studio & Laser Tattoo Removal in Brighton, Michigan.

“There was a period in my life that I felt I needed some extra reminder,” Hermann said. “I thought I would just embrace the period in my life, so that helped and then just to surrender and give it over to God. So, half of them were really, really pivotal to getting me over a hump in my life.”

Boredom among reasons to remove tats

Herrmann says the four getting lasered are part of her past and that’s where she wants them to stay.

“Now, I just want to move forward and go back to the original skin I was born with,” she said. “But the other four I’m going to keep. They still mean a lot to me, but they’re more hidden.”

Reasons for getting a tattoo removed are as varied and personal as the reasons for getting them in the first place, says Ryan Wright, a registered nurse and owner of Ink Blasters Precision Laser Tattoo Removal in Livonia, Michigan.

“A lot of people, when they get a new tattoo that makes some of their old tattoos look bad, they get (the older tattoos) removed or reworked,” Wright said.

Chroma owner Jaime Howard says boredom plays a role, too.

“They got a tattoo off a whim and they’re like ‘hey, I’m really bored with this. I don’t want this anymore,’” Howard said. “It’s not about hating their tattoo, it’s about change for yourself.”

Like snapping a ‘rubber band’ on your skin

Howard and Wright, like many who perform laser removals, use something called a Q-switching, or quality switching, laser. It concentrates the light energy into intense short bursts or pulses.

“It’s very painful. Nine out of 10,” Wright said. “It kind of feels like a rubber band being snapped on your skin with hot bacon grease.”

Howard has had some of her tattoos removed and admits the procedure is painful.

But “you get through it,” she said. “A couple of days later you’re still feeling the sunburn, but it’s OK. If you want it bad enough, you’ll take it off because that’s what you want.”

Light and heat from the laser break the ink into particles small enough to be absorbed by the body and later excreted as waste.

It’s not a “one and done,” Wright said. Tattoo removal can take eight to 12 treatments or more. A new tattoo can go over the old one once the skin has had time to sufficiently heal.

Howard consulted with Herrmann as her fourth session at Chroma began. They spoke about the previous session and how far along they were with the ink removal. Both then donned dark sunglasses to protect their eyes from the brightness of the laser. Herrmann winced. Seconds later, it was done. But she still has more sessions ahead.

“Oh gosh, it’s a 10 when you’re getting it done,” Herrmann said of the pain. “It’s pretty intense. It’s doable. I know price is sometimes an issue, but it’s worth it.”

Removal can be costly

Howard says the minimum she charges is $100 per session. Wright says that on a typical day he does about a dozen treatments and that cost depends on the square-inch size of the tattoo.

“The cost is really the technology in the laser,” Wright said. “It’s not like a time thing. Most treatments are under a minute. You’re paying for the technology and the person who knows how to use the technology. You can damage the skin if you don’t know what you’re doing.”

by Corey Williams, AP |  Read more:
Image: the author

Thursday, January 29, 2026

What is College For in the Age of AI?

When I left for college in the fall of 1991, the internet era was just beginning. By sophomore year, I received my first email address. By junior year, the first commercial web browser was released. The summer after graduation, I worked as a reporter at the Arizona Republic covering the internet’s rise in our everyday lives, writing about the opening of internet cafés and businesses launching their first websites. I was part of an in-between class of graduates who went off to college just before a new technology transformed what would define our careers.

So when Alina McMahon, a recent University of Pittsburgh graduate, described her job search to me, I immediately recognized her predicament. McMahon began college before AI was a thing. Three and a half years later, she graduated into a world where it was suddenly everywhere. McMahon majored in marketing, with a minor in film and media studies. “I was trying to do the stable option,” she said of her business degree. She followed the standard advice given to all undergraduates hoping for a job after college: Network and intern. Her first “coffee chat” with a Pitt alumnus came freshman year; she landed three internships, including one in Los Angeles at Paramount in media planning. There she compiled competitor updates and helped calculate metrics for which billboard advertisements the company would buy.

But when she started to apply for full-time jobs, all she heard back — on the rare occasions she heard anything — was that roles were being cut, either because of AI or outsourcing. Before pausing her job search recently, McMahon had applied to roughly 150 jobs. “I know those are kind of rookie numbers in this environment,” she said jokingly. “It’s very discouraging.”

McMahon’s frustrations are pretty typical among job seekers freshly out of college. There were 15 percent fewer entry-level and internship job postings in 2025 than the year before, according to Handshake, a job-search platform popular with college students; meanwhile, applications per posting rose 26 percent. The unemployment rate for new college graduates was 5.7 percent in December, more than a full percentage point above the national average and higher even than what high-school graduates face.

How much AI is to blame for the fragile entry-level job market is unclear. Several research studies show AI is hitting young college-educated workers disproportionately, but broader economic forces are part of the story, too. As Christine Cruzvergara, Handshake’s chief education-strategy officer, told me, AI isn’t “taking” jobs so much as employers are “choosing” to replace parts of jobs with automation rather than redesign roles around workers. “They’re replacing people instead of enabling their workforce,” she said.

The fact that Gen-Z college interns and recent graduates are the first workers being affected by AI is surprising. Historically, major technological shifts favored junior employees because they tend to make less money and be more skilled and enthusiastic in embracing new tools. But a study from Stanford’s Digital Economy Lab in August showed something quite different. Employment for Gen-Z college graduates in AI-affected jobs, such as software development and customer support, has fallen by 16 percent since late 2022. Meanwhile, more experienced workers in the same occupations aren’t feeling the same impact (at least not yet), said Erik Brynjolfsson, an economist who led the study. Why the difference? Senior workers, he told me, “learn tricks of the trade that maybe never get written down,” which allow them to better compete with AI than those new to a field who lack such “tacit knowledge.” For instance, that practical know-how might allow senior workers to better understand when AI is hallucinating, wrong, or simply not useful.

For employers, AI also complicates an already delicate calculus around hiring new talent. College interns and recent college graduates require — as they always have — time and resources to train. “It’s real easy to say ‘college students are expensive,’” Simon Kho told me in an interview. “Not from a salary standpoint, but from the investment we have to make.” Until recently, Kho ran early career programs at Raymond James Financial, where it took roughly 18 months for new college hires to pay off in terms of productivity. And then? “They get fidgety,” he added, and look for other jobs. “So you can see the challenges from an HR standpoint: ‘Where are we getting value? Will AI solve this for us?’”

Weeks after Stanford’s study was released, another by two researchers at Harvard University also found that less experienced employees were more affected by AI. And it revealed that where junior employees went to college influenced whether they stayed employed. Graduates from elite and lower-tier institutions fared better than those from mid-tier colleges, who experienced the steepest drop in employment. The study didn’t spell out why, but when I asked one of the authors, Seyed Mahdi Hosseini Maasoum, he offered a theory: Elite graduates may have stronger skills; lower-tier graduates may be cheaper. “Mid-tier graduates end up somewhat in between — they’re relatively costly to hire but not as skilled as graduates of the very prestigious universities — so they are hit the hardest,” Maasoum wrote to me.

Just three years after ChatGPT’s release, the speed of AI’s disruption on the early career job market is even catching the attention of observers at the highest level of the economy. In September, Fed chair Jerome Powell flagged the “particular focus on young people coming out of college” when asked about AI’s effects on the labor market. Brynjolfsson told me that if current trends hold, the impact of AI will be “quite a bit more noticeable” by the time the next graduating class hits the job market this spring. Employers already see it coming: In a recent survey by the National Association of Colleges and Employers, nearly half of 200 employers rated the outlook for the class of 2026 as poor or fair, the most pessimistic outlook since the first year of the pandemic.

The upheaval in the early career job market has caught higher education flat-footed. Colleges have long had an uneasy relationship with their unofficial role as vocational pipelines. When generative AI burst onto campuses in 2022, many administrators and faculty saw it primarily as a threat to learning — the world’s greatest cheating tool. Professors resurrected blue books for in-classroom exams and demanded that AI tools added to software be blocked in their classes.

Only now are colleges realizing that the implications of AI are much greater and are already outrunning their institutional ability to respond. As schools struggle to update their curricula and classroom policies, they also confront a deeper problem: the suddenly enormous gap between what they say a degree is for and what the labor market now demands. In that mismatch, students are left to absorb the risk. Alina McMahon and millions of other Gen-Zers like her are caught in a muddled in-between moment: colleges only just beginning to think about how to adapt and redefine their mission in the post-AI world, and a job market that’s changing much, much faster.

What feels like a sudden, unexpected dilemma for Gen-Z graduates has only been made worse by several structural changes across higher education over the past decade.

by Jeffrey Selingo, Intelligencer | Read more:
Image: Intelligencer; Photos: Getty

Frito Pie

Not quite nachos, and not quite pie, this comforting casserole is a cheesy and crunchy delight that is thought to have roots in both Texas and New Mexico. In its most classic (and some might say best) form, a small bag of Fritos corn chips is split down the middle, placed in a paper boat and piled high with chili, topped with cheese, diced onion, pickled jalapeños, sour cream and pico de gallo, then eaten with a plastic fork. (It is often called a “walking taco,” because it’s eaten on-the-go, at sporting events and fairs.) This version is adapted to feed a crowd: The Fritos, Cheddar and chili — made with ground beef, pinto beans, taco seasoning and enchilada sauce — are layered in a casserole dish, baked, then topped with a frenzy of fun toppings. For maximum crunch, save a cup of Fritos for topping as you eat.

Ingredients

Yield: 6 to 8 servings
1 tablespoon olive or vegetable oil
1 pound ground beef, preferably 20-percent fat
1 medium yellow onion, diced
1 (1-ounce) packet taco seasoning (or 3 tablespoons of a homemade taco seasoning)
2 (15-ounce) cans pinto beans, drained and rinsed
1 (19-ounce) can red enchilada sauce (or 2½ cups of homemade enchilada sauce)
2 (9-ounce) packages or 1 (18-ounce) package Fritos, 1 cup reserved for serving (8 to 10 cups)
8 ounces shredded Cheddar (about 2 cups)
Diced white onion, sliced scallions, pickled jalapeños, sour cream or pico de gallo, or a combination, for serving (optional)

Preparation 

Step 1: Heat the oven to 400 degrees. Coat a 9-by-13-inch baking dish with cooking spray.

Step 2: In a large Dutch oven or heavy-bottomed skillet, heat the oil over medium-high. Add the beef and onion, breaking up the meat with a wooden spoon. Cook, stirring occasionally, until the meat is browned and the onion is translucent, 8 to 10 minutes. Lower the heat if the meat is browning too quickly.

Step 3: Sprinkle the taco seasoning over the meat mixture and pour in ¾ cup of water; mix well. Bring to a simmer and cook until the liquid thickens and coats the pan, scraping up any browned bits, 2 to 3 minutes. Add the beans and enchilada sauce, stirring until combined. Bring to a simmer and cook for 5 minutes.

Step 4: Assemble the pie: Sprinkle half of the Fritos in the prepared baking dish, followed by half of the Cheddar. Cover with all of the meat filling. Finally, add the remaining Fritos (minus the reserved cup) and Cheddar.

Step 5: Bake until the cheese is melted and bubbly, 7 to 10 minutes. Rest for 5 minutes, then add the desired toppings to the casserole, or spoon into individual bowls and have eaters top as they please. Add reserved Fritos for more crunch, if desired.

by Kia Damon, NY Times |  Read more:
Image: Christopher Testani for The New York Times. Food Stylist: Simon Andrews.
[ed. Forgot about these. Should be great for Seattle's upcoming Super Bowl win.] 

Anne Zahalka - The Mathematician

Wednesday, January 28, 2026

Greg Girard - Hong Kong Cafe, Vancouver, Canada, 1975

On the Falsehoods of a Frictionless Relationship


To love is to be human. Or is it? As human-chatbot relationships become more common, the Times Opinion culture editor Nadja Spiegelman talks to the psychotherapist Esther Perel about what really defines human connection, and what we’re seeking when we look to satisfy our emotional needs on our phones.

Spiegelman: ...I’m curious about how you feel, in general, about people building relationships with A.I. Are these relationships potentially healthy? Is there a possibility for a relationship with an A.I. to be healthy?

Perel: Maybe before we answer it in this yes or no, healthy or unhealthy, I’ve been trying to think to myself, depending on how you define relationships, that will color your answer about what it means when it’s between a human and A.I.

But first, we need to define what goes on in relationships or what goes on in love. The majority of the time when we talk about love in A.I. or intimacy in A.I., we talk about it as feelings. But love is more than feelings.

Love is an encounter. It is an encounter that involves ethical demands, responsibility, and that is embodied. That embodiment means that there is physical contact, gestures, rhythms, gaze, frottement. There’s a whole range of physical experiences that are part of this relationship.

Can we fall in love with ideas? Yes. Do we fall in love with pets? Absolutely. Do children fall in love with teddy bears? Of course. We can fall in love and we can have feelings for all kinds of things.

That doesn’t mean that it is a relationship that we can call love. It is an encounter with uncertainty. A.I. takes care of that. Just about all the major pieces that enter relationships, the algorithm is trying to eliminate — otherness, uncertainty, suffering, the potential for breakup, ambiguity. The things that demand effort.

Whereas the love model that people idealize with A.I. is a model that is pliant: agreements and effortless pleasure and easy feelings.

Spiegelman: I think that’s so interesting — and exactly also where I was hoping this conversation would go — that in thinking about whether or not we can love A.I., we have to think about what it means to love. In the same way we ask ourselves if A.I. is conscious, we have to ask ourselves what it means to be conscious.

These questions bring up so much about what is fundamentally human about us, not just the question of what can or cannot be replicated.

Perel: For example, I heard this very interesting conversation about A.I. as a spiritual mediator of faith. We turn to A.I. with existential questions: Shall I try to prolong the life of my mother? Shall I stop the machines? What is the purpose of my life? How do I feel about death?

This is extraordinary. We are no longer turning to faith healers, but we are turning to these machines for answers. But they have no moral culpability. They have no responsibility for their answer.

If I’m a teacher and you ask me a question, I have a responsibility in what you do with the answer to your question. I’m implicated.

A.I. is not implicated. And from that moment on, it eliminates the ethical dimension of a relationship. When people talk about relationships these days, they emphasize empathy, courage, vulnerability, probably more than anything else. They rarely use the words accountability and responsibility and ethics. That adds a whole other dimension to relationships that is a lot more mature than the more regressive states of “What do you offer me?”

Spiegelman: I don’t disagree with you, but I’m going to play devil’s advocate. I would say that the people who create these chatbots very intentionally try and build in ethics — at least insofar as they have guide rails around trying to make sure that the people who are becoming intimately reliant on this technology aren’t harmed by it.

That’s a sense of ethics that comes not from the A.I. itself, but from its programmers — that guides people away from conversations that might be racist or homophobic, that tries to guide people toward healthy solutions in their lives. Does that not count if it’s programmed in?

Perel: I think the “programming in” is the last thing to be programmed.

I think that if you make this machine speak with people in other parts of the world, you will begin to see how biased they are. It’s one thing we should really remember. This is a business product.

When you say you have fallen in love with A.I., you have fallen in love with a business product. That business product is not here to just teach you how to fall in love and how to develop deeper feelings of love and then how to transmit them and transport them onto other people as a mediator, a transitional object.

Children play with their little stuffed animal and then they bring their learning from that relationship onto humans. The business model is meant to keep you there. Not to have you go elsewhere. It’s not meant to create an encounter with other people.

So, you can tell me about guardrails around the darkest corners of this. But fundamentally, you are in love with a business product whose intentions and incentives are to keep you interacting only with them — except they forget everything and you have to reset them.

Then you suddenly realize that they don’t have a shared memory with you, that the shared experience is programmed. Then, of course, you can buy the next subscription and then the memory will be longer. But you are having an intimate relationship with a business product.

We have to remember that. It helps.

Spiegelman: That’s so interesting.

Perel: That’s the guardrail...

Spiegelman: Yeah. This is so crucial, the fact that A.I. is a business product. They’re being marketed as something that’s going to replace the labor force, but instead, what they’re incredibly good at isn’t necessarily being able to problem solve in a way where they can replace someone’s job yet.

Instead, they’re forming these very intense, deep human connections with people, which doesn’t even necessarily seem like what they were first designed to do — but just happens to be something that they’re incredibly good at. Given all these people who say they’re falling in love with them, do you think that these companions highlight our human yearning? Are we learning something about our desires for validation, for presence, for being understood? Or are they reshaping those yearnings for us in ways that we don’t understand yet?

Perel: Both. You asked me if I use A.I — it’s a phenomenal tool. I think people begin to have a discussion when they ask: How does A.I. help us think more deeply on what is essentially human? In that way, I look at the relationship between people and the bot, but also how the bot is changing our expectations of relationships between people.

That is the most important piece, because the frictionless relationship that you have with the bot is fundamentally changing something in what we can tolerate in terms of experimentation, experience with the unknown, tolerance of uncertainty, conflict management — stuff that is part of relationships.

There is a clear sense that people are turning to A.I. with questions of love — or quests of love, more importantly — longings for love and intimacy, either because it’s an alternative to what they actually would want with a human being or because they bring to it a false vision of an idealized relationship — an idealized intimacy that is frictionless, that is effortless, that is kind, loving and reparative for many people...

Then you go and you meet a human being, and that person is not nearly as unconditional. That person has their own needs, their own longings, their own yearnings, their own objections, and you have zero preparation for that.

So, does A.I. inform us about what we are seeking? Yes. Does A.I. amplify the lack of what we are seeking? Yes. And does A.I. sometimes actually meet the need? All of it.

But it is a subjective experience, the fact that you feel certain things. That’s the next question: Because you feel it, does that make it real and true?

We have always understood phenomenology as, “It is my subjective experience, and that’s what makes it true.” But that doesn’t mean it is true.

We are so quick to want to say, because I feel close and loved and intimate, that it is love. And that is a question. (...)

Spiegelman: This is one of your fundamental ideas that has been so meaningful for me in my own life: That desire is a function of not knowing, of tolerating mystery in the other, that there has to be separation between yourself and the other to really feel eros and love. And it seems like what you’re saying is that with an A.I., there just simply isn’t the otherness.

Perel: Well, it’s also that mystery is often perceived as a bug, rather than as a feature.

by Esther Perel and Nadja Spiegelman, NY Times | Read more:
Video: Cartoontopia/Futurama via

Why Even the Healthiest People Hit a Wall at Age 70

Are we currently determining how much of aging is down to lifestyle changes and interventions, and how much is basically your genetic destiny?


[Transcript:] We are constantly being bombarded with health and lifestyle advice at the moment. I feel like I cannot open my social media feeds without seeing adverts for supplements or diet plans or exercise regimes. And I think that this really is a distraction from the big goals of longevity science. This is a really difficult needle to thread when it comes to talking about this stuff because I'm a huge advocate for public health. I think if we could help people eat better, if we could help 'em do more exercise, if we could help 'em quit smoking, this would have enormous effects on our health, on our economies all around the world. But this sort of micro-optimization, these three-hour long health podcasts that people are digesting on a daily basis these days, I think we're really majoring in the minors. We're trying to absolutely eke out every last single thing when it comes to living healthily. And I think the problem is that there are real limits to what we can do with health advice. 

So for example, there was a study that came out recently that was all over my social media feeds. And the headline was that by eating the best possible diet, you can double your chance of aging healthily. But I decided to dig into the results table. The healthiest diet was something called the Alternative Healthy Eating Index, or AHEI. And even among the people sticking most closely to this best diet, the top 20% of adherence to the AHEI, only 13.6% made it to 70 years old without any chronic diseases. That means that over 85% of the people sticking to the best diet, according to this study, got to the age of 70 with at least something wrong with them. And that shows us that optimizing diet can only go so far.
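The "over 85%" figure above is just the complement of that 13.6%, and it's worth computing explicitly because the headline framed the result in relative rather than absolute terms. A minimal sketch of the arithmetic:

```python
# The headline claim was relative ("double your chance of aging healthily");
# the absolute numbers from the study's results table are less impressive.
# Of the top-20% AHEI-adherence group, only 13.6% reached 70 disease-free.
disease_free_pct = 13.6
with_condition_pct = 100.0 - disease_free_pct  # everyone else in that group

print(f"{with_condition_pct:.1f}% reached 70 with at least one chronic condition")
```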

We're not talking about immortality or living to 120 here. If you wanna be 70 years old and in good enough health to play with your grandkids, I cannot guarantee that you can do that no matter how good your diet is. And that's why we need longevity medicine to help keep people healthier for longer. And actually, even with this idea of 120- or 150-year lifespans, you know, immortality even is a word that's often thrown around, I think the main thing we're trying to do is get people to 80, 90 years old in good health. 'Cause we already know that most people alive today, when they reach that age, are unfortunately gonna be frail. They're probably gonna be suffering from two or three or four different diseases simultaneously. And what we wanna do is try and keep people healthier for longer. And by doing that, they probably will live longer, but kind of as a side effect.

If you look at photographs of people from the past, they often look older than people in the present day who are the same age. And part of that is the terrible fashion choices that people made in the past. And we can look back and, you know, understand the mistakes they made with hindsight. But part of that actually is aging biology. I think the fact that people can be different biological ages at the same chronological age is something that's really quite intuitive. All of us know people who've waltzed into their 60s looking great and, you know, basically as fit as someone in their 40s or 50s. And we know similar people who have also gone into their 60s, but they're looking haggard, they've got multiple different diseases, they're already struggling through life.

In the last decade, scientists have come up with various measures of what's called biological age, as distinct from chronological age. So your chronological age is just how many candles there are on your birthday cake. And obviously, you know, most of us are familiar with that. But the idea of biological age is to look inside your cells, look inside your body, and work out how old you are on a biological level. Now we aren't perfect at doing this yet, but we do have a variety of different measures. We can use blood tests, we can use what are called epigenetic tests, or we can do things that are far more basic and functional: grip strength, for example, declines with age. And by comparing the value of something like your grip strength to an average person of a given age, we can assign you a biological age value. And I think the ones that are getting the most buzz at the moment, within the scientific community but also all around the internet, are these epigenetic age tests.
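[ed. The functional approach described above, comparing a measurement like grip strength to population norms, can be sketched as a simple nearest-reference lookup. The reference values below are invented for illustration; real normative data would come from large cohort studies.]

```python
# Assign a "biological age" by finding which reference age's average
# grip strength best matches the measured value.
# NOTE: this table is hypothetical, not real cohort data.
GRIP_NORMS_KG = {  # reference age -> average grip strength in kg (invented)
    30: 45.0, 40: 42.0, 50: 38.0, 60: 33.0, 70: 27.0, 80: 21.0,
}

def biological_age_from_grip(grip_kg: float) -> int:
    """Return the reference age whose average grip strength is closest."""
    return min(GRIP_NORMS_KG, key=lambda age: abs(GRIP_NORMS_KG[age] - grip_kg))
```

Under this toy table, a person with a 27 kg grip would be assigned a biological age of 70, regardless of their chronological age.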

So the way that this works is that you'll take a blood test or a saliva sample, and scientists will measure something about your epigenome. So the genome is your DNA; it's the instruction manual of life. And the epigenome is a layer of chemistry that sits on top of your genome. If you think of your DNA as that instruction manual, then the epigenome is the notes in the margin. It's the little sticky notes that have been stuck on the side, and they tell the cell which DNA to use at which particular time. And we know that there are changes to this epigenome as you get older. And so by measuring the changes in the epigenome, you can assign someone a biological age.
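[ed. At their core, published epigenetic clocks such as Horvath's are linear models: a weighted sum of methylation levels at selected CpG sites plus an intercept. A minimal sketch, with site names, weights, and intercept all invented for illustration (real clocks fit hundreds of sites against chronological age):]

```python
# Toy "epigenetic clock": a linear model over methylation levels
# (each a fraction between 0 and 1) at a handful of CpG sites.
# Site IDs, weights, and intercept are hypothetical.
SITE_WEIGHTS = {"cg0001": 40.0, "cg0002": -25.0, "cg0003": 60.0}
INTERCEPT = 20.0

def epigenetic_age(methylation: dict[str, float]) -> float:
    """Predict age in years from per-site methylation fractions."""
    return INTERCEPT + sum(
        weight * methylation[site] for site, weight in SITE_WEIGHTS.items()
    )
```

For a sample with methylation 0.5, 0.2 and 0.3 at the three sites, this toy clock reports an age of 53: some sites push the estimate up with age, others down, exactly as in real clocks.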

At the moment, these epigenetic clocks are a really great research tool. They're really deepening our understanding of biological aging in the lab. I think the problem with these tests as applied to individuals is we don't know enough about exactly what they're telling us. We don't know what these individual changes in epigenetic marks mean. We know they're correlated with age, but what we don't know is if they're causally related. And in particular, we don't know if you intervene, if you make a change in your lifestyle, if you start taking a certain supplement and that reduces your biological age, whether that actually means you're gonna live longer, or whether it means you're gonna stay healthier for longer, or whether you've done something that's kind of adjacent to that. And so we need to do more research to understand if we can causally impact these epigenetic measures. (...)

Machine learning and artificial intelligence are gonna be hugely, hugely important in understanding the biology of aging. Because the body is such a complicated system that in order to really understand it, we're gonna need these vast computer models to try and decode the data for us. The challenge is that what machine learning can do at the moment is identify correlations. So it can identify things that are associated with aging, but it can't necessarily tell us that one thing is causing another. So for example, in the case of these epigenetic clocks, the parts of the epigenome that change with age have been identified because they correlate. But what we don't know is if you intervene in any one of these individual epigenetic marks, if you move it in the direction of something younger, does that actually make people healthier? And so what we need to do is more experiments where we try and work out if we can intervene in these epigenetic, these biological clocks, can we make people live healthier for longer?

Over the last 10 or 15 years, scientists have really started to understand the fundamental underlying biology of the aging process. And they broke this down into 12 so-called hallmarks of aging. One of those hallmarks is the accumulation of senescent cells. Now senescent is just a biological technical term for old. These are cells that accumulate in all of our bodies as the years go by. And scientists have noticed that these cells seem to drive a range of different diseases as we get older. And so the idea was: what if we could remove these cells and leave the rest of the cells of the body intact? Could that slow down or even partially reverse the aging process? And scientists identified drugs called senolytics.

These are drugs that kill those senescent cells and they tried them out in mice and they do indeed effectively make the mice biologically younger. So if you give mice a course of senolytic drugs, it removes those senescent cells from their body. And firstly, it makes them live a bit longer. That's a good thing if you're slowing down the aging process, the basic thing you want to see. But it's not dragging out that period of frailty at the end of life. It's keeping the mice healthier for longer so they get less cancer, they get less heart disease, they get fewer cataracts. The mice are also less frail. They basically send the mice to a tiny mouse-scale gym in these experiments. And the mice that have been given the drugs, they can run further and faster on the mousey treadmills that they try them out on. 

It also seems to reverse some of the cognitive effects that come along with aging. So if you put an older mouse in a maze, it's often a bit anxious, doesn't really want to explore. Whereas a younger mouse is desperate to, you know, run around and find the cheese or whatever it is mice do in mazes. And by giving them these senolytic drugs, you can unlock some of that youthful curiosity. And finally, these mice just look great. You do not need to be an expert mouse biologist to see which one has had the pills and which one hasn't. They've got thicker fur. They've got plumper skin. They've got brighter eyes. They've got less fat on their bodies. And what this shows us is that by targeting the fundamental processes of aging, by identifying something like senescent cells that drives a whole range of age-related problems, we can hit much, perhaps even all, of the aging process with a single treatment.

Senescent cells are, of course, only one of these 12 hallmarks of aging. And I think in order to both understand and treat the aging process, we're potentially gonna need treatments for many, perhaps even all, of those hallmarks. There's never gonna be a single magic pill that can just make you live forever. Aging is much, much more complicated than that. But by understanding this relatively short list of underlying processes, maybe we can come up with 12, 20 different treatments that can have a really big effect on how long we live.

One of the most exciting ideas in longevity science at the moment is what's called cellular reprogramming. I sometimes describe this as a treatment that has fallen through a wormhole from the future. This is the idea that we can reset the biological clock inside of our cells. And the idea first came about in the mid 2000s because there was a scientist called Shinya Yamanaka who was trying to find out how to turn regular adult body cells all the way back to the very beginning of their biological existence. And Yamanaka and his team were able to identify four genes that you could insert into a cell and turn back that biological clock. 

Now, he was interested in this from the point of view of creating stem cells, a cell that can create any other kind of cell in the body, which we might be able to use for tissue repair in future. But scientists also noticed, as well as turning back the developmental clock on these cells, it also turns back the aging clock, cells that are given these four Yamanaka factors actually are biologically younger than cells that haven't had the treatment. And so what scientists decided to do was insert these Yamanaka factor genes into mice. 

Now if you do this in a naive way, with the genes active all the time, it's actually very bad news for the mice, unfortunately. Because these stem cells, although they're very powerful in terms of what kind of cell they can become, are useless at being a liver cell or being a heart cell. And so the mice very quickly died of organ failure. But if you activate these genes only transiently, it works. The way that scientists did it successfully the first time was essentially to activate them at weekends. They produced these genes in such a way that they could be activated with a drug, and they gave the mice the drug for two days of the week, and then gave them five days off so the Yamanaka factors were suppressed. They found that this was enough to turn back the biological clock in those cells, but without turning back the developmental clock and turning them into these stem cells. And that meant the mice stayed a little bit healthier. We now know that they can live a little bit longer with this treatment too.

Now the real challenge is that this is a gene therapy treatment. It involves delivering four different genes to every single cell in your body. The question is can we, with our puny 2020s biotechnology, make this into a viable treatment, a pill even, that we can actually use in human beings? I really think this idea of cellular reprogramming appeals to a particular tech billionaire sort of mentality. The idea that we can go in and edit the code of life and reprogram our biological age, it's a hugely powerful concept. And if this works, the fact that you can turn back the biological clock all the way to zero, this really is a very, very cool idea. And that's what's led various different billionaires from the Bay Area to invest huge, huge amounts of money in this. 

Altos Labs is the biggest so-called startup in this space. And I wouldn't really call it a startup 'cause it's got funding of $3 billion from, amongst other people, Jeff Bezos, the founder of Amazon. Now I'm very excited about this, because I think $3 billion is enough to have a good go and see if we can turn this into a viable human treatment. My only concern is that epigenetics is only one of those hallmarks of aging. And so it might be the case that we solve aging inside our individual cells, but we leave other parts of the aging process intact. (...)

Probably the quickest short-term wins in longevity science are going to be repurposed existing drugs. And the reason for this is because we spent many, many years developing these drugs. We understand how they work in humans. We understand a bit about their safety profile. And because these molecules already exist, we've just tried them out in mice and, you know, various organisms in the lab, and found that a subset of them do indeed slow down the aging process. The first trial of a longevity drug that was proposed in humans was for a drug called metformin, which is a pre-existing drug that we actually prescribe for diabetes, and which has some indications that it might slow down the aging process in people. (...)

I think one of the ones that's got the most buzz around it at the moment is a drug called rapamycin. This is a drug that's been given for organ transplants. It's sometimes used to coat stents, which are these little things that you stick in the arteries around your heart to expand them if you've got a contraction of those arteries that's restricting the blood supply. But we also know from experiments in the lab that it can make all kinds of different organisms live longer, everything from single-cell yeast, to worms, to flies, to mice, to marmosets, which are primates very evolutionarily close to us, in one of the latest results.

Rapamycin has this really incredible story. It was first isolated in bacteria from a soil sample from Easter Island, which is known as Rapa Nui in the local Polynesian language. That's where the drug gets its name. And when it was first isolated, it was discovered to be antifungal. It could stop fungal cells from growing. So that was what we thought we'd use it for initially. But when scientists started playing around with it in the lab, they realized it didn't just stop fungal cells from growing. It also stopped many other kinds of cells as well, up to and including human cells. And so the slight disadvantage was that if you used it as an antifungal agent, it would also stop your immune cells from being able to divide, which would obviously be a bit of a counterintuitive way to try and treat a fungal disease. So scientists decided to use it as an immune suppressant. It can stop your immune system from going haywire when you get an organ transplant, for example, and rejecting that new organ.

It was also developed as an anti-cancer drug, since it can stop cells dividing, and cancer is cells dividing out of control. But the way that rapamycin works is it targets a fundamental central component of cellular metabolism. And we noticed that that seemed to be very, very important in the aging process. And so by tamping it down by less than you would in a patient where you're trying to suppress their immune system, rather than stopping the cell dividing entirely, you can make it enter a state where it's much more efficient in its use of resources. It starts this process called autophagy, which is Greek for self-eating. And that means it consumes old damaged proteins, and then recycles them into fresh new ones. And that actually is a critical process in slowing down aging, biologically speaking. And in 2009, we found out for the first time that by giving it to mice late in life, you could actually extend their remaining lifespan. They lived 10 or 15% longer. And this was a really incredible result.

This was the first time a drug had been shown to slow down aging in mammals. And accordingly, scientists have become very, very excited about it. And we've now tried it in loads of different contexts and loads of different animals and loads of different organisms at loads of different times in life. You can even wait until very late in a mouse lifespan to give it rapamycin and you still see most of that same lifespan extension effect. And that's fantastic news potentially for us humans, because not all of us, unfortunately, can start taking a drug from birth, 'cause most of us were born quite a long time ago. But rapamycin still works even if you give it to mice who are the equivalent of 60 or 70 years old in human terms. And that means that for those of us who are already aged a little bit, rapamycin could still help us potentially. And there are already biohackers out there trying this out for themselves, hopefully with the help of a doctor to make sure that they're doing everything as safely as possible to try and extend their healthy life. And so the question is: should we do a human trial of rapamycin to find out if it can slow down the aging process in people as well? (...)

We've already got dozens of ideas in the lab for ways to slow down, maybe even reverse, the aging of things like mice and cells in a dish. And that means we've got a lot of shots on goal. I think it'll be wildly unlucky if none of the things that slow down aging in the lab actually translate to human beings. That doesn't mean that most of them will work, probably most of them won't, but we only need one or two of them to succeed and really make a big difference. And I think a great example of this is GLP-1 drugs, the Ozempics, the things that are allowing people to suddenly lose a huge amount of weight. We've been looking for decades for these weight loss drugs, and now we finally found them. It's shown that these breakthroughs are possible, they can come out of left field. And all we need to do in some cases is a human trial to find out if these drugs actually work in people.

And what that means is that, you know, the average person on planet earth is under the age of 40. They've probably got 40 or 50 years of life expectancy left depending on the country that they live in. And that's an awful lot of time for science to happen. And if then in the next 5 or 10 years, we do put funding toward these human trials, we might have those first longevity drugs that might make you live one or two or five years longer. And that gives scientists even more time to develop the next treatment. And if we think about some more advanced treatments, not just drugs, things like stem cell therapy or gene therapy, those things can sound pretty sci-fi. But actually, we know that these things are already being deployed in hospitals and clinics around the world. They're being deployed for specific serious diseases, for example, where we know that a single gene can be a problem and we can go in and fix that gene and give a child a much better chance at a long, healthy life. 

But as we learn how these technologies work in the context of these serious diseases, we're gonna learn how to make them effective. And most importantly, we're gonna learn how to make them safe. And so we could imagine doing longevity gene edits in human beings, perhaps not in the next five years, but I think it'll be foolish to bet against it happening in the next 20 years, for example. 

by Andrew Steele, The Big Think |  Read more:
Image: Yamanaka factors via:
[ed. See also: Researchers Are Using A.I. to Decode the Human Genome (NYT).]