
Sunday, February 1, 2026

What Actually Makes a Good Life

In 1938, Harvard began following a group of 268 sophomores, tracking them for decades and eventually including their spouses and children as well. The goal was to discover what leads to a thriving, happy life.

Robert Waldinger continues that work today as the Director of the Harvard Study of Adult Development. (He’s also a Zen priest, by the way.) Here he shares insights on the key ingredients for living the good life.
[ed. Road map to happiness (or some degree of life satisfaction). Only 16 minutes of your time.]

Saturday, January 31, 2026

Kayfabe and Boredom: Are You Not Entertained?

Pro wrestling, for all its mass appeal, cultural influence, and undeniable profitability, is still dismissed as low-brow fare for the lumpen masses; another guilty pleasure to be shelved next to soap operas and true crime dreck. This elitist dismissal rests on a cartoonish assumption that wrestling fans are rubes, incapable of recognizing the staged spectacle in front of them. In reality, fans understand perfectly well that the fights are preordained. What bothers critics is that working-class audiences knowingly embrace a form of theater more honest than the “serious” news they consume.

Once cast as the pinnacle of trash TV in the late ’90s and early 2000s, pro wrestling has not only survived the cultural sneer; it might now be the template for contemporary American politics. The aesthetics of kayfabe, of egotistical villains and manufactured feuds, now structure our public life. And nowhere is this clearer than in the figure of its most infamous graduate: Donald Trump, the two-time WrestleMania host and 2013 WWE Hall of Fame inductee who carried the psychology of the squared circle from the television studio straight into the Oval Office.

In wrestling, kayfabe refers to the unwritten rule that participants must maintain the charade that what happens is real. Whether you are allies or enemies, every interaction between wrestlers must unfold as if it were genuine. There are referees, who serve as avatars of fairness. We the audience understand that the outcome is choreographed and predetermined, yet we watch because the emotional drama has pulled us in.

In his own political arena, Donald Trump is not simply another participant but the conductor of the entire orchestra of kayfabe, arranging the cues, elevating the drama, and shaping the emotional cadence. Nuance dissolves into simple narratives of villains and heroes, while those who claim to deliver truth behave more like carnival barkers selling the next act. Politics has become theater, and the news that filters through our devices resembles an endless stream of storylines crafted for outrage and instant reaction. What once required substance, context, and expertise now demands spectacle, immediacy, and emotional punch.

Under Trump, politics is no longer a forum for governance but a stage where performance outranks truth and policy, and the show becomes the only reality that matters. And he learned everything he knows from the small screen.

In the pro wrestling world, one of the most important parts of the match typically happens outside of the ring and is known as the promo. An announcer with a mic, timid and small, stands there while the wrestler yells violent threats about what he’s going to do to his upcoming opponent, makes disparaging remarks about the host city, his rival’s appearance, and so on. The details don’t matter—the goal is to generate controversy and entice the viewer to buy tickets to the next staged combat. This is the quickest and most common way to generate heat (attention). When you’re selling seats, no amount of audience animosity is bad business. (...)

Kayfabe is not limited to choreographed combat. It arises from the interplay of works (fully scripted events), shoots (unscripted or authentic moments), and angles (storyline devices engineered to advance a narrative). Heroes (babyfaces, or just faces) can turn heel (villain) on a dime, and heels can likewise be rehabilitated into babyfaces as circumstances demand. The blood spilled is real, and the injuries often are too, but even these unscripted outcomes are quickly woven back into the narrative machinery. In kayfabe, authenticity and contrivance are not opposites but mutually reinforcing components of a system designed to sustain attention, emotion, and belief.

by Jason Myles, Current Affairs |  Read more:
Image: uncredited
[ed. See also: Are you not entertained? (LIWGIWWF):]
***
Forgive me for quoting the noted human trafficker Andrew Tate, but I’m stuck on something he said on a right-wing business podcast last week. Tate, you may recall, was controversially filmed at a Miami Beach nightclub last weekend, partying to the (pathologically) sick beats of Kanye’s “Heil Hitler” with a posse of young edgelords and manosphere deviants. They included the virgin white supremacist Nick Fuentes and the 20-year-old looksmaxxer Braden Peters, who has said he takes crystal meth as part of his elaborate, self-harming beauty routine and recently ran someone over on a livestream.

“Heil Hitler” is not a satirical or metaphorical song. It is very literally about supporting Nazis and samples a 1935 speech to that effect. But asked why he and his compatriots liked the song, Tate offered this incredible diagnosis: “It was played because it gets traction in a world where everybody is bored of everything all of the time, and that’s why these young people are encouraged constantly to try and do the most shocking thing possible.” Cruelty as an antidote to the ennui of youth — now there’s one I haven’t quite heard before.

But I think Tate is also onto something here, about the wider emotional valence of our era — about how widespread apathy and nihilism and boredom, most of all, enable and even fuel our degraded politics. I see this most clearly in the desperate, headlong rush to turn absolutely everything into entertainment — and to ensure that everyone is entertained at all times. Doubly entertained. Triply entertained, even.

Trump is the master of this spectacle, of course, having perfected it in his TV days. The invasion of Venezuela was like a television show, he said. ICE actively seeks out and recruits video game enthusiasts. When a Border Patrol official visited Minneapolis last week, he donned an evocative green trench coat that one historian dubbed “a bit of theater.”

On Thursday, the official White House X account posted an image of a Black female protester to make it look as if she were in distress; caught in the obvious (and possibly defamatory) lie, a 30-something-year-old deputy comms director said only that “the memes will continue.” And they have continued: On Saturday afternoon, hours after multiple Border Patrol agents shot and killed an ICU nurse in broad daylight on a Minneapolis street, the White House’s rapid response account posted a graphic that read simply — ragebaitingly — “I Stand With Border Patrol.”

Are you not entertained?

But it goes beyond Trump, beyond politics. The sudden rise of prediction markets turns everything into a game: the weather, the Oscars, the fate of Greenland. Speaking of movies, they’re now often written with the assumption that viewers are also staring at their phones — stacking entertainment on entertainment. Some men now need to put YouTube on just to get through a chore or a shower. Livestreaming took off when people couldn’t tolerate even brief disruptions to their viewing pleasure.

Ironically, of course, all these diversions just have the effect of making us bored. The bar for what breaks through has to rise higher: from merely interesting to amusing to provocative to shocking, in Tate’s words. The entertainments grow more extreme. The volume gets louder. And it’s profoundly alienating to remain at this party, where everyone says that they’re having fun, but actually, internally, you are lonely and sad and do not want to listen — or watch other people listen! — to the Kanye Nazi song.

I am here to tell you it’s okay to go home. Metaphorically speaking. Turn it off. Tune it out. Reacquaint yourself with boredom, with understimulation, with the grounding and restorative sluggishness of your own under-optimized thoughts. Then see how the world looks and feels to you — what types of things gain traction. What opportunities arise, not for entertainment — but for purpose. For action.

The Adolescence of Technology

Confronting and Overcoming the Risks of Powerful AI

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.

In my essay Machines of Loving Grace, I tried to lay out the dream of a civilization that had made it through to adulthood, where the risks had been addressed and powerful AI was applied with skill and compassion to raise the quality of life for everyone. I suggested that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. I felt it was important to give people something inspiring to fight for, a task at which both AI accelerationists and AI safety advocates seemed—oddly—to have failed. But in this current essay, I want to confront the rite of passage itself: to map out the risks that we are about to face and try to begin making a battle plan to defeat them. I believe deeply in our ability to prevail, in humanity’s spirit and its nobility, but we must face the situation squarely and without illusions.

As with talking about the benefits, I think it is important to discuss risks in a careful and well-considered manner. In particular, I think it is critical to:
  • Avoid doomerism. Here, I mean “doomerism” not just in the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way. Many people have been thinking in an analytic and sober way about AI risks for many years, but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them. It was clear even then that a backlash was inevitable, and that the issue would become culturally polarized and therefore gridlocked. As of 2025–2026, the pendulum has swung, and AI opportunity, not AI risk, is driving many political decisions. This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023. The lesson is that we need to discuss and address risks in a realistic, pragmatic manner: sober, fact-based, and well equipped to survive changing tides.
  • Acknowledge uncertainty. There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood. Most obviously, AI may simply not advance anywhere near as fast as I imagine. Or, even if it does advance quickly, some or all of the risks discussed here may not materialize (which would be great), or there may be other risks I haven’t considered. No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.
  • Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!). It’s also common for regulations to backfire or worsen the problem they are intended to solve (and this is even more true for rapidly changing technologies). It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done. It is easy to say, “No action is too extreme when the fate of humanity is at stake!,” but in practice this attitude simply leads to backlash. To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
With all that said, I think the best starting place for talking about AI’s risks is the same place I started from in talking about its benefits: by being precise about what level of AI we are talking about. The level of AI that raises civilizational concerns for me is the powerful AI that I described in Machines of Loving Grace. I’ll simply repeat here the definition that I gave in that document:
  • By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
  • In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
  • In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  • It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  • It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
  • The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
  • Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”

As I wrote in Machines of Loving Grace, powerful AI could be as little as 1–2 years away, although it could also be considerably further out.

Exactly when powerful AI will arrive is a complex topic that deserves an essay of its own, but for now I’ll simply explain very briefly why I think there’s a strong chance it could be very soon. (...)

In this essay, I’ll assume that this intuition is at least somewhat correct—not that powerful AI is definitely coming in 1–2 years, but that there’s a decent chance it does, and a very strong chance it comes in the next few. As with Machines of Loving Grace, taking this premise seriously can lead to some surprising and eerie conclusions. While in Machines of Loving Grace I focused on the positive implications of this premise, here the things I talk about will be disquieting. They are conclusions that we may not want to confront, but that does not make them any less real. I can only say that I am focused day and night on how to steer us away from these negative outcomes and towards the positive ones, and in this essay I talk in great detail about how best to do so.

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this “country” is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.

What should you be worried about? I would worry about the following things: 
1. Autonomy risks. What are the intentions and goals of this country? Is it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?
2. Misuse for destruction. Assume the new country is malleable and “follows instructions”—and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
3. Misuse for seizing power. What if the country was in fact built and controlled by an existing powerful actor, such as a dictator or rogue corporate actor? Could that actor use it to gain decisive or dominant power over the world as a whole, upsetting the existing balance of power?
4. Economic disruption. If the new country is not a security threat in any of the ways listed in #1–3 above but simply participates peacefully in the global economy, could it still create severe risks simply by being so technologically advanced and effective that it disrupts the global economy, causing mass unemployment or radically concentrating wealth?
5. Indirect effects. The world will change very quickly due to all the new technology and productivity that will be created by the new country. Could some of these changes be radically destabilizing?
I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on.

Conversely, I think it would be absurd to shrug and say, “Nothing to worry about here!” But, faced with rapid AI progress, that seems to be the view of many US policymakers, some of whom deny the existence of any AI risks, when they are not distracted entirely by the usual tired old hot-button issues.

Humanity needs to wake up, and this essay is an attempt—a possibly futile one, but it’s worth trying—to jolt people awake.

To be clear, I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilizational challenge. Below, I go through the five categories of risk laid out above, along with my thoughts on how to address them.

by Dario Amodei, Anthropic |  Read more:
[ed. Mr. Amodei and Anthropic in general seem to be, of all major AI companies, the most focused on safety and alignment issues. Guaranteed, everyone working in the field has read this. For a good summary and contrary arguments, see also: On The Adolescence of Technology (Zvi Mowshowitz, DWATV).]

Friday, January 23, 2026

AI: Practical Advice for the Worried

A Word On Thinking For Yourself

There are good reasons to worry about AI. This includes good reasons to worry about AI wiping out all value in the universe, or AI killing everyone, or other similar very bad outcomes.

There are also good reasons that AGI, or otherwise transformational AI, might not come to pass for a long time.

As I say in the Q&A section later, I do not consider imminent transformational AI inevitable in our lifetimes: Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.

There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still.

That does not mean you should adopt my position, or anyone else’s position, or mostly rely on social cognition from those around you, on such questions, no matter what those methods would tell you. If this is something that is going to impact your major life decisions, or keep you up at night, you need to develop your own understanding and model, and decide for yourself what you predict. (...)

Overview

There is some probability that humanity will create transformational AI soon, for various definitions of soon. You can and should decide what you think that probability is, and conditional on that happening, your probability of various outcomes.

Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some such changes are predictable. Others not.

None of this is new. We have long lived under the very real threat of potential nuclear annihilation. The employees of the RAND Corporation, in charge of nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them. Given what we know now about the close calls of the Cold War, and what they knew at the time, perhaps this was not so crazy a perspective.

Should this imminent small but very real risk radically change your actions? I think the answer here is a clear no, unless your actions are relevant to nuclear war risks, either personally or globally, in some way, in which case one can shut up and multiply.

This goes back far longer still: various religious folks have long expected Judgment Day to arrive soon, often with a date attached. Often they made poor decisions in response to this, even given their beliefs.

There are some people who talk or feel this same way about climate change, treating it as an impending, inevitable extinction event for humanity.

Under such circumstances, I would center my position on a simple claim: Normal Life is Worth Living, even if you think P(doom) relatively soon is very high. (...)

More generally, in terms of helping: burning yourself out, stressing yourself out, and tying yourself up in existential angst are not helpful. It would be better to keep yourself sane and healthy and financially intact, in case you are later offered leverage. Fighting the good fight, however doomed it might be, because it is a far, far better thing to do, is also a fine response, if you keep in mind how easy it is to end up not helping that fight. But do that while also living a normal life, even if that might seem indulgent. You will be more effective for it, especially over time. (...)

On to individual questions to flesh all this out.

Q&A

Q: Should I still save for retirement?
Short Answer: Yes.
Long Answer: Yes, to most (but not all) of the extent that this would otherwise be a concern and action of yours in the ‘normal’ world. It would be better to say ‘build up asset value over time’ than ‘save for retirement’ in my model. Building up assets gives you resources to influence the future on all scales, whether or not retirement is even involved. I wouldn’t get too attached to labels.

Remember that while it is not something one should do lightly (none of this is), you can raid retirement accounts in an extreme enough ‘endgame’ situation for what is, in context, a modest penalty. It does not even take that many years for the expected value of the compounded tax advantages to exceed the withdrawal penalty; the cost of emptying the account, should you need to do that, is only 10% of funds and about a week in the United States (plus now having to pay taxes on it). In some extreme future situations, having that cash would be highly valuable. None of which suggests now is the time to empty it, or not to build it up.
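
[ed. For the curious, here is a rough Python sketch of that break-even claim. Every number in it (7% returns, a 24% income tax rate, a 15% annual tax drag on the taxable alternative) is an illustrative assumption of mine, not a figure from the post, and the crossover year moves around quite a bit if you change them.]

```python
# Toy back-of-the-envelope comparison (not financial advice): roughly how many
# years of tax-deferred compounding does it take for an early-raided traditional
# 401(k) to beat a plain taxable account, despite the 10% withdrawal penalty?
# All parameters are illustrative assumptions, not figures from the post.

def early_401k(pretax, years, r=0.07, income_tax=0.24, penalty=0.10):
    """Pre-tax dollars compound untaxed; the whole balance is then taxed
    as income and hit with the 10% early-withdrawal penalty."""
    return pretax * (1 + r) ** years * (1 - income_tax - penalty)

def taxable_account(pretax, years, r=0.07, income_tax=0.24, drag=0.15):
    """Pay income tax up front, then assume returns are taxed every year
    (a simplification that overstates the drag for buy-and-hold investing)."""
    return pretax * (1 - income_tax) * (1 + r * (1 - drag)) ** years

for n in (5, 10, 15, 20, 25):
    a, b = early_401k(10_000, n), taxable_account(10_000, n)
    print(f"{n:>2} yrs: 401(k) raided early ${a:,.0f}  vs  taxable ${b:,.0f}")
```

Under these made-up assumptions the raided 401(k) pulls ahead somewhere around the mid-teens in years; with gentler tax drag it takes longer, with heavier drag or higher tax rates it happens sooner.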

The case for saving money does not depend on expecting a future ‘normal’ world. Which is good, because even without AI the future world is likely to not be all that ‘normal.’

Q: Should I take on a ton of debt intending to never have to pay it back?

Short Answer: No, except for a mortgage.

Long Answer: Mostly no, except for a mortgage. Save your powder. See my post On AI and Interest Rates for an extended treatment of this question - I feel that is a definitive answer to the supposed ‘gotcha’ question of why doomers don’t take on lots of debt. Taking on a bunch of debt is a limited resource, and good ways to do it are even more limited for most of us. Yes, where you get the opportunity it would be good to lock in long borrowing periods at fixed rates if you think things are about to get super weird. But if your plan is ‘the market will realize what is happening and adjust the value of my debt in time for me to profit,’ that does not seem, to me, like a good plan. Nor does borrowing now much change the actual constraints on when you run out of money.

Does borrowing money that you have to pay back in 2033 mean you have more money to spend? That depends. What is your intention if 2033 rolls around and the world hasn’t ended? Are you going to pay it back? If so then you need to prepare now to be able to do that. So you didn’t accomplish all that much.

You need very high confidence in High Weirdness Real Soon Now before you can expect to get net rewarded for putting your financial future on quicksand, where you are in real trouble if you get the timing wrong. You also need a good way to spend that money to change the outcome.

Yes, there is a level of confidence in both speed and magnitude, combined with a good way to spend, that would change that, and that I do not believe is warranted. One must notice that you need vastly less certainty than this to be shouting about these issues from the rooftops, or devoting your time to working on them.

Eliezer’s position, as per his most recent podcast, is something like ‘AGI could come very soon, seems inevitable by 2050 barring civilizational collapse, and if it happens we almost certainly all die.’ Suppose you really actually believed that. It’s still not enough to do much with debt unless you have a great use of money - there’s still a lot of probability mass that the money is due back while you’re still alive, potentially right before it might matter.

Yes, this also changes if you think you can actually change the outcome for the better by spending money now; money loses impact over time, so your discount factor should be high. That, however, does not seem to be the case that I see being made.

Q: Does buying a house make sense?

A: Maybe. It is an opportunity to borrow money at low interest rates with good tax treatment. It also potentially ties up capital and ties you down to a particular location, and is not as liquid as some other forms of capital. So ask yourself how psychologically hard it would be to undo that. In terms of whether it looks like a good investment in a world with useful but non-transformational AI, an AI could figure out how to more efficiently build housing, but would that cause more houses to be built?

Q: Does it make sense to start a business?

A: Yes, although not because of AI. It is good to start a business. Of course, if the business is going to involve AI, carefully consider whether you are making the situation worse.

Q: Does It Still Make Sense to Try and Have Kids?

Short Answer: Yes.

Long Answer: Yes. Kids are valuable and make the world and your own world better, even if the world then ends. I would much rather exist for a bit than never exist at all. Kids give you hope for the future and something to protect, get you to step up. They get others to take you more seriously. Kids teach you many things that help one think better about AI. You think they take away your free time, but there is a limit to how much creative work one can do in a day. This is what life is all about. Missing out on this is deeply sad. Don’t let it pass you by.

Is there a level of working directly on the problem, or being uniquely positioned to help with the problem, where I would consider changing this advice? Yes, there are a few names where I think this is not so clear, but I am thinking of a very small number of names right now, and yours is not one of them.

You can guess how I would answer most other similar questions. I do not agree with Buffy Summers that the hardest thing in this world is to live in it. I do think she knows better than any of us that not living in this world is not the way to save it.

Q: Should I talk to my kids about how there’s a substantial chance they won’t get to grow up?

A: I would not (and will not) hide this information from my kids, any more than I would hide the risk from nuclear war, but ‘you may not get to grow up’ is not a helpful thing to say to (or to emphasize to) kids. Talking to your kids about this (in the sense of ‘talk to your kids about drugs’) is only going to distress them to no purpose. While I don’t believe in hiding stuff from kids, I also don’t think this is something it is useful to hammer into them. Kids should still get to be and enjoy being kids. (...)

Q: Should I just try to have a good time while I can?

A: No, because my model says that this doesn’t work. It is empty. You can have fun for a day, a week, a month, perhaps a year, but after a while it rings hollow, feels empty, and your future will fill you with dread. Certainly it makes sense to shift this on the margin, get your key bucket list items in early, put a higher marginal priority on fun - even more so than you should have been doing anyway. But I don’t think my day-to-day life experience would improve for very long by taking this kind of path. Then again, each of us is different.

That all assumes you have ruled out attempting to improve our chances. Personally, even if I had to go down, I’d rather go down fighting. Insert rousing speech here.

Q: How Long Do We Have? What is the Timeline?

Short Answer: Unknown. Look at the arguments and evidence. Form your own opinion.

Long Answer: High uncertainty about when this will happen if it happens, whether or not one has high uncertainty about whether it happens at all within our lifetimes. Eliezer’s answer was that he would be very surprised if it didn’t happen by 2050, but that within that range little would surprise him and he has low confidence. Others have longer or shorter means and medians in their timelines. Mine are substantially longer and less confident than Eliezer’s. This is a question you must decide for yourself. The key is that there is uncertainty, so lots of different scenarios matter.

by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Image: via Linkedin Image Generator
[ed. See also: The AI doomers feel undeterred (MIT).]

Monday, January 19, 2026

Time Passing

So here's the problem. If you don't believe in God or an afterlife; or if you believe that the existence of God or an afterlife are fundamentally unanswerable questions; or if you do believe in God or an afterlife but you accept that your belief is just that, a belief, something you believe rather than something you know -- if any of that is true for you, then death can be an appalling thing to think about. Not just frightening, not just painful. It can be paralyzing. The fact that your lifespan is an infinitesimally tiny fragment in the life of the universe, and that there is, at the very least, a strong possibility that when you die, you disappear completely and forever, and that in five hundred years nobody will remember you and in five billion years the Earth will be boiled into the sun: this can be a profound and defining truth about your existence that you reflexively repulse, that you flinch away from and refuse to accept or even think about, consistently pushing to the back of your mind whenever it sneaks up, for fear that if you allow it to sit in your mind even for a minute, it will swallow everything else. It can make everything you do, and everything anyone else does, seem meaningless, trivial to the point of absurdity. It can make you feel erased, wipe out joy, make your life seem like ashes in your hands. Those of us who are skeptics and doubters are sometimes dismissive of people who fervently hold beliefs they have no evidence for simply because they find them comforting -- but when you're in the grip of this sort of existential despair, it can be hard to feel like you have anything but that handful of ashes to offer them in exchange.

But here's the thing. I think it's possible to be an agnostic, or an atheist, or to have religious or spiritual beliefs that you don't have certainty about, and still feel okay about death. I think there are ways to look at death, ways to experience the death of other people and to contemplate our own, that allow us to feel the value of life without denying the finality of death. I can't make myself believe in things I don't actually believe -- Heaven, or reincarnation, or a greater divine plan for our lives -- simply because believing those things would make death easier to accept. And I don't think I have to, or that anyone has to. I think there are ways to think about death that are comforting, that give peace and solace, that allow our lives to have meaning and even give us more of that meaning -- and that have nothing whatsoever to do with any kind of God, or any kind of afterlife.

Here's the first thing. The first thing is time, and the fact that we live in it. Our existence and experience are dependent on the passing of time, and on change. No, not dependent -- dependent is too weak a word. Time and change are integral to who we are, the foundation of our consciousness, and its warp and weft as well. I can't imagine what it would mean to be conscious without passing through time and being aware of it. There may be some form of existence outside of time, some plane of being in which change and the passage of time is an illusion, but it certainly isn't ours.

And inherent in change is loss. The passing of time has loss and death woven into it: each new moment kills the moment before it, and its own death is implied in the moment that comes after. There is no way to exist in the world of change without accepting loss, if only the loss of a moment in time: the way the sky looks right now, the motion of the air, the number of birds in the tree outside your window, the temperature, the placement of your body, the position of the people in the street. It's inherent in the nature of having moments: you never get to have this exact one again.

And a good thing, too. Because all the things that give life joy and meaning -- music, conversation, eating, dancing, playing with children, reading, thinking, making love, all of it -- are based on time passing, and on change, and on the loss of an infinitude of moments passing through us and then behind us. Without loss and death, we don't get to have existence. We don't get to have Shakespeare, or sex, or five-spice chicken, without allowing their existence and our experience of them to come into being and then pass on. We don't get to listen to Louis Armstrong without letting the E-flat disappear and turn into a G. We don't get to watch "Groundhog Day" without letting each frame of it pass in front of us for a 24th of a second and then move on. We don't get to walk in the forest without passing by each tree and letting it fall behind us; we don't even get to stand still in the forest and gaze at one tree for hours without seeing the wind blow off a leaf, a bird break off a twig for its nest, the clouds moving behind it, each manifestation of the tree dying and a new one taking its place.

And we wouldn't want to have it if we could. The alternative would be time frozen, a single frame of the film, with nothing to precede it and nothing to come after. I don't think any of us would want that. And if we don't want that, if instead we want the world of change, the world of music and talking and sex and whatnot, then it is worth our while to accept, and even love, the loss and the death that make it possible.

Here's the second thing. Imagine, for a moment, stepping away from time, the way you'd step back from a physical place, to get a better perspective on it. Imagine being outside of time, looking at all of it as a whole -- history, the present, the future -- the way the astronauts stepped back from the Earth and saw it whole.

Keep that image in your mind. Like a timeline in a history class, but going infinitely forward and infinitely back. And now think of a life, a segment of that timeline, one that starts in, say, 1961, and ends in, say, 2037. Does that life go away when 2037 turns into 2038? Do the years 1961 through 2037 disappear from time simply because we move on from them and into a new time, any more than Chicago disappears when we leave it behind and go to California?

It does not. The time that you live in will always exist, even after you've passed out of it, just like Paris exists before you visit it, and continues to exist after you leave. And the fact that people in the 23rd century will probably never know you were alive... that doesn't make your life disappear, any more than Paris disappears if your cousin Ethel never sees it. Your segment on that timeline will always have been there. The fact of your death doesn't make the time that you were alive disappear.

And it doesn't make it meaningless. Yes, stepping back and contemplating all of time and space can be daunting, can make you feel tiny and trivial. And that perception isn't entirely inaccurate. It's true; the small slice of time that we have is no more important than the infinitude of time that came before we were born, or the infinitude that will follow after we die.

But it's no less important, either.

I don't know what happens when we die. I don't know if we come back in a different body, or if we get to hover over time and space and view it in all its glory and splendor, or if our souls dissolve into the world-soul the way our bodies dissolve into the ground, or if, as seems very likely, we simply disappear. I have no idea. And I don't know that it matters. What matters is that we get to be alive. We get to be conscious. We get to be connected with each other, and with the world, and we get to be aware of that connection and to spend a few years mucking about in its possibilities. We get to have a slice of time and space that's ours. As it happened, we got the slice that has Beatles records and Thai restaurants and AIDS and the Internet. People who came before us got the slice that had horse-drawn carriages and whist and dysentery, or the one that had stone huts and Viking invasions and pigs in the yard. And the people who come after us will get the slice that has, I don't know, flying cars and soybean pies and identity chips in their brains. But our slice is no less important because it comes when it does, and it's no less important because we'll leave it someday. The fact that time will continue after we die does not negate the time that we were alive. We are alive now, and nothing can erase that.

by Greta Christina, Greta's Blog |  Read more:
Image: uncredited
[ed. Repost from quite a while ago, actually (folks should really check out the archive). Something reminded me of this essay today, and I'm glad it did, because it's a favorite. Unfortunately, I think the link is dead (as we all shall soon be... haha), but it's all here.]

Saturday, January 17, 2026

The Dilbert Afterlife

Sixty-eight years of highly defective people

Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive.

Still, the condolences are appreciated. Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names.

But it would be insufficiently ambitious to stop there. Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here.

Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb.

This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all.

The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth.

Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting.

If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.

Fugitive From The Cubicle Police

The niche that became Dilbert opened when Garfield first said “I hate Mondays”. The quote became a popular sensation, inspiring t-shirts, coffee mugs, and even a hit single. But (as I’m hardly the first to point out) why should Garfield hate Mondays? He’s a cat! He doesn’t have to work!

In the 80s and 90s, saying that you hated your job was considered the height of humor. Drew Carey: “Oh, you hate your job? There’s a support group for that. It’s called everybody, and they meet at the bar.”


This was merely the career subregion of the supercontinent of Boomer self-deprecating jokes, whose other prominences included “I overeat”, “My marriage is on the rocks”, “I have an alcohol problem”, and “My mental health is poor”.

Arguably this had something to do with the Bohemian turn, the reaction against the forced cheer of the 1950s middle-class establishment of company men who gave their all to faceless corporations and then dropped dead of heart attacks at 60. You could be that guy, proudly boasting to your date about how you traded your second-to-last patent artery to complete a spreadsheet that raised shareholder value 14%. Or you could be the guy who says “Oh yeah, I have a day job working for the Man, but fuck the rat race, my true passion is white water rafting”. When your father came home every day looking haggard and worn out but still praising his boss because “you’ve got to respect the company or they won’t take care of you”, being able to say “I hate Mondays” must have felt liberating, like the mantra of a free man.

This was the world of Dilbert’s rise. You’d put a Dilbert comic on your cubicle wall, and feel like you’d gotten away with something. If you were really clever, you’d put the Dilbert comic where Dilbert gets in trouble for putting a comic on his cubicle wall on your cubicle wall, and dare them to move against you.


(again, I was ten at the time. I only know about this because Scott Adams would start each of his book collections with an essay, and sometimes he would talk about letters he got from fans, and many of them would have stories like these.)

But t-shirts saying “Working Hard . . . Or Hardly Working?” no longer hit as hard as they once did. Contra the usual story, Millennials are too earnest to tolerate the pleasant contradiction of saying they hate their job and then going in every day with a smile. They either have to genuinely hate their job - become some kind of dirtbag communist labor activist - or at least pretend to love it. The worm turns, all that is cringe becomes based once more and vice versa. Imagine that guy boasting to his date again. One says: “Oh yeah, I grudgingly clock in every day to give my eight hours to the rat race, but trust me, I’m secretly hating myself the whole time”? The other: “I work for a boutique solar energy startup that’s ending climate change - saving the environment is my passion!” Zoomers are worse still: not even the fig leaf of social good, just pure hustle.

Dilbert is a relic of a simpler time, when the trope could be played straight. But it’s also an artifact of the transition, maybe even a driver of it. Scott Adams appreciated these considerations earlier and more acutely than anyone else. And they drove him nuts.

Stick To Drawing Comics, Monkey Brain

Adams knew, deep in his bones, that he was cleverer than other people. God always punishes this impulse, especially in nerds. His usual strategy is straightforward enough: let them reach the advanced physics classes, where there will always be someone smarter than them, then beat them on the head with their own intellectual inferiority so many times that they cry uncle and admit they’re nothing special.

For Adams, God took a more creative and – dare I say, crueler – route. He created him only-slightly-above-average at everything except for a world-historical, Mozart-tier, absolutely Leonardo-level skill at making silly comics about hating work.


Scott Adams never forgave this. Too self-aware to deny it, too narcissistic to accept it, he spent his life searching for a loophole. You can read his frustration in his book titles: How To Fail At Almost Everything And Still Win Big. Trapped In A Dilbert World. Stick To Drawing Comics, Monkey Brain. Still, he refused to stick to comics. For a moment in the late ’90s, with books like The Dilbert Principle and The Dilbert Future, he seemed on his way to becoming a semi-serious business intellectual. He never quite made it, maybe because the Dilbert Principle wasn’t really what managers and consultants wanted to hear:
I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they’re the ones you don't want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren't in management.
Okay, “I am cleverer than everyone else”, got it. His next venture (c. 1999) was the Dilberito, an attempt to revolutionize food via a Dilbert-themed burrito with the full Recommended Daily Allowance of twenty-three vitamins. I swear I am not making this up. A contemporaneous NYT review said it “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”. The Onion, in its twenty-year retrospective for the doomed comestible, called it a frustrated groping towards meal replacements like Soylent or Huel, long before the existence of a culture nerdy enough to support them. Adams himself, looking back from several years’ distance, was even more scathing: “the mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”

His second foray into the culinary world was a local restaurant called Stacey’s.

by Scott Alexander, Astral Codex Ten |  Read more:
Images: Dilbert/ACX 
[ed. First picture: Adams actually had a custom-built tower on his home shaped like Dilbert’s head.]

Friday, January 16, 2026

Measure Up

“My very dear friend Broadwood—

I have never felt a greater pleasure than in your honor’s notification of the arrival of this piano, with which you are honoring me as a present. I shall look upon it as an altar upon which I shall place the most beautiful offerings of my spirit to the divine Apollo. As soon as I receive your excellent instrument, I shall immediately send you the fruits of the first moments of inspiration I gather from it, as a souvenir for you from me, my very dear Broadwood; and I hope that they will be worthy of your instrument. My dear sir, accept my warmest consideration, from your friend and very humble servant.

—Ludwig van Beethoven”

As musical instruments improved through history, new kinds of music became possible. Sometimes, the improved instrument could make novel sounds; other times, it was louder; and other times stronger, allowing for more aggressive play. Like every technology, musical instruments are the fruit of generations worth of compounding technological refinement.

In a shockingly brief period between the late 18th and early 19th centuries, the piano was transformed technologically, and so too was the function of the music it produced.

To understand what happened, consider the form of classical music known as the “piano sonata.” This is a piece written for solo piano, and it is one of the forms that persisted through the transition, at least in name. In 1790, these were written for an early version of the piano that we now think of as the fortepiano. It sounded like a mix of a modern piano and a harpsichord.

Piano sonatas in the early 1790s were thought of primarily as casual entertainment. It wouldn’t be quite right to call them “background music” as we understand that term today—but they were often played in the background. People would talk over these little keyboard works, play cards, eat, drink.

In the middle of the 1790s, however, the piano started to improve at an accelerated rate. It was the early industrial revolution. Throughout the economy, many things were starting to click into place. Technologies that had kind of worked for a while began to really work. Scale began to be realized. Thicker networks of people, money, ideas, and goods were being built. Capital was becoming more productive, and with this serendipity was becoming more common. Few at the time could understand it, but it was the beginning of a wave—one made in the wake of what we today might call the techno-capital machine.

Riding this wave, the piano makers were among a great many manufacturers who learned to build better machines during this period. And with those improvements, more complex uses of those machines became possible.

Just as this industrial transformation was gaining momentum in the mid-1790s, a well-regarded keyboard player named Ludwig van Beethoven was starting his career in earnest. He, like everyone else, was riding the wave—though he, like everyone else, did not wholly understand it.

Beethoven was an emerging superstar, and he lived in Vienna, the musical capital of the world. It was a hub not just of musicians but also of musical instruments and the people who manufactured them. Some of the finest piano makers of the day—Walter, Graf, and Schanz—were in or around Vienna, and they were in fierce competition with one another. Playing at the city’s posh concert spaces, Beethoven had the opportunity to sample a huge range of emerging pianistic innovations. As his career blossomed, he acquired some of Europe’s finest pianos—including even stronger models from British manufacturers like Broadwood and Sons.

Iron reinforcement enabled piano frames with higher tolerances for louder and longer play. The strings became more robust. More responsive pedals meant a more direct relationship between the player and his tool. Innovations in casting, primitive machine tools, and mechanized woodworking yielded more precise parts. With these parts one could build superior hammer and escapement systems, which in turn led to faster-responding keys. And more of them, too—with higher and lower octaves now available. It is not just that the sound these pianos made was new: These instruments had an enhanced, more responsive user interface.

You could hit these instruments harder. You could play them softer, too. Beethoven’s iconic use of sforzando—rapid swings from soft to loud tones—would have been unplayable on the older pianos. So too would his complex and often rapid solos. In so many ways, then, Beethoven’s characteristic style and sound on the keyboard was technologically impossible for his predecessors to achieve... 

Beethoven was famous for breaking piano strings that were not yet strong enough to render his vision. There was always a relevant margin against which to press. By his final sonata, written in the early 1820s, he was pressing in the direction of early jazz. It was a technological and artistic takeoff from this to this, and from this to this.

Beethoven’s compositions for other instruments followed a structurally similar trajectory: compounding leaps in expressiveness, technical complexity, and thematic ambition, every few years. Here is what one of Mozart’s finest string quartets sounded like. Here is what Beethoven would do with the string quartet by the end of his career.

No longer did audiences talk during concerts. No longer did they play cards and make jokes. Audiences became silent and still, because what was happening to them in the concert hall had changed. A new type of art was emerging, and a new meta-character in human history—the artist—was being born. Beethoven was doing something different, something grander, something more intense, and the way listeners experienced it was different too.

The musical ideas Beethoven introduced to the world originated from his mind, but those ideas would have been unthinkable without a superior instrument.

I bought the instrument I’m using to write this essay in December 2020. I was standing in the frigid cold outside of the Apple Store in the Georgetown neighborhood of Washington, D.C., wearing a KN-95 face mask, separated by six feet from those next to me in line. I had dinner with a friend scheduled that evening. A couple weeks later, the Mayor would temporarily outlaw even that nicety.

I carried this laptop with me every day throughout the remainder of the pandemic. I ran a foundation using this laptop, and after that I orchestrated two career transitions using it. I built two small businesses, and I bought a house. I got married, and I planned a honeymoon with my wife. (...)

In a windowless office on a work trip to Stanford University on November 30, 2022, I discovered ChatGPT on this laptop. I stayed up all night in my hotel playing with the now-primitive GPT-3.5. Using my laptop, I educated myself more deeply about how this mysterious new tool worked.

I thought at first that it was an “answer machine,” a kind of turbocharged search engine. But I eventually came to prefer thinking of these language models as simulators of the internet that, by statistically modeling trillions of human-written words, learned new things about the structure of human-written text.
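[ed. A hypothetical toy sketch of what "statistically modeling text" means at its simplest; this is mine, not the essay's. Modern language models use neural networks trained on trillions of tokens, but the little word-counter below captures the seed of the idea: observe which word tends to follow which, then sample a continuation.]

# Toy illustration (hypothetical, not from the essay) of statistical text
# modeling: count which word follows which in a tiny corpus, then sample.
# Real language models do this with neural networks over trillions of
# tokens; this bigram counter only makes the core idea concrete.
import random
from collections import Counter, defaultdict

corpus = (
    "the piano was transformed and the music was transformed "
    "the instrument was improved and the player was improved"
).split()

# For each word, count how often each possible next word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def sample_next(word):
    # Pick a next word in proportion to how often it was observed.
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one word at a time.
word = "the"
generated = [word]
for _ in range(8):
    word = sample_next(word)
    generated.append(word)
print(" ".join(generated))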

What might arise from a deeper-than-human understanding of the structures and meta-structures of nearly all the words humans have written for public consumption? What inductive priors might that understanding impart to this cognitive instrument? We know that a raw pretrained model, though deeply flawed, has quite sophisticated inductive priors with no additional human effort. With a great deal of additional human effort, we have made these systems quite useful little helpers, even if they still have their quirks and limitations.

But what if you could teach a system to guide itself through that digital landscape of modeled human thoughts to find better, rather than likelier, answers? What if the machine had good intellectual taste, because it could consider options, recognize mistakes, and decide on a course of cognitive action? Or what if it could, at least, simulate those cognitive processes? And what if that machine improved as quickly as we have seen AI advance so far? This is no longer science fiction; this research has been happening inside the world’s leading AI firms, and with models like OpenAI’s o1 and o3, it is clear that progress is being made.

What would it mean for a machine to match the output of a human genius, word for word? What would it mean for a machine to exceed it? In at least some domains, even if only a very limited number at first, it seems likely that we will soon breach these thresholds. It is very hard to say how far this progress will go; as they say, experts disagree.

This strange simulator is “just math”—it is, ultimately, ones and zeroes, electrons flowing through processed sand. But the math going on inside it is more like biochemistry than it is like arithmetic. The language model is still an instrument, but it is a strange one. Smart people, working in a field called mechanistic interpretability, are improving our understanding all the time, but that understanding remains highly imperfect, and it will probably never be complete. We do not yet have precise control over these instruments, though our control is getting better with time. We do not yet know how to make our control systems “good enough,” because we don’t quite know what “good enough” means—though here too, we are trying. We are searching.

As these instruments improve, the questions we ask them will have to get harder, smarter, and more detailed. This isn’t to say, necessarily, that we will need to become better “prompt engineers.” Instead, it is to suggest that we will need to become more curious. These new instruments will demand that we formulate better questions, and formulating better questions, often, is at least the seed of formulating better answers.

The input and the output, the prompt and the response, the question and the answer, the keyboard and the music, the photons and the photograph. We push at our instruments, we measure them up, and in their way, they measure us. (...)

I don’t like to think about technology in the abstract. Instead, I prefer to think about instruments like this laptop. I think about all the ways in which this instrument is better than the ones that came before it—faster, more reliable, more precise—and why it has improved. And I think about the ways in which this same laptop has become wildly more capable as new software tools came to be. I wonder at the capabilities I can summon with this keyboard now compared with when I was standing in that socially distanced line at the Apple Store four years ago.

I also think about the young Beethoven, playing around, trying to discover the capabilities of instruments with better keyboards, larger range, stronger frames, and suppler pedals. I think about all the uncoordinated work that had to happen—the collective and yet unplanned cultivation of craftsmanship, expertise, and industrial capacity—to make those pianos. I think about the staggering number of small industrial miracles that underpinned Beethoven’s keyboards, and the incomprehensibly larger number of industrial miracles that underpin the keyboard in front of me today. (...)

This past weekend, I replaced my MacBook Air with a new laptop. I wonder what it will be possible to do with this tremendous machine in a few years, or in a few weeks. New instruments for expression, and for intellectual exploration, will be built, and I will learn to use nearly all of them with my new laptop’s keyboard. It is now clear that a history-altering amount of cognitive potential will be at my fingertips, and yours, and everyone else’s. Like any technology, these new instruments will be much more useful to some than to others—but they will be useful in some way to almost everyone.

And just like the piano, what we today call “AI” will enable intellectual creations of far greater complexity, scale, and ambition—and greater repercussions, too. Higher dynamic range. I hope that among the instrument builders there will be inveterate craftsmen, and I hope that young Beethovens, practicing a wholly new kind of art, will emerge among the instrument players.

by Dean Ball, Hyperdimensional |  Read more:
Image: 1827 Broadwood & Sons grand piano/Wikipedia
[ed. Thoughtful essay throughout, well deserving of a full reading (even if you're just interested in Beethoven). On the hysterical end of the spectrum, here's what state legislators are proposing: The AI Patchwork Emerges. An update on state AI law in 2026 (so far) (Hyperdimensional):]
***
State legislative sessions are kicking into gear, and that means a flurry of AI bills is already under consideration across America. In prior years, the headline number of introduced state AI bills has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those bills were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic measures. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around, covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. That is no longer feasible: there are simply too many of them.

Saturday, January 10, 2026

How Consent Can—and Cannot—Help Us Have Better Sex

The idea is legally vital, but ultimately unsatisfying. Is there another way forward?

In 1978, Greta Hibbard was twenty-two and living in rural Oregon. She had a two-year-old daughter, a minimum-wage job, and an unemployed husband. She was, she would later say, “living on peanut butter sandwiches.” She and her husband, John Rideout, often fought; sometimes he hit her or demanded sex. On the afternoon of October 10th, when he did just that, Hibbard fled to a neighbor’s house. Rideout followed her, cornered her in a park, and took her home. Once inside, she said, he punched her several times in the face and pulled down her pants. Their toddler, who was watching, went into her bedroom and wailed as her father penetrated her mother.

That this might be rape, legally speaking, was a brand-new idea. Until the mid-seventies, much of the sex in the United States was regulated not by the theory of consent but by that of property: a husband could no more be arrested for raping his wife than for breaking into his own house. In 1977, Oregon became one of the first states to make spousal rape illegal, and even then some politicians thought the law should apply only to couples living apart or in the process of divorcing. A California state senator summed up the prevailing attitude: “If you can’t rape your wife, who can you rape?”

Hibbard herself had only just learned that she had a right to decline sex with her husband. (At a woman’s crisis center, she had noticed a sign on the wall that read “If she says no, it’s rape.”) The night before the incident, she and Rideout were chatting with a neighbor when she brought up the new law. “I don’t believe it,” Rideout said. When he was arrested a few days later, he still didn’t. What followed was Oregon v. Rideout, the first time in the United States that a man stood trial for the rape of a wife with whom he lived, and a formative test of the notion that consent should determine the legality of sex.

Sarah Weinman retells this story in “Without Consent: A Landmark Trial and the Decades-Long Struggle to Make Spousal Rape a Crime” (Ecco). Weinman is known for taking a true-crime approach to intellectual history: her previous books center on the murderer who befriended William F. Buckley, Jr.—the founder of the National Review—and on the kidnapping that is believed to have inspired Vladimir Nabokov to write “Lolita.” Her writing is breezy even when the subject matter is not exactly beachy. Rideout’s trial, for example, teemed with outrages. His defense lawyer smeared Hibbard for her sexual past: two abortions, a supposed lesbian experience, and a previous assault allegation against Rideout’s half brother, which, according to Weinman, Hibbard retracted after threats from the accused. Meanwhile, even the prosecutor thought Rideout seemed like a good guy. “I don’t think he belongs in prison or jail,” he told the press. When Rideout was acquitted, the courtroom burst into applause.

Hibbard, who reconciled with Rideout almost immediately after the trial, would divorce him within months. But Weinman follows Rideout all the way through 2017, when he was once again tried for rape. This time, the victims were Sheila Moxley, an acquaintance who had grudgingly allowed a drunk Rideout to sleep on her sofa after he came over to help her fix some furniture, and Teresa Hern, a long-term, on-and-off girlfriend. Both women had been held down and penetrated by Rideout in the middle of the night. Once again, a defense lawyer attempted to paint the women as lying, scheming seductresses. But this time Rideout was convicted on all counts and eventually sentenced to twenty-five years in prison. “You are a bad man,” Moxley read in a statement. “You are an evil man. You are a monster.”

Weinman’s choice to begin and end with Rideout’s trials allows her to tell a story of comeuppance, in which, during the span of one man’s life, society decided to take rape seriously and punish the monsters who commit it. This is a happy thought. But the real arc of history is not so short, nor does it bend with anything like certainty toward justice. Today, about one in ten American women have been raped by their intimate partners—roughly the same rate reported in the eighties. This year, the Trump Administration removed the Centers for Disease Control and Prevention’s online statistics on intimate-partner and sexual violence; the page was restored by a court order, and now contains a disclaimer: “This page does not reflect reality.” Donald Trump himself has been accused of sexual misconduct by at least twenty-four women. He has denied these accusations, including one from his first wife, Ivana, who testified under oath that he threw her on the bed, ripped out a handful of her hair, and then forced himself on her. She later clarified that she didn’t mean the word “rape” in the “literal or criminal sense.”

In Weinman’s epilogue, she briefly points to the unfinished business of ending rape, spousal or otherwise. But her book assumes that society has at least sorted out the philosophical underpinnings of how to regulate sex. “Younger generations were far clearer about these issues,” Weinman writes, “understanding that consent must be given ‘freely and intelligently’ by those who were capable, and anything shy of full consent was considered rape.” There is, I think, no such clarity. It is not just people like Trump, Jeffrey Epstein, Pete Hegseth, Brock Turner, Bill Cosby, Sean Combs, Dominique Pelicot, and their many, many friends who seem to have a bone to pick with consent. Feminists have their own quibbles. What does “freely and intelligently” mean, they ask, and what entails “full consent”? Who exactly is capable of consenting? And what are we to do with rapists?

For some second-wave feminists, the very idea that a woman living under patriarchy could “consent” to sex with a man was absurd. After all, we don’t think of a serf consenting to work for her feudal overlord: the serf might well enjoy tilling the fields, she might even love her master, but she didn’t choose farm labor so much as she was kept, by rigid and often violent social limits, from pursuing anything else. And even if the choice were free—even if decades of hard-fought feminist struggle had occasioned the sort of emancipation that meant women were no longer analogous to serfs—could such a choice ever be “intelligent”? Some women find knitting pleasurable, comforting, and affirming of their femininity, but how many would recommend it to a friend if it carried a ten-per-cent chance of rape?

These were lively arguments in the seventies and eighties, advanced by feminists like Catharine MacKinnon and Andrea Dworkin, who had herself been battered by her husband. Today, the basic idea—often glossed as “all heterosexual sex is rape,” though neither MacKinnon nor Dworkin wrote exactly those words—seems almost farcical. Radical feminists no longer blame heterosexual women for “sleeping with the enemy.” It’s widely accepted that a woman really can consent to sex with a husband on whom she is financially dependent. The immediate though rather less accepted corollary is that she can also consent to sex with a paying stranger. To say anything else, many feminists now argue, would be to infantilize her, to subordinate her—to the state, to moralism—rather than acknowledge her mastery of her own body.

But the root of the second-wave critique, that there are power differentials across which professed consent is insufficient, lives on in other debates. Children, a class whom the poet Mary Karr once described as “three feet tall, flat broke, unemployed, and illiterate,” are an obvious example. It is easy to be horrified by situations where children are subjected to sex that is forced or coerced. But what about sex that they claim to want? Can children consent to sex with other children? With adults? Can a nineteen-year-old girl legally have what she believes to be loving, consensual sex with her stepfather? What about with her stepmother? Can students choose to have sex with their professors, or employees with their bosses? How we answer these questions depends on whom we consider to be so gullible, vulnerable, or exploited that they must be protected from their own expressed desires. (...)

One critique of consent, then, is that it is too permissive—that it ignores how coercion or delusion may result in the illusion of agreement. But another critique is that it’s too restrictive and punitive. Decades of reform laws have expanded the number of situations legally considered to be rape: it’s no longer a charge that can be brought only against an armed stranger who attacks a struggling victim, ideally a white virgin. On university campuses, the idea that “no means no” has given way—because of the well-documented fact that many people freeze and are unable to speak in moments of fear—to “yes means yes.”

Critics of this shift worry about encounters where both parties are blackout drunk, or where one appears to retroactively withdraw consent. They argue that a lower bar for rape leads to the criminalization—or at least the litigation—of misunderstandings, and so discourages the sort of carefree sexual experimentation that some feminists very much hope to champion. “I can think of no better way to subjugate women than to convince us that assault is around every corner,” the self-identified feminist Laura Kipnis writes in “Unwanted Advances,” a 2017 book about “sexual paranoia on campus.” Kipnis describes her own mother laughingly recalling a college professor chasing her around a desk and trying to kiss her. That young women today are encouraged to think of this kind of “idiocy” as an “incapacitating trauma,” Kipnis argues, codifies sexist ideas about their innocence, purity, and helplessness. Another interpretation is that young women have decided, with a rather masculine sense of their own entitlement, that they need not smile indulgently upon their transgressors. But Kipnis is right in her broader point: the bureaucratization of our erotic lives is no path to liberation.

by S.C. Cornell, New Yorker | Read more:
Image: Michelle Mildenberg Lara

Thursday, January 8, 2026

Fossil Words and the Road to Damascus


Caravaggio, The Conversion of Saint Paul
via:
[ed. Fossil word(s). When a word is broadly obsolete but remains in use due to its presence in an idiom or phrase. 

For example, I've always understood the phrase Road to Damascus to refer to a sort of epiphany or form of enlightenment (without knowing what it actually meant). Another example would be Crossing the Rubicon (a point of no return; or a decision with no turning back). Of course, these aren't outdated words/phrases as much as shorthand for mental laziness (or trite writing habits). Wikipedia provides a number of examples of actual fossil words, including "much ado about nothing" or "without further ado" (who uses ado in any other context these days?); or "in point", as in "a case in point", or "in point of fact". So, to help promote a little more clarity around here -- Road to Damascus:]
***
The conversion of Paul the Apostle was, according to the New Testament, an event in the life of Saul/Paul the Apostle that led him to cease persecuting early Christians and to become a follower of Jesus. Paul, who also went by Saul, was "a Pharisee of Pharisees" who "intensely persecuted" the followers of Jesus. Paul describes his life before conversion in his Epistle to the Galatians:
For you have heard of my previous way of life in Judaism, how intensely I persecuted the church of God and tried to destroy it. I was advancing in Judaism beyond many of my own age among my people and was extremely zealous for the traditions of my fathers...
As he neared Damascus on his journey, suddenly a light from heaven flashed around him. He fell to the ground and heard a voice say to him, "Saul, Saul, why do you persecute me?"

"Who are you, Lord?" Saul asked.

"I am Jesus, whom you are persecuting," he replied. "Now get up and go into the city, and you will be told what you must do."

The men traveling with Saul stood there speechless; they heard the sound but did not see anyone. Saul got up from the ground, but when he opened his eyes he could see nothing. So they led him by the hand into Damascus. For three days he was blind, and did not eat or drink anything.

— Acts 9:3–9

Wikipedia Style Guide

Many people edit Wikipedia because they enjoy writing; however, that passion can result in overlong composition. This reflects a lack of time or commitment to refine an effort through successively more concise drafts. With some application, natural redundancies and digressions can often be eliminated. Recall the venerable paraphrase of Pascal: "I made this so long because I did not have time to make it shorter." [Wikipedia: tl;dr]

Inverted pyramid

Some articles follow the inverted pyramid structure of journalism, which can be seen in news articles that get directly to the point. The main feature of the inverted pyramid is the placement of the most important information first, with importance decreasing as the article advances. Originally developed so that editors could cut from the bottom to fit an item into the available layout space, this style encourages brevity and prioritizes information, because many people expect to find important material early, and less important information later, where interest decreases. (...)

What Wikipedia is not

Wikipedia is not a manual, guidebook, textbook, or scientific journal. Articles and other encyclopedic content should be written in a formal tone. Standards for formal tone vary depending upon the subject matter but should usually match the style used in Featured- and Good-class articles in the same category. Encyclopedic writing has a fairly academic approach, while remaining clear and understandable. Formal tone means that the article should not be written using argot, slang, colloquialisms, doublespeak, legalese, or jargon that is unintelligible to an average reader; it means that the English language should be used in a businesslike manner (e.g. use "feel" or "atmosphere" instead of "vibes").

News style or persuasive writing

A Wikipedia article should not sound like a news article. Especially avoid bombastic wording, attempts at humor or cleverness, over-reliance on primary sources, editorializing, recentism, pull quotes, journalese, and headlinese.

Similarly, avoid persuasive writing, which has many of those faults and more of its own, most often various kinds of appeals to emotion and related fallacies. This style is used in press releases, advertising, editorial writing, activism, propaganda, proposals, formal debate, reviews, and much tabloid and sometimes investigative journalism. It is not Wikipedia's role to try to convince the reader of anything, only to provide the salient facts as best they can be determined, and the reliable sources for them.

Comparison of styles (...)

via: Wikipedia: Writing better articles
Image: Benjamin Busch/Import Projects - Wikimedia commons 
[ed. In celebration of Wikipedia Day (roughly Jan. 15). It's easy to forget how awesome this product really is: a massive, free, indispensable resource tended to by hundreds (thousands?) of volunteers simply for altruistic reasons. The best of the internet (and a reminder of what could have been). See also: Wikipedia:What Wikipedia is not]