Showing posts with label Technology. Show all posts

Wednesday, January 28, 2026

Why Even the Healthiest People Hit a Wall at Age 70

Can we currently determine how much of aging is down to lifestyle changes and interventions, and how much of it is basically your genetic destiny?

 

[Transcript:] We are constantly being bombarded with health and lifestyle advice at the moment. I feel like I cannot open my social media feeds without seeing adverts for supplements or diet plans or exercise regimes. And I think that this really is a distraction from the big goals of longevity science. This is a really difficult needle to thread when it comes to talking about this stuff because I'm a huge advocate for public health. I think if we could help people eat better, if we could help 'em do more exercise, if we could help 'em quit smoking, this would have enormous effects on our health, on our economies all around the world. But this sort of micro-optimization, these three-hour long health podcasts that people are digesting on a daily basis these days, I think we're really majoring in the minors. We're trying to absolutely eke out every last single thing when it comes to living healthily. And I think the problem is that there are real limits to what we can do with health advice. 

So for example, there was a study that came out recently that was all over my social media feeds. And the headline was that by eating the best possible diet, you can double your chance of aging healthily. But I decided to dig into the results table. The healthiest diet was the one scoring highest on something called the Alternative Healthy Eating Index, or AHEI. And even among the people who were sticking most closely to this best diet, according to this study, the top 20% of adherence to the AHEI, only 13.6% of them made it to 70 years old without any chronic diseases. That means that over 85% of the people sticking to the best diet, according to this study, got to the age of 70 with at least something wrong with them. And that shows us that optimizing diet can only go so far. 
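As a quick sanity check on the arithmetic above, a minimal sketch using only the figures quoted from the study:

```python
# The figure below is the one quoted above from the AHEI study:
# of the top 20% of adherents, 13.6% reached 70 disease-free.
top_adherence_disease_free_at_70 = 0.136

# Everyone else in that top-adherence group reached 70 with at
# least one chronic disease -- the complement of that share:
with_something_wrong = 1 - top_adherence_disease_free_at_70
print(f"{with_something_wrong:.1%}")  # → 86.4%
```

This is just a restatement of the numbers in the transcript, which is where the "over 85%" claim comes from.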

We're not talking about immortality or living to 120 here. If you wanna be 70 years old and in good enough health to play with your grandkids, I cannot guarantee that you can do that no matter how good your diet is. And that's why we need longevity medicine to help keep people healthier for longer. And actually, I think even this idea of 120, 150-year-old lifespans, you know, immortality even as a word that's often thrown around, I think the main thing we're trying to do is get people to 80, 90 years old in good health. 'cause we already know that most people alive today, when they reach that age, are unfortunately gonna be frail. They're probably gonna be suffering from two or three or four different diseases simultaneously. And what we wanna do is try and keep people healthier for longer. And by doing that, they probably will live longer but kind of as a side effect. 

If you look at photographs of people from the past, they often look older than people in the present day who are the same age. And part of that is the terrible fashion choices that people made in the past, and we can look back and, you know, understand the mistakes they made with hindsight. But part of it actually is aging biology. I think the fact that people can be different biological ages at the same chronological age is something that's really quite intuitive. All of us know people who've waltzed into their 60s looking great and, you know, basically as fit as someone in their 40s or 50s. And we know similar people who have also gone into their 60s, but they're looking haggard, they've got multiple different diseases, they're already struggling through life. 

In the last decade, scientists have come up with various measures of what's called biological age as distinct from chronological age. So your chronological age is just how many candles there are on your birthday cake. And obviously, you know, most of us are familiar with that. But the idea of biological age is to look inside your cells, look inside your body, and work out how old you are on a biological level. Now we aren't perfect at doing this yet, but we do have a variety of different measures. We can use blood tests, we can use what are called epigenetic tests, or we can do things that are far more basic and functional, like measuring how strong your grip is, which declines with age. And by comparing the value of something like your grip strength to an average person of a given age, we can assign you a biological age value. And I think the ones that are getting the most buzz at the moment within the scientific community, but also all around the internet, are these epigenetic age tests. 
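The "compare your measurement to the average person of a given age" idea can be sketched in a few lines. This is a hypothetical illustration: the reference values below are made up for the example, not real cohort data, and real biological-age clocks combine many markers rather than one.

```python
# Hypothetical sketch: assign a "biological age" from grip strength by
# comparing against an assumed table of population averages.
# These reference numbers are invented for illustration only.

# Assumed average grip strength (kg) at each chronological age,
# declining with age.
reference = {40: 42.0, 50: 38.0, 60: 33.0, 70: 27.0, 80: 21.0}

def biological_age(grip_kg: float) -> float:
    """Return the age whose average grip strength best matches the
    measured value, interpolating linearly between reference ages."""
    ages = sorted(reference)
    # Clamp to the ends of the reference range.
    if grip_kg >= reference[ages[0]]:
        return ages[0]
    if grip_kg <= reference[ages[-1]]:
        return ages[-1]
    # Find the two ages whose averages bracket the measurement.
    for lo, hi in zip(ages, ages[1:]):
        g_lo, g_hi = reference[lo], reference[hi]
        if g_hi <= grip_kg <= g_lo:
            # Linear interpolation between the bracketing ages.
            frac = (g_lo - grip_kg) / (g_lo - g_hi)
            return lo + frac * (hi - lo)

# A grip of 35.5 kg sits halfway between the assumed 50- and
# 60-year-old averages, so it maps to a biological age of 55.
print(biological_age(35.5))  # → 55.0
```

Epigenetic clocks follow the same basic logic, but with hundreds of methylation marks fed into a statistical model instead of a single functional measure.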

So the way that this works is that you'll take a blood test or a saliva sample and scientists will measure something about your epigenome. So the genome is your DNA, it's the instruction manual of life. And the epigenome is a layer of chemistry that sits on top of your genome. If you think of your DNA as that instruction manual, then the epigenome is the notes in the margin. It's the little sticky notes that have been stuck on the side and they tell the cell which DNA to use at which particular time. And we know that there are changes to this epigenome as you get older. And so by measuring the changes in the epigenome, you can assign someone a biological age. 

At the moment, these epigenetic clocks are a really great research tool. They're really deepening our understanding of biological aging in the lab. I think the problem with these tests as applied to individuals is we don't know enough about exactly what they're telling us. We don't know what these individual changes in epigenetic marks mean. We know they're correlated with age, but what we don't know is if they're causally related. And in particular, if you intervene, if you make a change in your lifestyle, if you start taking a certain supplement and that reduces your biological age, we don't know whether that actually means you're gonna live longer, or whether it means you're gonna stay healthier for longer, or whether you've done something that's kind of adjacent to that. And so we need to do more research to understand if we can causally impact these epigenetic measures. (...)

Machine learning and artificial intelligence are gonna be hugely, hugely important in understanding the biology of aging. Because the body is such a complicated system that in order to really understand it, we're gonna need these vast computer models to try and decode the data for us. The challenge is that what machine learning can do at the moment is it can identify correlations. So it can identify things that are associated with aging, but it can't necessarily tell us whether one thing is causing another. So for example, in the case of these epigenetic clocks, the parts of the epigenome that change with age have been identified because they correlate. But what we don't know is if you intervene in any one of these individual epigenetic marks, if you move it in the direction of something younger, does that actually make people healthier? And so what we need to do is more experiments where we try and work out if we can intervene in these epigenetic, in these biological clocks, can we make people live healthier for longer? 

Over the last 10 or 15 years, scientists have really started to understand the fundamental underlying biology of the aging process. And they broke this down into 12 what are called hallmarks of aging. One of those hallmarks is the accumulation of senescent cells. Now senescent is just a biological technical term for old. These are cells that accumulate in all of our bodies as the years go by. And scientists have noticed that these cells seem to drive a range of different diseases as we get older. And so the idea was what if we could remove these cells and leave the rest of the cells of the body intact? Could that slow down or even partially reverse the aging process? And scientists identified drugs called senolytic drugs. 

These are drugs that kill those senescent cells and they tried them out in mice and they do indeed effectively make the mice biologically younger. So if you give mice a course of senolytic drugs, it removes those senescent cells from their body. And firstly, it makes them live a bit longer. That's a good thing; if you're slowing down the aging process, that's the basic thing you want to see. But it's not dragging out that period of frailty at the end of life. It's keeping the mice healthier for longer so they get less cancer, they get less heart disease, they get fewer cataracts. The mice are also less frail. They basically send the mice to a tiny mouse-scale gym in these experiments. And the mice that have been given the drugs, they can run further and faster on the mousey treadmills that they try them out on. 

It also seems to reverse some of the cognitive effects that come along with aging. So if you put an older mouse in a maze, it's often a bit anxious, doesn't really want to explore. Whereas a younger mouse is desperate to, you know, run around and find the cheese or whatever it is mice do in mazes. And by giving them these senolytic drugs, you can unlock some of that youthful curiosity. And finally, these mice just look great. You do not need to be an expert mouse biologist to see which one has had the pills and which one hasn't. They've got thicker fur. They've got plumper skin. They've got brighter eyes. They've got less fat on their bodies. And what this shows us is that by targeting the fundamental processes of aging, by identifying something like senescent cells that drives a whole range of age-related problems, we can hit much, perhaps even all, of the aging process with a single treatment. 

Senescent cells are, of course, only one of these 12 hallmarks of aging. And I think in order to both understand and treat the aging process, we're potentially gonna need treatments for many, perhaps even all, of those hallmarks. There's never gonna be a single magic pill that can just make you live forever. Aging is much, much more complicated than that. But by understanding this relatively short list of underlying processes, maybe we can come up with 12 or 20 different treatments that can have a really big effect on how long we live. 

One of the most exciting ideas in longevity science at the moment is what's called cellular reprogramming. I sometimes describe this as a treatment that has fallen through a wormhole from the future. This is the idea that we can reset the biological clock inside of our cells. And the idea first came about in the mid 2000s because there was a scientist called Shinya Yamanaka who was trying to find out how to turn regular adult body cells all the way back to the very beginning of their biological existence. And Yamanaka and his team were able to identify four genes that you could insert into a cell and turn back that biological clock. 

Now, he was interested in this from the point of view of creating stem cells, a cell that can create any other kind of cell in the body, which we might be able to use for tissue repair in future. But scientists also noticed that, as well as turning back the developmental clock on these cells, it also turns back the aging clock: cells that are given these four Yamanaka factors actually are biologically younger than cells that haven't had the treatment. And so what scientists decided to do was insert these Yamanaka factor genes into mice. 

Now if you do this in a naive way, so these genes are active all the time, it's actually very bad news for the mice, unfortunately. Because these stem cells, although they're very powerful in terms of what kind of cell they can become, they are useless at being a liver cell or being a heart cell. And so the mice very quickly died of organ failure. But you can activate these genes only transiently, and the way that scientists did it the first time successfully was essentially to activate them at weekends. So they produced these genes in such a way that they could be activated with a drug, and they gave the mice the drug for two days of the week, and then gave them five days off so the Yamanaka factors were then suppressed. They found that this was enough to turn back the biological clock in those cells, but without turning back the developmental clock and turning them into these stem cells. And that meant the mice stayed a little bit healthier. We now know that they can live a little bit longer with this treatment too.

Now the real challenge is that this is a gene therapy treatment. It involves delivering four different genes to every single cell in your body. The question is can we, with our puny 2020s biotechnology, make this into a viable treatment, a pill even, that we can actually use in human beings? I really think this idea of cellular reprogramming appeals to a particular tech billionaire sort of mentality. The idea that we can go in and edit the code of life and reprogram our biological age, it's a hugely powerful concept. And if this works, the fact that you can turn back the biological clock all the way to zero, this really is a very, very cool idea. And that's what's led various different billionaires from the Bay Area to invest huge, huge amounts of money in this. 

Altos Labs is the biggest so-called startup in this space. And I wouldn't really call it a startup 'cause it's got funding of $3 billion from amongst other people, Jeff Bezos, the founder of Amazon. Now I'm very excited about this because I think $3 billion is enough to have a good go and see if we can turn this into a viable human treatment. My only concern is that epigenetics is only one of those hallmarks of aging. And so it might be the case that we solve aging inside our individual cells, but we leave other parts of the aging process intact. (...)

Probably the quickest short-term wins in longevity science are going to be repurposed existing drugs. And the reason for this is because we spent many, many years developing these drugs. We understand how they work in humans. We understand a bit about their safety profile. And because these molecules already exist, we've just tried them out in mice, in, you know, various organisms in the lab and found that a subset of them do indeed slow down the aging process. The first trial of a longevity drug that was proposed in humans was for a drug called metformin, which is a pre-existing drug that we prescribe actually for diabetes in this case, and has some indications that it might slow down the aging process in people. (...)

I think one of the ones that's got the most buzz around it at the moment is a drug called rapamycin. This is a drug that's been given for organ transplants. It's sometimes used to coat stents, which are these little things that you stick in the arteries around your heart to expand them if you've got a narrowing of those arteries that's restricting the blood supply. But we also know from experiments in the lab that it can make all kinds of different organisms live longer, everything from single-cell yeast, to worms, to flies, to mice, and, as one of the latest results, to marmosets, which are primates that are evolutionarily quite close to us. 

Rapamycin has this really incredible story. It was first isolated from bacteria in a soil sample from Easter Island, which is known as Rapa Nui in the local Polynesian language. That's where the drug gets its name. And when it was first isolated, it was discovered to be antifungal. It could stop fungal cells from growing. So that was what we thought we'd use it for initially. But when scientists started playing around with it in the lab, they realized it didn't just stop fungal cells from growing. It also stopped many other kinds of cells as well, up to and including human cells. And so the slight disadvantage was that if you used it as an antifungal agent, it would also stop your immune cells from being able to divide, which would obviously be a bit of a counterintuitive way to try and treat a fungal disease. So scientists decided to use it as an immune suppressant instead. It can stop your immune system from going haywire when you get an organ transplant, for example, and rejecting that new organ. 

It was also developed as an anti-cancer drug, because if it can stop cells dividing, well, cancer is cells dividing out of control. But the way that rapamycin works is it targets a fundamental central component of cellular metabolism. And we noticed that that seemed to be very, very important in the aging process. And so by tamping it down by less than you would do in a patient where you're trying to suppress their immune system, rather than stopping the cell dividing entirely, you can make it enter a state where it's much more efficient in its use of resources. It starts this process called autophagy, which is Greek for self-eating. And that means it consumes old damaged proteins, and then recycles them into fresh new ones. And that actually is a critical process in slowing down aging, biologically speaking. And in 2009, we found out for the first time that by giving it to mice late in life, you could actually extend their remaining lifespan. They live 10 or 15% longer. And this was a really incredible result. 

This was the first time a drug had been shown to slow down aging in mammals. And accordingly, scientists have become very, very excited about it. And we've now tried it in loads of different contexts, in loads of different animals and organisms, at loads of different times in life. You can even wait until very late in a mouse's lifespan to give it rapamycin and you still see most of that same lifespan extension effect. And that's fantastic news potentially for us humans because not all of us, unfortunately, can start taking a drug from birth 'cause most of us were born quite a long time ago. But rapamycin still works even if you give it to mice who are the equivalent of 60 or 70 years old in human terms. And that means that for those of us who are already aged a little bit, rapamycin could still help us potentially. And there are already biohackers out there trying this out for themselves, hopefully with the help of a doctor to make sure that they're doing everything as safely as possible to try and extend their healthy life. And so the question is: should we do a human trial of rapamycin to find out if it can slow down the aging process in people as well? (...)

We've already got dozens of ideas in the lab for ways to slow down, maybe even reverse the aging of things like mice and cells in a dish. And that means we've got a lot of shots on goal. I think it'll be wildly unlucky if none of the things that slow down aging in the lab actually translate to human beings. That doesn't mean that most of them will work, probably most of them won't, but we only need one or two of them to succeed and really make a big difference. And I think a great example of this is GLP-1 drugs, the Ozempics, the things that are allowing people to suddenly lose a huge amount of weight. We've been looking for decades for these weight loss drugs, and now we've finally found them. It's shown that these breakthroughs are possible, they can come out of left field. And all we need to do in some cases is a human trial to find out if these drugs actually work in people. 

And what that means is that, you know, the average person on planet earth is under the age of 40. They've probably got 40 or 50 years of life expectancy left depending on the country that they live in. And that's an awful lot of time for science to happen. And if then in the next 5 or 10 years, we do put funding toward these human trials, we might have those first longevity drugs that might make you live one or two or five years longer. And that gives scientists even more time to develop the next treatment. And if we think about some more advanced treatments, not just drugs, things like stem cell therapy or gene therapy, those things can sound pretty sci-fi. But actually, we know that these things are already being deployed in hospitals and clinics around the world. They're being deployed for specific serious diseases, for example, where we know that a single gene can be a problem and we can go in and fix that gene and give a child a much better chance at a long, healthy life. 

But as we learn how these technologies work in the context of these serious diseases, we're gonna learn how to make them effective. And most importantly, we're gonna learn how to make them safe. And so we could imagine doing longevity gene edits in human beings, perhaps not in the next five years, but I think it'll be foolish to bet against it happening in the next 20 years, for example. 

by Andrew Steele, The Big Think |  Read more:
Image: Yamanaka factors via:
[ed. See also: Researchers Are Using A.I. to Decode the Human Genome (NYT).]

Friday, January 23, 2026

211-mile Ambler Road Project Through Gates of the Arctic National Park Gets Approval

Trump Sacrifices Alaska Wilderness to Help AI Companies

Trump’s approval of the Ambler Road Project is a reversal for the federal government. Only last year, the Bureau of Land Management released its Record of Decision selecting “No Action” on Ambler Road, in cooperation with Alaska tribal councils, the Environmental Protection Agency, the U.S. Fish and Wildlife Service, and many others.

In the document, the impact on fish habitat, water and air quality, disruption of groundwater flow, hazardous materials from spills, and the negative impact on the Western Arctic caribou herd, which has been steadily declining since 2017, were all cited as reasons for denial. The Record of Decision also stated that the Ambler Road Project would forever alter the culture and traditional practices of Alaska Native communities, who have lived and thrived in the region for centuries.

by Gavin Feek, The Intercept |  Read more:
Image: Bonnie Jo Mount/The Washington Post via Getty Images
[ed. I used to permit/mitigate mine development in Alaska. Imagine what a 211-mile gravel road, 30+ years of year-round maintenance, and relentless heavy truck/support traffic will do to the area, its wildlife and nearby native communities (not to mention blasting a massive mining crater, constructing sprawling support facilities, airstrip(s), and discharging millions of gallons of wastewater (from somewhere, to... somewhere).]

AI: Practical Advice for the Worried

A Word On Thinking For Yourself

There are good reasons to worry about AI. This includes good reasons to worry about AI wiping out all value in the universe, or AI killing everyone, or other similar very bad outcomes.

There are also good reasons that AGI, or otherwise transformational AI, might not come to pass for a long time.

As I say in the Q&A section later, I do not consider imminent transformational AI inevitable in our lifetimes: Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.

There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still.

That does not mean you should adopt my position, or anyone else's position, or mostly use social cognition from those around you, on such questions, no matter what those methods would tell you. If this is something that is going to impact your major life decisions, or keep you up at night, you need to develop your own understanding and model, and decide for yourself what you predict. (...)

Overview

There is some probability that humanity will create transformational AI soon, for various definitions of soon. You can and should decide what you think that probability is, and conditional on that happening, your probability of various outcomes.

Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some such changes are predictable. Others not.

None of this is new. We have long lived under the very real threat of potential nuclear annihilation. The employees of the RAND corporation, in charge of nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them. Given what we know now about the close calls of the cold war, and what they knew at the time, perhaps this was not so crazy a perspective.

Should this imminent small but very real risk radically change your actions? I think the answer here is a clear no, unless your actions are relevant to nuclear war risks, either personally or globally, in some way, in which case one can shut up and multiply.

This goes back far longer. For much longer than that, various religious folks have expected Judgment Day to arrive soon, often with a date attached. Often they made poor decisions in response to this, even given their beliefs.

There are some people that talk or feel this same way about climate change, as an impending inevitable extinction event for humanity.

Under such circumstances, I would center my position on a simple claim: Normal Life is Worth Living, even if you think P(doom) relatively soon is very high. (...)

More generally, in terms of helping: Burning yourself out, stressing yourself out, tying yourself up in existential angst all are not helpful. It would be better to keep yourself sane and healthy and financially intact, in case you are later offered leverage. Fighting the good fight, however doomed it might be, because it is a far, far better thing to do, is also a fine response, if you keep in mind how easy it is to end up not helping that fight. But do that while also living a normal life, even if that might seem indulgent. You will be more effective for it, especially over time. (...)

On to individual questions to flesh all this out.

Q&A

Q: Should I still save for retirement?
Short Answer: Yes.
Long Answer: Yes, to most (but not all) of the extent that this would otherwise be a concern and action of yours in the ‘normal’ world. It would be better to say ‘build up asset value over time’ than ‘save for retirement’ in my model. Building up assets gives you resources to influence the future on all scales, whether or not retirement is even involved. I wouldn’t get too attached to labels.

Remember that while raiding retirement accounts is not something one should do lightly (none of this is lightly), you can do it with what in context is a modest penalty in an extreme enough ‘endgame’ situation. It does not even take that many years for the expected value of the compounded tax advantages to exceed the withdrawal penalty; the cost of emptying the account, should you need to do that, is only 10% of funds and about a week in the United States (plus now having to pay taxes on it). In some extreme future situations, having that cash would be highly valuable. None of which suggests now is the time to empty it, or to not build it up.
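The break-even claim above can be made concrete with a toy calculation. Every number here (the return, the tax drag, the penalty rate) is an assumption for illustration only, not a figure from the post and not financial advice:

```python
# Hypothetical sketch: how many years of tax-free compounding does a
# retirement account need before it beats an annually-taxed account
# even after paying a 10% early-withdrawal penalty?
# All rates below are invented assumptions for illustration.

PENALTY = 0.10   # early-withdrawal penalty on the retirement account
RETURN = 0.07    # assumed annual return for both accounts
TAX_DRAG = 0.25  # assumed tax rate applied to the taxable account's gains

def break_even_years() -> int:
    tax_advantaged = 1.0  # grows untaxed, penalized 10% on early exit
    taxable = 1.0         # gains taxed every year
    years = 0
    while tax_advantaged * (1 - PENALTY) <= taxable:
        tax_advantaged *= 1 + RETURN
        taxable *= 1 + RETURN * (1 - TAX_DRAG)
        years += 1
    return years

print(break_even_years())
```

With these made-up rates the crossover comes after well under a decade, which is the sense in which "it does not even take that many years"; real break-evens depend on your actual returns and tax situation.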

The case for saving money does not depend on expecting a future ‘normal’ world. Which is good, because even without AI the future world is likely to not be all that ‘normal’.

Q: Should I take on a ton of debt intending to never have to pay it back?

Short Answer: No, except for a mortgage.

Long Answer: Mostly no, except for a mortgage. Save your powder. See my post On AI and Interest Rates for an extended treatment of this question - I feel that is a definitive answer to the supposed ‘gotcha’ question of why doomers don’t take on lots of debt. Taking on a bunch of debt is a limited resource, and good ways to do it are even more limited for most of us. Yes, where you get the opportunity it would be good to lock in long borrow periods at fixed rates if you think things are about to get super weird. But if your plan is ‘the market will realize what is happening and adjust the value of my debt in time for me to profit’ that does not seem, to me, like a good plan. Nor does borrowing now much change your actual constraints on where you run out of money.

Does borrowing money that you have to pay back in 2033 mean you have more money to spend? That depends. What is your intention if 2033 rolls around and the world hasn’t ended? Are you going to pay it back? If so then you need to prepare now to be able to do that. So you didn’t accomplish all that much.

You need very high confidence in High Weirdness Real Soon Now before you can expect to get net rewarded for putting your financial future on quicksand, where you are in real trouble if you get the timing wrong. You also need a good way to spend that money to change the outcome.

Yes, there is a level of confidence in both speed and magnitude, combined with a good way to spend, that would change that, and that I do not believe is warranted. One must notice that you need vastly less certainty than this to be shouting about these issues from the rooftops, or devoting your time to working on them.

Eliezer’s position, as per his most recent podcast is something like ‘AGI could come very soon, seems inevitable by 2050 barring civilizational collapse, and if it happens we almost certainly all die.’ Suppose you really actually believed that. It’s still not enough to do much with debt unless you have a great use of money - there’s still a lot of probability mass that the money is due back while you’re still alive, potentially right before it might matter.

Yes, also, this changes if you think you can actually change the outcome for the better by spending money now, money loses impact over time, so your discount factor should be high. That however does not seem to be the case that I see being made.

Q: Does buying a house make sense?

A: Maybe. It is an opportunity to borrow money at low interest rates with good tax treatment. It also potentially ties up capital and ties you down to a particular location, and is not as liquid as some other forms of capital. So ask yourself how psychologically hard it would be to undo that. In terms of whether it looks like a good investment in a world with useful but non-transformational AI, an AI could figure out how to more efficiently build housing, but would that cause more houses to be built?

Q: Does it make sense to start a business?

A: Yes, although not because of AI. It is good to start a business. Of course, if the business is going to involve AI, carefully consider whether you are making the situation worse.

Q: Does It Still Make Sense to Try and Have Kids?

Short Answer: Yes.

Long Answer: Yes. Kids are valuable and make the world and your own world better, even if the world then ends. I would much rather exist for a bit than never exist at all. Kids give you hope for the future and something to protect, get you to step up. They get others to take you more seriously. Kids teach you many things that help one think better about AI. You think they take away your free time, but there is a limit to how much creative work one can do in a day. This is what life is all about. Missing out on this is deeply sad. Don’t let it pass you by.

Is there a level of working directly on the problem, or being uniquely positioned to help with the problem, where I would consider changing this advice? Yes, there are a few names where I think this is not so clear, but I am thinking of a very small number of names right now, and yours is not one of them.

You can guess how I would answer most other similar questions. I do not agree with Buffy Summers that the hardest thing in this world is to live in it. I do think she knows better than any of us that not living in this world is not the way to save it.

Q: Should I talk to my kids about how there’s a substantial chance they won’t get to grow up?

A: I would not (and will not) hide this information from my kids, any more than I would hide the risk from nuclear war, but ‘you may not get to grow up’ is not a helpful thing to say to (or to emphasize to) kids. Talking to your kids about this (in the sense of ‘talk to your kids about drugs’) is only going to distress them to no purpose. While I don’t believe in hiding stuff from kids, I also don’t think this is something it is useful to hammer into them. Kids should still get to be and enjoy being kids. (...)

Q: Should I just try to have a good time while I can?

A: No, because my model says that this doesn’t work. It is empty. You can have fun for a day, a week, a month, perhaps a year, but after a while it rings hollow, feels empty, and your future will fill you with dread. Certainly it makes sense to shift this on the margin, get your key bucket list items in early, put a higher marginal priority on fun - even more so than you should have been doing anyway. But I don’t think my day-to-day life experience would improve for very long by taking this kind of path. Then again, each of us is different.

That all assumes you have ruled out attempting to improve our chances. Personally, even if I had to go down, I’d rather go down fighting. Insert rousing speech here.

Q: How Long Do We Have? What is the Timeline?

Short Answer: Unknown. Look at the arguments and evidence. Form your own opinion.

Long Answer: High uncertainty about when this will happen if it happens, whether or not one has high uncertainty about whether it happens at all within our lifetimes. Eliezer’s answer was that he would be very surprised if it didn’t happen by 2050, but that within that range little would surprise him and he has low confidence. Others have longer or shorter means and medians in their timelines. Mine are substantially longer and less confident than Eliezer’s. This is a question you must decide for yourself. The key is that there is uncertainty, so lots of different scenarios matter.

by Zvi Mowshowitz, Don't Worry About the Vase |  Read more:
Image: via Linkedin Image Generator
[ed. See also: The AI doomers feel undeterred (MIT).]

Thursday, January 22, 2026

ChatGPT Self Portrait

[ed. Do AIs have feelings?]

@gmltony: Go to your ChatGPT and send this prompt: “Create an image of how I treat you”. Share your image result.

via: Zvi Mowshowitz (Don't Worry About the Vase)
[ed. Yikes. It does make one think a bit more about the question of AI rights and legal personhood. More at the link.]

Tuesday, January 20, 2026

It's Not Normal

Samantha: This town has a weird smell that you're all probably used to…but I'm not.
Mrs Krabappel: It'll take you about six weeks, dear. 
-The Simpsons, "Bart's Friend Falls in Love," S3E23, May 7, 1992
We are living through weird times, and they've persisted for so long that you probably don't even notice it. But these times are not normal.

Now, I realize that this covers a lot of ground, and without detracting from all the other ways in which the world is weird and bad, I want to focus on one specific and pervasive and awful way in which this world is not normal, in part because this abnormality has a defined cause, a precise start date, and an obvious, actionable remedy.

6 years, 5 months and 22 days after Fox aired "Bart's Friend Falls in Love," Bill Clinton signed a new bill into law: the Digital Millennium Copyright Act of 1998 (DMCA).

Under Section 1201 of the DMCA, it's a felony to modify your own property in ways that the manufacturer disapproves of, even if your modifications accomplish some totally innocuous, legal, and socially beneficial goal. Not a little felony, either: DMCA 1201 provides for a five-year sentence and a $500,000 fine for a first offense.

Back when the DMCA was being debated, its proponents insisted that their critics were overreacting. They pointed to the legal barriers to invoking DMCA 1201, and insisted that these new restrictions would only apply to a few marginal products in narrow ways that the average person would never even notice.

But that was obvious nonsense, obvious even in 1998, and far more obvious today, more than a quarter-century on. In order for a manufacturer to criminalize modifications to your own property, they have to satisfy two criteria: first, they must sell you a device with a computer in it; and second, they must design that computer with an "access control" that you have to work around in order to make a modification.

For example, say your toaster requires that you scan your bread before it will toast it, to make sure that you're only using a special, expensive kind of bread that kicks back a royalty to the manufacturer. If the embedded computer that does the scanning ships from the factory with a program that is supposed to prevent you from turning off the scanning step, then it is a felony to modify your toaster to work with "unauthorized bread".

If this sounds outlandish, then a) You definitely didn't walk the floor at CES last week, where there were a zillion "cooking robots" that required proprietary feedstock; and b) You haven't really thought hard about your iPhone (which will not allow you to install software of your choosing).

But back in 1998, computers – even the kind of low-powered computers that you'd embed in an appliance – were expensive and relatively rare. No longer! Today, manufacturers source powerful "System on a Chip" (SoC) processors at prices ranging from $0.25 to $8. These are full-fledged computers, easily capable of running an "access control" that satisfies DMCA 1201.

Likewise, in 1998, "access controls" (also called "DRM," "technical protection measures," etc) were a rarity in the field. That was because computer scientists broadly viewed these measures as useless. A determined adversary could always find a way around an access control, and they could package up that break as a software tool and costlessly, instantaneously distribute it over the internet to everyone in the world who wanted to do something that an access control impeded. Access controls were a stupid waste of engineering resources and a source of needless complexity and brittleness.

But – as critics pointed out in 1998 – chips were obviously going to get much cheaper, and if the US Congress made it a felony to bypass an access control, then every kind of manufacturer would be tempted to add some cheap SoCs to their products so they could add access controls and thereby felonize any uses of their products that cut into their profits. Basically, the DMCA offered manufacturers a bargain: add a dollar or two to the bill of materials for your product, and in return, the US government will imprison any competitors who offer your customers a "complementary good" that improves on it.

It's even worse than this: another thing that was obvious in 1998 was that once a manufacturer added a chip to a device, they would probably also figure out a way to connect it to the internet. Once that device is connected to the internet, the manufacturer can push software updates to it at will, which will be installed without user intervention. What's more, by using an access control in connection with that over-the-air update mechanism, the manufacturer can make it a felony to block its updates.

Which means that a manufacturer can sell you a device and then mandatorily update it at a later date to take away its functionality, and then sell that functionality back to you as a "subscription".

A thing that keeps happening.

And happening.

And happening.

In fact, it happens so often I've coined a term for it, "The Darth Vader MBA" (as in, "I'm altering the deal. Pray I don't alter it any further").

Here's what this all means: any manufacturer who devotes a small amount of engineering work and incurs a small hardware expense can extinguish private property rights altogether.

What do I mean by private property? Well, we can look to Blackstone's 1753 treatise:
The right of property; or that sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe.
You can't own your iPhone. If you take your iPhone to Apple and they tell you that it is beyond repair, you have to throw it away. If the repair your phone needs involves "parts pairing" (where a new part won't be recognized until an Apple technician "initializes" it through a DMCA-protected access control), then it's a felony to get that phone fixed somewhere else. If Apple tells you your phone is no longer supported because they've updated their OS, then it's a felony to wipe the phone and put a different OS on it (because installing a new OS involves bypassing an "access control" in the phone's bootloader). If Apple tells you that you can't have a piece of software – like ICE Block, an app that warns you if there are nearby ICE killers who might shoot you in the head through your windshield, which Apple has barred from its App Store on the grounds that ICE is a "protected class" – then you can't install it, because installing software that isn't delivered via the App Store involves bypassing an "access control" that checks software to ensure that it's authorized (just like the toaster with its unauthorized bread).

It's not just iPhones: versions of this play out in your medical implants (hearing aid, insulin pump, etc); appliances (stoves, fridges, washing machines); cars and ebikes; set-top boxes and game consoles; ebooks and streaming videos; small appliances (toothbrushes, TVs, speakers), and more.

Increasingly, things that you actually own are the exception, not the rule.

And this is not normal. The end of ownership represents an overturn of a foundation of modern civilization. The fact that the only "people" who can truly own something are the transhuman, immortal colony organisms we call "Limited Liability Corporations" is an absolutely surreal reversal of the normal order of things.

It's a reversal with deep implications: for one thing, it means that you can't protect yourself from raids on your private data or ready cash by adding privacy blockers to your device, which would make it impossible for airlines or ecommerce sites to guess about how rich/desperate you are before quoting you a "personalized price".

It also means you can't stop your device from leaking information about your movements, or even your conversations – Microsoft has announced that it will gather all of your private communications and ship them to its servers for use by "agentic AI": (...)

Microsoft has also confirmed that it provides US authorities with warrantless, secret access to your data.

This is deeply abnormal. Sure, greedy corporate control freaks weren't invented in the 21st century, but the laws that let those sociopaths put you in prison for failing to arrange your affairs to their benefit – and your own detriment – are.

But because computers got faster and cheaper over decades, the end of ownership has had an incremental rollout, and we've barely noticed that it's happened. Sure, we get irritated when our garage-door opener suddenly requires us to look at seven ads every time we use the app that makes it open or close.

But societally, we haven't connected that incident to this wider phenomenon. It stinks here, but we're all used to it.

It's not normal to buy a book and then not be able to lend it, sell it, or give it away. Lending, selling and giving away books is older than copyright. It's older than publishing. It's older than printing. It's older than paper. It is fucking weird (and also terrible) (obviously) that there's a new kind of very popular book that you can go to prison for lending, selling or giving away.

We're just a few cycles away from a pair of shoes that can figure out which shoelaces you're using, or a dishwasher that can block you from using third-party dishes.

It's not normal, and it has profound implications for our security, our privacy, and our society. It makes us easy pickings for corporate vampires who drain our wallets through the gadgets and tools we rely on. It makes us easy pickings for fascists and authoritarians who ally themselves with corporate vampires by promising them tax breaks in exchange for collusion in the destruction of a free society.

I know that these problems are more important than whether or not we think this is normal. But still. It. Is. Just. Not. Normal.

by Cory Doctorow, Pluralistic |  Read more:
Image: uncredited
[ed. Anything labeled 'smart' is usually suspect. What's particularly dangerous is if successive generations fall prey to what conservation biology calls shifting baseline syndrome (forgetting or never really missing something that's been lost, so we don't grieve or fight to restore it). For a deep dive into why everything keeps getting worse, see Mr. Doctorow's new book: Enshittification: Why Everything Suddenly Got Worse and What to Do About It (Farrar, Straus and Giroux, October 7, 2025).]

Monday, January 19, 2026

The Boring Reason We Don't Have $7 Rideshares

New York, Baltimore, and DC have a rideshare app called Empower that charges 20-40% less than Uber. Drivers like it too: they keep 100% of the fare and pay a flat monthly fee instead.

The most common fare I’ve paid on Empower over the last six months is $7.65.

For a recent trip from downtown to the airport, Uber wanted $32. Empower wanted $17.25.


I use it constantly, and so do a lot of car-less people I know. That price difference is a pretty big deal!

For many, it can be the difference between getting to the clinic or skipping an appointment. Between getting a ride after a night shift or walking home alone after buses stop running.

DC is trying to shut Empower down, primarily over liability insurance. DC law requires $1 million in coverage per ride.

The $1 million requirement isn’t sized to typical accidents: even where $100,000 is the limit available for an insurance claim, 96% of personal auto claims settle below that.

The high ceiling shifts incentives: plaintiffs' attorneys have reason to pursue cases they'd otherwise drop and push for larger settlements. Fraud rings have emerged to exploit these policies. The American Transit Insurance Company, which focuses on NY rideshare insurance, estimates 60-70 percent of its claims are fraudulent. Uber recently filed racketeering lawsuits against networks of law firms and clinics allegedly staging fake accidents in New York, Florida, and California.

That $1 million requirement traces back to Uber’s early days. When the company was fighting for legality across America, taxi commissions called ridesharing dangerous. To win over skeptical politicians, Uber proposed $1 million in coverage, matching limousine services and interstate charter bus companies, not taxis. It became the national template. Had Uber aimed to match taxi limits, the mandates would be $100,000 to $300,000.

Now Uber is advocating to lower the $1 million mandates. The company (and its drivers) complain that insurance is around 30% of fares, particularly in states like California, New Jersey, and New York which also require additional $1 million uninsured motorist coverage and/or no-fault insurance. Even in DC, with very strong anti-fraud protections, the base $1 million requirement makes up about 5% of every fare—roughly a quarter of Empower’s advertised price advantage. (...)
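A quick back-of-the-envelope check on those figures (the fares and percentages come from the article above; the variable names and the decomposition itself are just an illustrative sketch):

```python
# Rough arithmetic behind "about 5% of every fare - roughly a quarter of
# Empower's advertised price advantage." Figures are from the article;
# the decomposition is illustrative, not an official cost breakdown.

uber_fare = 32.00     # recent downtown-to-airport quote via Uber
empower_fare = 17.25  # same trip via Empower

# Price advantage on this particular trip (larger than the advertised 20-40%)
advantage = 1 - empower_fare / uber_fare

# Base $1M insurance mandate is ~5% of each DC fare per the article;
# measured against the 20% low end of the advertised discount,
# that works out to about a quarter of the advantage.
insurance_share = 0.05
fraction_of_advantage = insurance_share / 0.20

print(round(advantage, 2))              # 0.46
print(round(fraction_of_advantage, 2))  # 0.25
```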

Empower shows people want options. The app doesn’t let you schedule rides in advance, store multiple cards, or earn airline miles. Drivers don’t always turn off their music. Empower’s not trying to target the same audience as Uber. But the New York Times estimates Empower handles 10% of DC’s rideshare market. People are comfortable with the rideshare industry’s scrappy options.

I think the core question is: now that society has accepted rideshare, should we revisit the rules that helped us get there?

Coverage of the potential shutdown rarely focuses on who stands to lose most: price-sensitive riders. Most coverage focuses on Empower’s lack of commercial insurance without explaining that the mandate is three to ten times higher than what taxis carry. Few explore whether or how Empower’s model actually differs: drivers can set their own prices. Drivers fund the platform through monthly fees rather than a cut of each fare. Drivers who get commercial insurance can also use it for private clients.

People now trust and rely on this mode of transportation. Ridesharing has become pseudo-infrastructure for car-less Americans and a tool against drunk driving. In areas of Houston where rideshare first rolled out, drunk driving incidents appear to have dropped 38%.

We should want rideshare to remain affordable, especially as we build the excellent public transit we need.

by Abi Olivera, Positive Sum |  Read more:
Image: uncredited
[ed. Learn something new every day. I'll certainly look into this new company. The pricing of Uber is getting crazy (I've never used Lyft). Unfortunately, expansion won't be easy. As noted: High mandates also act as a moat. In DC, becoming a licensed rideshare company requires a $5,000 application fee, a $250,000 security fee, and infrastructure for that $1 million coverage. You have to be well-capitalized before you serve your first rider. This is likely why we see few bare-bones apps or local competitors to turn to when Lyft and Uber are surging.]

Sunday, January 18, 2026

The Monkey’s Paw Curls

[ed. More than anyone probably wants to know (or can understand) about prediction markets.]

Isn’t “may you get exactly what you asked for” one of those ancient Chinese curses?

Since we last spoke, prediction markets have gone to the moon, rising from millions to billions in monthly volume.


For a few weeks in October, Polymarket founder Shayne Coplan was the world’s youngest self-made billionaire (now it’s some AI people). Kalshi is so accurate that it’s getting called a national security threat.

The catch is, of course, that it’s mostly degenerate gambling, especially sports betting. Kalshi is 81% sports by monthly volume. Polymarket does better - only 37% - but some of the remainder is things like this $686,000 market on how often Elon Musk will tweet this week - currently dominated by the “140 - 164 times” category.

(ironically, this seems to be a regulatory difference - US regulators don’t mind sports betting, but look unfavorably on potentially “insensitive” markets like bets about wars. Polymarket has historically been offshore, and so able to concentrate on geopolitics; Kalshi has been in the US, and so stuck mostly to sports. But Polymarket is in the process of moving onshore; I don’t know if this will affect their ability to offer geopolitical markets.)

Degenerate gambling is bad. Insofar as prediction markets have acted as a Trojan Horse to enable it, this is bad. Insofar as my advocacy helped make this possible, I am bad. I can only plead that it didn’t really seem plausible, back in 2021, that a presidential administration would keep all normal restrictions on sports gambling but also let prediction markets do it as much as they wanted. If only there had been some kind of decentralized forecasting tool that could have given me a canonical probability on this outcome!

Still, it might seem that, whatever the degenerate gamblers are doing, we at least have some interesting data. There are now strong, minimally-regulated, high-volume prediction markets on important global events. In this column, I previously claimed this would revolutionize society. Has it?


I don’t feel revolutionized. Why not?

The problem isn’t that the prediction markets are bad. There’s been a lot of noise about insider trading and disputed resolutions. But insider trading should only increase accuracy - it’s bad for traders, but good for information-seekers - and my impression is that the disputed resolutions were handled as well as possible. When I say I don’t feel revolutionized, it’s not because I don’t believe it when it says there’s a 20% chance Khamenei will be out before the end of the month. The several thousand people who have invested $6 million in that question have probably converged upon the most accurate probability possible with existing knowledge, just the way prediction markets should.

I actually like this. Everyone is talking about the protests in Iran, and it’s hard to gauge their importance, and knowing that there’s a 20% chance Khamenei is removed by February really does help to place them in context. The missing link seems to be between “it’s now possible to place global events in probabilistic context → society revolutionized”.

Here are some possibilities:

Maybe people just haven’t caught on yet? Most news sources still don’t cite prediction markets, even when many people would care about their outcome. For example, the Khamenei market hasn’t gotten mentioned in articles about the Iran protests, even though “will these protests succeed in toppling the regime?” is the obvious first question any reader would ask.

Maybe the problem is that probabilities don’t matter? Maybe there’s some State Department official who would change plans slightly over a 20% vs. 40% chance of Khamenei’s departure, or an Iranian official for whom that would mean the difference between loyalty and defection, and these people are benefiting slightly, but not enough that society feels revolutionized.

Maybe society has been low-key revolutionized and we haven’t noticed? Very optimistically, maybe there aren’t as many “obviously the protests will work, only a defeatist doomer traitor would say they have any chance of failing!” “no, obviously the protests will fail, you’re a neoliberal shill if you think they could work” takes as there used to be. Maybe everyone has converged to a unified assessment of probabilistic knowledge, and we’re all better off as a result.

Maybe Polymarket and Kalshi don’t have the right questions. Ask yourself: what are the big future-prediction questions that important disagreements pivot around? When I try this exercise, I get things like:
  • Will the AI bubble pop? Will scaling get us all the way to AGI? Will AI be misaligned?
  • Will Trump turn America into a dictatorship? Make it great again? Somewhere in between?
  • Will YIMBY policies lower rents? How much?
  • Will selling US chips to China help them win the AI race?
  • Will kidnapping Venezuela’s president weaken international law in some meaningful way that will cause trouble in the future?
  • If America nation-builds Venezuela, for whatever definition of nation-build, will that work well, or backfire?
Some of these are long-horizon, some are conditional, and some are hard to resolve. There are potential solutions to all these problems. But why worry about them when you can go to the moon on sports bets?

Annals of The Rulescucks

The new era of prediction markets has provided charming additions to the language, including “rulescuck” - someone who loses an otherwise-prescient bet based on technicalities of the resolution criteria.

Resolution criteria are the small print explaining what counts as the prediction market topic “happening”. For example, in the Khamenei example above, Khamenei qualifies as being “out of power” if:
…he resigns, is detained, or otherwise loses his position or is prevented from fulfilling his duties as Supreme Leader of Iran within this market's timeframe. The primary resolution source for this market will be a consensus of credible reporting.
You can imagine ways this definition departs from an exact common-sensical concept of “out of power” - for example, if Khamenei gets stuck in an elevator for half an hour and misses a key meeting, does this count as him being “prevented from fulfilling his duties”? With thousands of markets getting resolved per month, chances are high that at least one will hinge upon one of these edge cases.

Kalshi resolves markets by having a staff member with good judgment decide whether or not the situation satisfies the resolution criteria.

Polymarket resolves markets by . . . oh man, how long do you have? There’s a cryptocurrency called UMA. UMA owners can stake it to vote on Polymarket resolutions in an associated contract called the UMA Oracle. Voters on the losing side get their cryptocurrency confiscated and given to the winners. This creates a Keynesian beauty contest, ie a situation where everyone tries to vote for the winning side. The most natural Schelling point is the side which is actually correct. If someone tries to attack the oracle by buying lots of UMA and voting for the wrong side, this incentivizes bystanders to come in and defend the oracle by voting for the right side, since (conditional on there being common knowledge that everyone will do this) that means they get free money at the attackers’ expense. But also, the UMA currency goes up in value if people trust the oracle and plan to use it more often, and it goes down if people think the oracle is useless and may soon get replaced by other systems. So regardless of their other incentives, everyone who owns the currency has an incentive to vote for the true answer so that people keep trusting the oracle. This system works most of the time, but tends towards so-called “oracle drama” where seemingly prosaic resolutions might lie at the end of a thrilling story of attacks, counterattacks, and escalations.
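As a toy illustration of that slashing incentive (this is not Polymarket's or UMA's actual contract logic; the voters, the stake sizes, and the simple majority-by-stake rule are all invented for the sketch):

```python
# Toy model of a stake-weighted oracle vote with slashing, as described above.
# Voters on the losing side forfeit their stake, which is split pro rata
# among voters on the winning side. All numbers are invented.

def settle_vote(votes):
    """votes: list of (voter, side, stake). Returns (winning_side, payoffs)."""
    totals = {}
    for _, side, stake in votes:
        totals[side] = totals.get(side, 0.0) + stake
    winning_side = max(totals, key=totals.get)  # majority-by-stake wins
    slashed_pool = sum(s for _, side, s in votes if side != winning_side)
    winner_stake = totals[winning_side]
    payoffs = {}
    for voter, side, stake in votes:
        if side == winning_side:
            # keep your stake plus a pro-rata share of the losers' slashed stake
            payoffs[voter] = stake + slashed_pool * (stake / winner_stake)
        else:
            payoffs[voter] = 0.0  # stake confiscated
    return winning_side, payoffs

# An attacker stakes 100 on the wrong answer; honest defenders pile in with 150.
votes = [("attacker", "NO", 100.0),
         ("honest_a", "YES", 90.0),
         ("honest_b", "YES", 60.0)]
side, payoffs = settle_vote(votes)
# YES wins; the attacker's 100 is split 60/40 between the two defenders.
```

Conditional on common knowledge that honest voters will pile in, defending the oracle is free money at the attacker's expense - the Schelling-point argument above. (The real system layers on appeals, bonds, and token-value incentives that this sketch ignores.)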

Here are some of the most interesting alleged rulescuckings of 2026:

Mr Ozi: Will Zelensky wear a suit? Ivan Cryptoslav calls this “the most infamous example in Polymarket history”. Ukraine’s president dresses mostly in military fatigues, vowing never to wear a suit until the war is over. As his sartorial notoriety spread, Polymarket traders bet over $100 million on the question of whether he would crack in any given month. At the Pope’s funeral, Zelensky showed up in a respectful-looking jacket which might or might not count. Most media organizations refused to describe it as a “suit”, so the decentralized oracle ruled against. But over the next few months, Zelensky continued to straddle the border of suithood, and the media eventually started using the word “suit” in their articles. This presented a quandary for the oracle, which was supposed to respect both the precedent of its past rulings, and the consensus of media organizations. Voters switched sides several times until finally settling on NO; true suit believers were unsatisfied with this decision. For what it’s worth, the Twitter menswear guy told Wired that “It meets the technical definition, [but] I would also recognize that most people would not think of that as a suit.”

[more examples...]

With one exception, these aren’t outright oracle failures. They’re honest cases of ambiguous rules.

Most of the links end with pleas for Polymarket to get better at clarifying rules. My perspective is that the few times I’ve talked to Polymarket people, I’ve begged them to implement various cool features, and they’ve always said “Nope, sorry, too busy figuring out ways to make rules clearer”. Prediction market people obsess over maximally finicky resolution criteria, but somehow it’s never enough - you just can’t specify every possible state of the world beforehand.

The most interesting proposal I’ve seen in this space is to make LLMs do it; you can train them on good rulesets, and they’re tolerant enough of tedium to print out pages and pages of every possible edge case without going crazy. It’ll be fun the first time one of them hallucinates, though.

…And Miscellaneous N’er-Do-Wells

I include this section under protest.

The media likes engaging with prediction markets through dramatic stories about insider trading and market manipulation. This is as useful as engaging with Waymo through stories about cats being run over. It doesn’t matter whether you can find one lurid example of something going wrong. What matters is the base rates, the consequences, and the alternatives. Polymarket resolves about a thousand markets a month, and Kalshi closer to five thousand. It’s no surprise that a few go wrong; it’s even less surprising that there are false accusations of a few going wrong.

Still, I would be remiss to not mention this at all, so here are some of the more interesting stories:

by Scott Alexander, Astral Codex Ten |  Read more:
Images: uncredited

Saturday, January 17, 2026

The Dilbert Afterlife

Sixty-eight years of highly defective people

Thanks to everyone who sent in condolences on my recent death from prostate cancer at age 68, but that was Scott Adams. I (Scott Alexander) am still alive.

Still, the condolences are appreciated. Scott Adams was a surprisingly big part of my life. I may be the only person to have read every Dilbert book before graduating elementary school. For some reason, 10-year-old Scott found Adams’ stories of time-wasting meetings and pointy-haired bosses hilarious. No doubt some of the attraction came from a more-than-passing resemblance between Dilbert’s nameless corporation and the California public school system. We’re all inmates in prisons with different names.

But it would be insufficiently ambitious to stop there. Adams’ comics were about the nerd experience. About being cleverer than everyone else, not just in the sense of being high IQ, but in the sense of being the only sane man in a crazy world where everyone else spends their days listening to overpaid consultants drone on about mission statements instead of doing anything useful. There’s an arc in Dilbert where the boss disappears for a few weeks and the engineers get to manage their own time. Productivity shoots up. Morale soars. They invent warp drives and time machines. Then the boss returns, and they’re back to being chronically behind schedule and over budget. This is the nerd outlook in a nutshell: if I ran the circus, there’d be some changes around here.

Yet the other half of the nerd experience is: for some reason this never works. Dilbert and his brilliant co-workers are stuck watching from their cubicles while their idiot boss rakes in bonuses and accolades. If humor, like religion, is an opiate of the masses, then Adams is masterfully unsubtle about what type of wound his art is trying to numb.

This is the basic engine of Dilbert: everyone is rewarded in exact inverse proportion to their virtue. Dilbert and Alice are brilliant and hard-working, so they get crumbs. Wally is brilliant but lazy, so he at least enjoys a fool’s paradise of endless coffee and donuts while his co-workers clean up his messes. The P.H.B. is neither smart nor industrious, so he is forever on top, reaping the rewards of everyone else’s toil. Dogbert, an inveterate scammer with a passing resemblance to various trickster deities, makes out best of all.

The repressed object at the bottom of the nerd subconscious, the thing too scary to view except through humor, is that you’re smarter than everyone else, but for some reason it isn’t working. Somehow all that stuff about small talk and sportsball and drinking makes them stronger than you. No equation can tell you why. Your best-laid plans turn to dust at a single glint of Chad’s perfectly-white teeth.

Lesser lights may distance themselves from their art, but Adams radiated contempt for such surrender. He lived his whole life as a series of Dilbert strips. Gather them into one of his signature compendia, and the title would be Dilbert Achieves Self Awareness And Realizes That If He’s So Smart Then He Ought To Be Able To Become The Pointy-Haired Boss, Devotes His Whole Life To This Effort, Achieves About 50% Success, Ends Up In An Uncanny Valley Where He Has Neither The Virtues Of The Honest Engineer Nor Truly Those Of The Slick Consultant, Then Dies Of Cancer Right When His Character Arc Starts To Get Interesting.

If your reaction is “I would absolutely buy that book”, then keep reading, but expect some detours.

Fugitive From The Cubicle Police

The niche that became Dilbert opened when Garfield first said “I hate Mondays”. The quote became a popular sensation, inspiring t-shirts, coffee mugs, and even a hit single. But (as I’m hardly the first to point out) why should Garfield hate Mondays? He’s a cat! He doesn’t have to work!

In the 80s and 90s, saying that you hated your job was considered the height of humor. Drew Carey: “Oh, you hate your job? There’s a support group for that. It’s called everybody, and they meet at the bar.”


This was merely the career subregion of the supercontinent of Boomer self-deprecating jokes, whose other prominences included “I overeat”, “My marriage is on the rocks”, “I have an alcohol problem”, and “My mental health is poor”.

Arguably this had something to do with the Bohemian turn, the reaction against the forced cheer of the 1950s middle-class establishment of company men who gave their all to faceless corporations and then dropped dead of heart attacks at 60. You could be that guy, proudly boasting to your date about how you traded your second-to-last patent artery to complete a spreadsheet that raised shareholder value 14%. Or you could be the guy who says “Oh yeah, I have a day job working for the Man, but fuck the rat race, my true passion is white water rafting”. When your father came home every day looking haggard and worn out but still praising his boss because “you’ve got to respect the company or they won’t take care of you”, being able to say “I hate Mondays” must have felt liberating, like the mantra of a free man.

This was the world of Dilbert’s rise. You’d put a Dilbert comic on your cubicle wall, and feel like you’d gotten away with something. If you were really clever, you’d put the Dilbert comic where Dilbert gets in trouble for putting a comic on his cubicle wall on your cubicle wall, and dare them to move against you.


(again, I was ten at the time. I only know about this because Scott Adams would start each of his book collections with an essay, and sometimes he would talk about letters he got from fans, and many of them would have stories like these.)

But t-shirts saying “Working Hard . . . Or Hardly Working?” no longer hit as hard as they once did. Contra the usual story, Millennials are too earnest to tolerate the pleasant contradiction of saying they hate their job and then going in every day with a smile. They either have to genuinely hate their job - become some kind of dirtbag communist labor activist - or at least pretend to love it. The worm turns, all that is cringe becomes based once more and vice versa. Imagine that guy boasting to his date again. One says: “Oh yeah, I grudgingly clock in every day to give my eight hours to the rat race, but trust me, I’m secretly hating myself the whole time”? The other: “I work for a boutique solar energy startup that’s ending climate change - saving the environment is my passion!” Zoomers are worse still: not even the fig leaf of social good, just pure hustle.

Dilbert is a relic of a simpler time, when the trope could be played straight. But it’s also an artifact of the transition, maybe even a driver of it. Scott Adams appreciated these considerations earlier and more acutely than anyone else. And they drove him nuts.

Stick To Drawing Comics, Monkey Brain

Adams knew, deep in his bones, that he was cleverer than other people. God always punishes this impulse, especially in nerds. His usual strategy is straightforward enough: let them reach the advanced physics classes, where there will always be someone smarter than them, then beat them on the head with their own intellectual inferiority so many times that they cry uncle and admit they’re nothing special.

For Adams, God took a more creative and – dare I say, crueler – route. He created him only-slightly-above-average at everything except for a world-historical, Mozart-tier, absolutely Leonardo-level skill at making silly comics about hating work.


Scott Adams never forgave this. Too self-aware to deny it, too narcissistic to accept it, he spent his life searching for a loophole. You can read his frustration in his book titles: How To Fail At Almost Everything And Still Win Big. Trapped In A Dilbert World. Stick To Drawing Comics, Monkey Brain. Still, he refused to stick to comics. For a moment in the late 90s, with books like The Dilbert Principle and The Dilbert Future, he seemed on his way to becoming a semi-serious business intellectual. He never quite made it, maybe because the Dilbert Principle wasn’t really what managers and consultants wanted to hear:
I wrote The Dilbert Principle around the concept that in many cases the least competent, least smart people are promoted, simply because they’re the ones you don’t want doing actual work. You want them ordering the doughnuts and yelling at people for not doing their assignments—you know, the easy work. Your heart surgeons and your computer programmers—your smart people—aren’t in management.
Okay, “I am cleverer than everyone else”, got it. His next venture (c. 1999) was the Dilberito, an attempt to revolutionize food via a Dilbert-themed burrito with the full Recommended Daily Allowance of twenty-three vitamins. I swear I am not making this up. A contemporaneous NYT review said it “could have been designed only by a food technologist or by someone who eats lunch without much thought to taste”. The Onion, in its twenty-year retrospective for the doomed comestible, called it a frustrated groping towards meal replacements like Soylent or Huel, long before the existence of a culture nerdy enough to support them. Adams himself, looking back from several years’ distance, was even more scathing: “the mineral fortification was hard to disguise, and because of the veggie and legume content, three bites of the Dilberito made you fart so hard your intestines formed a tail.”

His second foray into the culinary world was a local restaurant called Stacey’s.

by Scott Alexander, Astral Codex Ten |  Read more:
Images: Dilbert/ACX 
[ed. First picture: Adams actually had a custom-built tower on his home shaped like Dilbert’s head.]

Friday, January 16, 2026

Measure Up

“My very dear friend Broadwood—

I have never felt a greater pleasure than in your honor’s notification of the arrival of this piano, with which you are honoring me as a present. I shall look upon it as an altar upon which I shall place the most beautiful offerings of my spirit to the divine Apollo. As soon as I receive your excellent instrument, I shall immediately send you the fruits of the first moments of inspiration I gather from it, as a souvenir for you from me, my very dear Broadwood; and I hope that they will be worthy of your instrument. My dear sir, accept my warmest consideration, from your friend and very humble servant.

—Ludwig van Beethoven”

As musical instruments improved through history, new kinds of music became possible. Sometimes, the improved instrument could make novel sounds; other times, it was louder; and other times stronger, allowing for more aggressive play. Like every technology, musical instruments are the fruit of generations’ worth of compounding technological refinement.

In a shockingly brief period between the late 18th and early 19th centuries, the piano was transformed technologically, and so too was the function of the music it produced.

To understand what happened, consider the form of classical music known as the “piano sonata.” This is a piece written for solo piano, and it is one of the forms that persisted through the transition, at least in name. In 1790, these were written for an early version of the piano that we now think of as the fortepiano. It sounded like a mix of a modern piano and a harpsichord.

Piano sonatas in the early 1790s were thought of primarily as casual entertainment. It wouldn’t be quite right to call them “background music” as we understand that term today—but they were often played in the background. People would talk over these little keyboard works, play cards, eat, drink.

In the middle of the 1790s, however, the piano started to improve at an accelerated rate. It was the early industrial revolution. Throughout the economy, many things were starting to click into place. Technologies that had kind of worked for a while began to really work. Scale began to be realized. Thicker networks of people, money, ideas, and goods were being built. Capital was becoming more productive, and with this serendipity was becoming more common. Few at the time could understand it, but it was the beginning of a wave—one made in the wake of what we today might call the techno-capital machine.

Riding this wave, the piano makers were among a great many manufacturers who learned to build better machines during this period. And with those improvements, more complex uses of those machines became possible.

Just as this industrial transformation was gaining momentum in the mid-1790s, a well-regarded keyboard player named Ludwig van Beethoven was starting his career in earnest. He, like everyone else, was riding the wave—though he, like everyone else, did not wholly understand it.

Beethoven was an emerging superstar, and he lived in Vienna, the musical capital of the world. It was a hub not just of musicians but also of musical instruments and the people who manufactured them. Some of the finest piano makers of the day—Walter, Graf, and Schanz—were in or around Vienna, and they were in fierce competition with one another. Playing at the city’s posh concert spaces, Beethoven had the opportunity to sample a huge range of emerging pianistic innovations. As his career blossomed, he acquired some of Europe’s finest pianos—including even stronger models from British manufacturers like Broadwood and Sons.

Iron reinforcement enabled piano frames with higher tolerances for louder and longer play. The strings became more robust. More responsive pedals meant a more direct relationship between the player and his tool. Innovations in casting, primitive machine tools, and mechanized woodworking yielded more precise parts. With these parts one could build superior hammer and escapement systems, which in turn led to faster-responding keys. And more of them, too—with higher and lower octaves now available. It is not just that the sound these pianos made was new: These instruments had an enhanced, more responsive user interface.

You could hit these instruments harder. You could play them softer, too. Beethoven’s iconic use of sforzando—rapid swings from soft to loud tones—would have been unplayable on the older pianos. So too would his complex and often rapid solos. In so many ways, then, Beethoven’s characteristic style and sound on the keyboard was technologically impossible for his predecessors to achieve... 

Beethoven was famous for breaking piano strings that were not yet strong enough to render his vision. There was always a relevant margin against which to press. By his final sonata, written in the early 1820s, he was pressing in the direction of early jazz. It was a technological and artistic takeoff from this to this, and from this to this.

Beethoven’s compositions for other instruments followed a structurally similar trajectory: compounding leaps in expressiveness, technical complexity, and thematic ambition, every few years. Here is what one of Mozart’s finest string quartets sounded like. Here is what Beethoven would do with the string quartet by the end of his career.

No longer did audiences talk during concerts. No longer did they play cards and make jokes. Audiences became silent and still, because what was happening to them in the concert hall had changed. A new type of art was emerging, and a new meta-character in human history—the artist—was being born. Beethoven was doing something different, something grander, something more intense, and the way listeners experienced it was different too.

The musical ideas Beethoven introduced to the world originated from his mind, but those ideas would have been unthinkable without a superior instrument.

I bought the instrument I’m using to write this essay in December 2020. I was standing in the frigid cold outside of the Apple Store in the Georgetown neighborhood of Washington, D.C., wearing a KN-95 face mask, separated by six feet from those next to me in line. I had dinner with a friend scheduled that evening. A couple weeks later, the Mayor would temporarily outlaw even that nicety.

I carried this laptop with me every day throughout the remainder of the pandemic. I ran a foundation using this laptop, and after that I orchestrated two career transitions using it. I built two small businesses, and I bought a house. I got married, and I planned a honeymoon with my wife. (...)

In a windowless office on a work trip to Stanford University on November 30, 2022, I discovered ChatGPT on this laptop. I stayed up all night in my hotel playing with the now-primitive GPT-3.5. Using my laptop, I educated myself more deeply about how this mysterious new tool worked.

I thought at first that it was an “answer machine,” a kind of turbocharged search engine. But I eventually came to prefer thinking of these language models as simulators of the internet that, by statistically modeling trillions of human-written words, learned new things about the structure of human-written text.

What might arise from a deeper-than-human understanding of the structures and meta-structures of nearly all the words humans have written for public consumption? What inductive priors might that understanding impart to this cognitive instrument? We know that a raw pretrained model, though deeply flawed, has quite sophisticated inductive priors with no additional human effort. With a great deal of additional human effort, we have made these systems quite useful little helpers, even if they still have their quirks and limitations.

But what if you could teach a system to guide itself through that digital landscape of modeled human thoughts to find better, rather than likelier, answers? What if the machine had good intellectual taste, because it could consider options, recognize mistakes, and decide on a course of cognitive action? Or what if it could, at least, simulate those cognitive processes? And what if that machine improved as quickly as we have seen AI advance so far? This is no longer science fiction; this research has been happening inside of the world’s leading AI firms, and with models like OpenAI’s o1 and o3, we can see unmistakably that progress is being made.

What would it mean for a machine to match the output of a human genius, word for word? What would it mean for a machine to exceed it? In at least some domains, even if only a very limited number at first, it seems likely that we will soon breach these thresholds. It is very hard to say how far this progress will go; as they say, experts disagree.

This strange simulator is “just math”—it is, ultimately, ones and zeroes, electrons flowing through processed sand. But the math going on inside it is more like biochemistry than it is like arithmetic. The language model is, ultimately, still an instrument, but it is a strange one. Smart people, working in a field called mechanistic interpretability, are bettering our understanding all the time, but our understanding remains highly imperfect, and it will probably never be complete. We don’t quite have precise control yet over these instruments, but our control is getting better with time. We do not yet know how to make our control systems “good enough,” because we don’t quite know what “good enough” means yet—though here too, we are trying. We are searching.

As these instruments improve, the questions we ask them will have to get harder, smarter, and more detailed. This isn’t to say, necessarily, that we will need to become better “prompt engineers.” Instead, it is to suggest that we will need to become more curious. These new instruments will demand that we formulate better questions, and formulating better questions, often, is at least the seed of formulating better answers.

The input and the output, the prompt and the response, the question and the answer, the keyboard and the music, the photons and the photograph. We push at our instruments, we measure them up, and in their way, they measure us. (...)

I don’t like to think about technology in the abstract. Instead, I prefer to think about instruments like this laptop. I think about all the ways in which this instrument is better than the ones that came before it—faster, more reliable, more precise—and why it has improved. And I think about the ways in which this same laptop has become wildly more capable as new software tools came to be. I wonder at the capabilities I can summon with this keyboard now compared with when I was standing in that socially distanced line at the Apple Store four years ago.

I also think about the young Beethoven, playing around, trying to discover the capabilities of instruments with better keyboards, larger range, stronger frames, and suppler pedals. I think about all the uncoordinated work that had to happen—the collective and yet unplanned cultivation of craftsmanship, expertise, and industrial capacity—to make those pianos. I think about the staggering number of small industrial miracles that underpinned Beethoven’s keyboards, and the incomprehensibly larger number of industrial miracles that underpin the keyboard in front of me today. (...)

This past weekend, I replaced my MacBook Air with a new laptop. I wonder what it will be possible to do with this tremendous machine in a few years, or in a few weeks. New instruments for expression, and for intellectual exploration, will be built, and I will learn to use nearly all of them with my new laptop’s keyboard. It is now clear that a history-altering amount of cognitive potential will be at my fingertips, and yours, and everyone else’s. Like any technology, these new instruments will be much more useful to some than to others—but they will be useful in some way to almost everyone.

And just like the piano, what we today call “AI” will enable intellectual creations of far greater complexity, scale, and ambition—and greater repercussions, too. Higher dynamic range. I hope that among the instrument builders there will be inveterate craftsmen, and I hope that young Beethovens, practicing a wholly new kind of art, will emerge among the instrument players.

by Dean Ball, Hyperdimensional |  Read more:
Image: 1827 Broadwood & Sons grand piano/Wikipedia
[ed. Thoughtful essay throughout, well deserving of a full reading (even if you're just interested in Beethoven). On the hysterical end of the spectrum, here's what state legislators are proposing: The AI Patchwork Emerges. An update on state AI law in 2026 (so far) (Hyperdimensional):]
***
State legislative sessions are kicking into gear, and that means a flurry of AI laws are already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.