Friday, January 23, 2026

AI: Practical Advice for the Worried

A Word On Thinking For Yourself

There are good reasons to worry about AI. This includes good reasons to worry about AI wiping out all value in the universe, or AI killing everyone, or other similar very bad outcomes.

There are also good reasons that AGI, or otherwise transformational AI, might not come to pass for a long time.

As I say in the Q&A section later, I do not consider imminent transformational AI inevitable in our lifetimes: Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.

There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still.

That does not mean you should adopt my position, or anyone else’s position, or mostly use social cognition from those around you, on such questions, no matter what those methods would tell you. If this is something that is going to impact your major life decisions, or keep you up at night, you need to develop your own understanding and model, and decide for yourself what you predict. (...)

Overview

There is some probability that humanity will create transformational AI soon, for various definitions of soon. You can and should decide what you think that probability is, and conditional on that happening, your probability of various outcomes.

Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some such changes are predictable. Others not.

None of this is new. We have long lived under the very real threat of potential nuclear annihilation. Employees of the RAND Corporation, who worked on nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them. Given what we know now about the close calls of the Cold War, and what they knew at the time, perhaps this was not so crazy a perspective.

Should this small but very real imminent risk radically change your actions? I think the answer here is a clear no, unless your actions are in some way relevant to nuclear war risks, either personally or globally, in which case one can shut up and multiply.

This pattern goes back much further. For far longer than we have had nuclear weapons, various religious folks have expected Judgment Day to arrive soon, often with a date attached. Often they made poor decisions in response, even given their beliefs.

There are some people who talk or feel this same way about climate change, treating it as an impending, inevitable extinction event for humanity.

Under such circumstances, I would center my position on a simple claim: Normal Life is Worth Living, even if you think P(doom) relatively soon is very high. (...)

More generally, in terms of helping: burning yourself out, stressing yourself out, and tying yourself up in existential angst are all unhelpful. It would be better to keep yourself sane and healthy and financially intact, in case you are later offered leverage. Fighting the good fight, however doomed it might be, because it is a far, far better thing to do, is also a fine response, if you keep in mind how easy it is to end up not helping that fight. But do that while also living a normal life, even if that might seem indulgent. You will be more effective for it, especially over time. (...)

On to individual questions to flesh all this out.

Q&A

Q: Should I still save for retirement?
Short Answer: Yes.
Long Answer: Yes, to most (but not all) of the extent that this would otherwise be a concern and action of yours in the ‘normal’ world. It would be better to say ‘build up asset value over time’ than ‘save for retirement’ in my model. Building up assets gives you resources to influence the future on all scales, whether or not retirement is even involved. I wouldn’t get too attached to labels.

Remember that, while it is not something one should do lightly (then again, none of this is), you can raid retirement accounts in an extreme enough ‘endgame’ situation for what is, in context, a modest penalty. It does not take that many years for the expected value of the compounded tax advantages to exceed the withdrawal penalty: in the United States, the cost of emptying the account, should you need to do that, is only 10% of funds and about a week of time (plus now having to pay taxes on it). In some extreme future situations, having that cash would be highly valuable. None of this suggests that now is the time to empty the account, or to not build it up.
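To make the ‘compounding beats the penalty’ claim concrete, here is a minimal back-of-the-envelope sketch in Python. All of the rates in it (7% returns, a 25% income tax, a flat 20% annual tax drag on the taxable alternative) are illustrative assumptions I chose, not tax advice; under these particular numbers, the penalized early withdrawal pulls ahead after roughly a decade.

```python
# Back-of-the-envelope break-even for raiding a traditional (pre-tax)
# retirement account early versus having invested in a taxable account
# all along. Every rate here is an illustrative assumption, not tax advice.

def retirement_after_early_withdrawal(pretax, years, r=0.07,
                                      income_tax=0.25, penalty=0.10):
    """Pre-tax dollars compound untaxed; emptying the account early pays
    income tax plus the 10% penalty on the whole balance."""
    balance = pretax * (1 + r) ** years
    return balance * (1 - income_tax - penalty)

def taxable_alternative(pretax, years, r=0.07, income_tax=0.25, drag=0.20):
    """Income tax is paid up front; ongoing taxes are modeled crudely as a
    flat 20% drag on each year's returns."""
    principal = pretax * (1 - income_tax)
    return principal * (1 + r * (1 - drag)) ** years

for years in range(1, 31):
    if (retirement_after_early_withdrawal(10_000, years)
            > taxable_alternative(10_000, years)):
        print(f"Penalized early withdrawal pulls ahead around year {years}")
        break  # ~year 11 under these assumptions
```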

The case for saving money does not depend on expecting a future ‘normal’ world. Which is good, because even without AI the future world is likely to not be all that ‘normal.’

Q: Should I take on a ton of debt intending to never have to pay it back?

Short Answer: No, except for a mortgage.

Long Answer: Mostly no, except for a mortgage. Save your powder. See my post On AI and Interest Rates for an extended treatment of this question - I consider it a definitive answer to the supposed ‘gotcha’ question of why doomers don’t take on lots of debt. The capacity to take on debt is a limited resource, and good ways to use it are even more limited for most of us. Yes, where you get the opportunity, it would be good to lock in long borrowing periods at fixed rates if you think things are about to get super weird. But if your plan is ‘the market will realize what is happening and adjust the value of my debt in time for me to profit,’ that does not seem, to me, like a good plan. Nor does borrowing now much change your actual constraints on when you run out of money.

Does borrowing money that you have to pay back in 2033 mean you have more money to spend? That depends. What is your intention if 2033 rolls around and the world hasn’t ended? Are you going to pay it back? If so, then you need to prepare now to be able to do that, so you won’t have accomplished all that much.

You need very high confidence in High Weirdness Real Soon Now before you can expect to get net rewarded for putting your financial future on quicksand, where you are in real trouble if you get the timing wrong. You also need a good way to spend that money to change the outcome.

Yes, there is a level of confidence in both speed and magnitude, combined with a good way to spend the money, that would change this calculus, and I do not believe that level of confidence is warranted. One must notice that you need vastly less certainty than this to justify shouting about these issues from the rooftops, or devoting your time to working on them.

Eliezer’s position, as per his most recent podcast, is something like ‘AGI could come very soon, seems inevitable by 2050 barring civilizational collapse, and if it happens we almost certainly all die.’ Suppose you really, actually believed that. It’s still not enough to do much with debt unless you have a great use for the money - there’s still a lot of probability mass that the money is due back while you’re still alive, potentially right before it might matter.
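To make that last point concrete, here is a toy calculation in Python. Both probabilities are placeholders I made up for illustration, not estimates from the post; the point is just that even under aggressive doom assumptions, the chance the debt comes due anyway stays uncomfortably high.

```python
# Toy check on the 'borrow now, hope never to repay' plan. Both inputs are
# made-up placeholders; substitute your own estimates.
p_agi_before_due = 0.5   # transformational AI arrives before the debt matures
p_doom_given_agi = 0.9   # given that, repayment turns out to be moot

p_must_repay = 1 - p_agi_before_due * p_doom_given_agi
print(f"P(the money is due back anyway) = {p_must_repay:.0%}")  # 55% here
```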

Yes, this also changes if you think you can actually change the outcome for the better by spending money now: money loses its impact over time, so your discount rate should be high. That, however, does not seem to be the case being made.

Q: Does buying a house make sense?

A: Maybe. It is an opportunity to borrow money at low interest rates with good tax treatment. It also potentially ties up capital and ties you down to a particular location, and is not as liquid as some other forms of capital. So ask yourself how psychologically hard it would be to undo that. In terms of whether it looks like a good investment in a world with useful but non-transformational AI, an AI could figure out how to more efficiently build housing, but would that cause more houses to be built?

Q: Does it make sense to start a business?

A: Yes, although not because of AI. It is good to start a business. Of course, if the business is going to involve AI, carefully consider whether you are making the situation worse.

Q: Does It Still Make Sense to Try and Have Kids?

Short Answer: Yes.

Long Answer: Yes. Kids are valuable and make the world and your own world better, even if the world then ends. I would much rather exist for a bit than never exist at all. Kids give you hope for the future and something to protect, get you to step up. They get others to take you more seriously. Kids teach you many things that help one think better about AI. You think they take away your free time, but there is a limit to how much creative work one can do in a day. This is what life is all about. Missing out on this is deeply sad. Don’t let it pass you by.

Is there a level of working directly on the problem, or being uniquely positioned to help with the problem, where I would consider changing this advice? Yes, there are a few names where I think this is not so clear, but I am thinking of a very small number of names right now, and yours is not one of them.

You can guess how I would answer most other similar questions. I do not agree with Buffy Summers that the hardest thing in this world is to live in it. I do think she knows better than any of us that not living in this world is not the way to save it.

Q: Should I talk to my kids about how there’s a substantial chance they won’t get to grow up?

A: I would not (and will not) hide this information from my kids, any more than I would hide the risk from nuclear war, but ‘you may not get to grow up’ is not a helpful thing to say to (or to emphasize to) kids. Talking to your kids about this (in the sense of ‘talk to your kids about drugs’) is only going to distress them to no purpose. While I don’t believe in hiding stuff from kids, I also don’t think this is something it is useful to hammer into them. Kids should still get to be and enjoy being kids. (...)

Q: Should I just try to have a good time while I can?

A: No, because my model says that this doesn’t work. It is empty. You can have fun for a day, a week, a month, perhaps a year, but after a while it rings hollow, feels empty, and your future will fill you with dread. Certainly it makes sense to shift this on the margin, get your key bucket list items in early, put a higher marginal priority on fun - even more so than you should have been doing anyway. But I don’t think my day-to-day life experience would improve for very long by taking this kind of path. Then again, each of us is different.

That all assumes you have ruled out attempting to improve our chances. Personally, even if I had to go down, I’d rather go down fighting. Insert rousing speech here.

Q: How Long Do We Have? What is the Timeline?

Short Answer: Unknown. Look at the arguments and evidence. Form your own opinion.

Long Answer: High uncertainty about when this will happen if it happens, whether or not one has high uncertainty about whether it happens at all within our lifetimes. Eliezer’s answer was that he would be very surprised if it didn’t happen by 2050, but that within that range little would surprise him and he has low confidence. Others have longer or shorter means and medians in their timelines. Mine are substantially longer and less confident than Eliezer’s. This is a question you must decide for yourself. The key is that there is uncertainty, so lots of different scenarios matter.

by Zvi Mowshowitz, Don't Worry About the Vase