Here’s an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I’ve never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn’t they?

We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn’t been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren’t saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it’s possible to be. Our descendants will suffer the consequences.

(Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions, he’s more arguing against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.)
Still, I think about this argument a lot. I agree he’s right about nuclear power. When it comes out in a few months, I’ll be reviewing a book that makes this same point about institutional review boards: that our fear of a tiny handful of deaths from unethical science has caused hundreds of thousands of deaths from delaying ethical and life-saving medical progress. The YIMBY movement makes a similar point about housing: we hoped to prevent harm by subjecting all new construction to a host of different reviews - environmental, cultural, equity-related - and instead we caused vast harm by creating an epidemic of homelessness and forcing the middle classes to spend increasingly unaffordable sums on rent. This pattern typifies the modern age; any attempt to restore our rightful utopian flying-car future will have to start with rejecting it as vigorously as possible.
So how can I object when Aaronson turns the same lens on AI?
First, you are allowed to use Inside View. If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down because “in the past, shutting down progress out of exaggerated fear of potential harm has killed far more people than the progress itself ever could”, you are permitted to respond “yes, but you are Osama bin Laden, and this is a supervirus lab.” You don’t have to give every company trying to build the Torment Nexus a free pass just because they can figure out a way to place their work in a reference class which is usually good. All other technologies fail in predictable and limited ways. If a buggy AI exploded, that would be no worse than a buggy airplane or nuclear plant. The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause maximum damage while undetected. Also it’s smarter than you. Also this might work so well that nobody realizes they’re all buggy until there are millions of them.
But maybe opponents of every technology have some particular story why theirs is a special case. So let me try one more argument, which I think is closer to my true objection.
There’s a concept in finance called Kelly betting. It briefly gained some fame last year as a thing that FTX failed at, before people realized FTX had failed at many more fundamental things. It works like this (warning - I am bad at math and may have gotten some of this wrong): suppose you start with $1000. You’re at a casino with one game: you can, once per day, bet however much you want on a coin flip, double-or-nothing. You’re slightly psychic, so you have a 75% chance of guessing the coin flip right. That means that on average, you’ll increase your money by 50% each time you bet. Clearly this is a great opportunity. But how much do you bet per day?
Tempting but wrong answer: bet all of it each time. After all, on average you gain money each flip - each $1 invested in the coin flip game becomes $1.50. If you bet everything, then after five coin flips you’ll have (on average) $7,593.75. But if you just bet $1 each time, then (on average) you’ll only have $1,002.50. So obviously bet as much as possible, right?
But after five coin flips of $1000, there’s a 76% chance that you’ve lost all your money. Increase to 50 coin flips, and there’s a 99.99994% chance that you’ve lost all your money. So although technically this has the highest “average utility”, all of it comes from one super-amazing sliver of probability-space where you own more money than exists in the entire world. In every other timeline, you’re broke.
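[ed. A quick sanity check of these figures, as a short Python sketch. It isn’t from the original post, and assumes only the setup described above ($1,000 bankroll, 75% chance of winning each double-or-nothing flip):]

    # Assumptions from the setup above: $1,000 bankroll, 75% win chance,
    # double-or-nothing bets.
    p_win = 0.75
    bankroll = 1_000

    # Betting everything: each flip multiplies the average bankroll by
    # 0.75 * 2 + 0.25 * 0 = 1.5, so five flips average 1.5**5 times the stake.
    print(bankroll * 1.5**5)   # 7593.75

    # That average is carried by the rare timelines where you never lose.
    # The chance of going broke is the chance of losing at least one flip:
    print(1 - p_win**5)        # ~0.763, i.e. about a 76% chance of ruin
    print(1 - p_win**50)       # ~0.9999994, i.e. about 99.99994%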
So how much should you bet? $1 is too little. These flips do, on average, increase your money by 50%; it would take forever to get anywhere betting $1 at a time. You want something that’s high enough to increase your wealth quickly, but not so high that it’s devastating and you can’t come back from it on the rare occasions when you lose.
In this case, if I understand the Kelly math right, you should bet half your bankroll each time. But the lesson I take from this isn’t just the exact math. It’s: even if you know a really good bet, don’t bet everything at once.
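[ed. And a small simulation, again a sketch that isn’t from the original post: for an even-money bet the Kelly fraction is f* = p - (1 - p), which gives 0.5 here, and you can watch the three strategies diverge:]

    import random

    p = 0.75                 # chance of calling each flip correctly
    kelly = p - (1 - p)      # Kelly fraction for an even-money bet: 0.5

    def simulate(fraction, flips=50, bankroll=1_000.0):
        """Bet `fraction` of the current bankroll on each of `flips` flips."""
        for _ in range(flips):
            stake = bankroll * fraction
            bankroll += stake if random.random() < p else -stake
        return bankroll

    random.seed(0)
    trials = 10_000
    for f in (1.0, kelly, 0.1):
        results = sorted(simulate(f) for _ in range(trials))
        broke = sum(r < 1 for r in results) / trials   # ended under $1
        print(f"bet {f:.0%}: median ${results[trials // 2]:,.0f}, "
              f"ended broke {broke:.1%} of the time")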
by Scott Alexander, Slate Star Codex/ACX | Read more:
Image: Hansueli Krapf/Wikipedia
[ed. See also: We're sorry we created the Torment Nexus (Charlie Stross/Charlie's Diary):]
"Hi. I'm Charlie Stross, and I tell lies for money. That is, I'm a science fiction writer: I have about thirty novels in print, translated into a dozen languages, I've won a few awards, and I've been around long enough that my wikipedia page is a mess of mangled edits.
And rather than giving the usual cheerleader talk making predictions about technology and society, I'd like to explain why I—and other SF authors—are terrible guides to the future. Which wouldn't matter, except a whole bunch of billionaires are in the headlines right now because they pay too much attention to people like me. Because we invented the Torment Nexus as a cautionary tale and they took it at face value and decided to implement it for real.
Obviously, I'm talking about Elon Musk. (He named SpaceX's drone ships after Iain M. Banks' spaceships, thereby proving that irony is dead.) But he's not the only one. There's Peter Thiel (who funds research into artificial intelligence, life extension, and seasteading, when he's not getting blood transfusions from 18-year-olds in hope of living forever). Marc Andreessen of venture capitalists Andreessen Horowitz recently published a self-proclaimed "techno-optimist manifesto" promoting the bizarre accelerationist philosophy of Nick Land, among other weirdos, and hyping the current grifter's fantasy of large language models as "artificial intelligence". Jeff Bezos, founder of Amazon, is another space colonization enthusiast like Elon Musk, but while Musk wants to homestead Mars, Bezos is a fan of Gerard K. O'Neill's 1970s plan to build giant orbital habitat cylinders at the Earth-Moon L5 libration point. And no tour of the idiocracy is complete without mentioning Mark Zuckerberg, billionaire CEO of Facebook, who blew through ten billion dollars trying to create the Metaverse from Neal Stephenson's novel Snow Crash, only for it to turn out that his ambitious commercial virtual reality environment had no legs.
(That was a deliberate pun.)
It'd be amusing if these guys didn't have a combined net worth somewhere in the region of half a trillion euros and the desire to change the human universe, along with a load of unexamined prejudices and a bunch of half-baked politics they absorbed from the predominantly American SF stories they read in their teens. I grew up reading the same stuff, but as I also write the modern version of the same stuff for a living, I've spent a lot of time lifting up the rocks in the garden of SF to look at what's squirming underneath.
Science fiction influences everything this century, both our media and our physical environment. Media first: about 30% of the big budget movies coming out of the US film industry these days are science fiction or fantasy blockbusters, a massive shift since the 1970s. Computer games are wall-to-wall fantasy and SF—probably a majority of the field, outside of sports and simulation games. (Written fiction is another matter, and SF/F combined amount to something in the range 5-10% of books sold. But reading novels is a minority recreation this century, having to compete with the other media I just named. The golden age of written fiction was roughly 1850 to 1950, give or take a few decades: I make my living in an ageing field, kind of like being a classical music composer or an 8-bit games programmer today.)
Meanwhile the influence of science fiction on our environment seems to have been gathering pace throughout my entire life. The future is a marketing tool. Back in the early 20th century it was anything associated with speed—recall the fad for streamlining everything from railway locomotives to toasters, or putting fins on cars. Since about 1970 it became more tightly associated with communication and computers.
For an example of the latter trend: a decade or two ago there was a fad for cellular phones designed to resemble the original Star Trek communicator. The communicator was movie visual shorthand for "a military two-way radio, but make it impossibly small". But it turns out that enough people wanted an impossibly small clamshell telephone that once semiconductor and battery technology got good enough to make one, they made the Motorola Razr a runaway bestseller.
It's becoming increasingly unusual to read a report of a new technology or scientific discovery that doesn't breathlessly use the phrase "it seems like science fiction". The news cycle is currently dominated by hype about artificial intelligence (a gross mis-characterisation of machine learning algorithms and large language models). A couple of years ago it was breathless hype about cryptocurrency and blockchain technologies—which turned out to be a financial services bubble that drained a lot of small investors' savings accounts into the pockets of people like convicted fraudster Sam Bankman-Fried.
It's also driving politics and law. (...)
Now that I've shouted at passing clouds for a bit—or rather at dangerous marketing fads based on popular entertainment of decades past—I'd like to talk about something that I personally find much more worrying: a political ideology common among Silicon Valley billionaires of a certain age—known by the acronym TESCREAL—that is built on top of a shaky set of assumptions about the future of humanity. It comes straight out of an uncritical reading of the bad science fiction of decades past, and it's really dangerous.
TESCREAL stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism (in a very specific context), Effective Altruism, and longtermism." It was identified by Timnit Gebru, former technical co-lead of the Ethical Artificial Intelligence Team at Google and founder of the Distributed Artificial Intelligence Research Institute (DAIR), and Émile Torres, a philosopher specialising in existential threats to humanity." [Read more:]
[ed. And, if you're feeling extra gloomy lately and would love nothing more than immersing yourself in more AI speculation, see also: Nick Bostrom: Will AI lead to tyranny? (Undark); and Thoughts on responsible scaling policies and regulation (Less Wrong).]