But Geoff Hinton has started to worry, and so have I. I’d heard about Hinton’s concerns through the grapevine last week, and he acknowledged them publicly yesterday. (...)
My beliefs have not in fact changed. I still don’t think large language models have much to do with superintelligence or artificial general intelligence [AGI]; I still think, with Yann LeCun, that LLMs are an “off-ramp” on the road to AGI. And my scenarios for doom are perhaps not the same as Hinton’s or Musk’s; theirs (from what I can tell) seem to center mainly around what happens if computers rapidly and radically self-improve themselves, which I don’t see as an immediate possibility.
But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million to do exactly that: to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies toward hallucination and unreliability).
Lots of ordinary humans, perhaps of above-average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.
If an LLM can trick a single human into solving a CAPTCHA for it, as OpenAI recently observed, it can, in the hands of a bad actor, create all kinds of mayhem. When LLMs were a lab curiosity, known only within the field, they didn’t pose much of a problem. But now that (a) they are widely known, and of interest to criminals, and (b) they are increasingly being given access to the external world (including humans), they can do more damage.
Although the AI community often focuses on long-term risk, I am not alone in worrying about serious, immediate implications. Europol came out yesterday with a report considering some of the criminal possibilities, and it’s sobering. (...)
Perhaps coupled with mass AI-generated propaganda, LLM-enhanced terrorism could in turn lead to nuclear war, or to the deliberate spread of pathogens worse than covid-19, etc. Many, many people could die; civilization could be utterly disrupted. Maybe humans would not literally be “wiped from the earth,” but things could get very bad indeed.
How likely is any of this? We have no earthly idea. My 1% number in the tweet was just a thought experiment. But it’s not 0%. (...)
We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.
But we also need to treat LLMs as a dress rehearsal for future synthetic intelligence, and ask ourselves hard questions about what on earth we are going to do with future technology, which might well be even more difficult to control. Hinton told CBS, “I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two,” and I agree.
by Gary Marcus, The Road to AI We Can Trust (Substack) | Read more:
Image: uncredited
[ed. Hinton article here. Europol report here. See also: Nick Bostrom's paper Existential Risks - Analyzing Human Extinction Scenarios and Related Hazards (Bangs, Crunches, Shrieks, Whimpers); and, Global Catastrophic Risk (Wikipedia). Have a nice day.]