Thursday, September 7, 2017

The Worst Lies You’ve Been Told About the Singularity

You’ve probably heard of a concept known as the Technological Singularity — a nebulous event that’s supposed to happen in the not-too-distant future. The uncertainty surrounding this possibility has led to wild speculation, confusion, and outright denial. Here are the worst myths you’ve been told about the Singularity.

In a nutshell, the Technological Singularity is a term used to describe the theoretical moment in time when artificial intelligence matches and then exceeds human intelligence. The term was popularized by sci-fi writer Vernor Vinge, but credit for the idea goes to the mathematician John von Neumann, who spoke (in the words of Stanislaw Ulam) of the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

By “not continue,” von Neumann was referring to the potential for humanity to lose control of its technologies and be left behind by them. Today, that technology is assumed to be artificial intelligence, or more precisely, recursively improving artificial intelligence (RIAI), leading to artificial superintelligence (ASI).
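To make that feedback loop concrete, here’s a minimal toy sketch in Python. Every number in it (the starting capability, the per-generation gain, the normalized human baseline) is invented for illustration; it shows only how self-applied improvement compounds, not how any real system would behave.

    # Toy model of recursive self-improvement. All numbers are
    # invented for illustration; nothing is calibrated to real systems.

    HUMAN_BASELINE = 1.0  # normalized "human-level" capability

    def recursive_improvement(capability=0.1, gain=0.05, generations=60):
        """Each generation's improvement is proportional to current
        capability, so gains compound on themselves."""
        history = [capability]
        for _ in range(generations):
            capability += gain * capability  # smarter systems improve faster
            history.append(capability)
        return history

    trajectory = recursive_improvement()
    crossing = next(i for i, c in enumerate(trajectory) if c >= HUMAN_BASELINE)
    print(f"Capability crosses the human baseline at generation {crossing}")

The point of the toy is the shape of the curve: because each step’s gain scales with current capability, progress that looks negligible for dozens of generations crosses the baseline abruptly.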

Because we cannot predict the nature and intentions of an artificial superintelligence, we have come to refer to this sociological event horizon as the Technological Singularity — a concept that’s open to wide interpretation, and consequently, gross misunderstanding. Here are the worst:

“The Singularity Is Not Going to Happen”

Oh, I wouldn’t bet against it. The march of Moore’s Law appears unhindered, while breakthroughs in brain mapping and artificial intelligence continue apace. There are no insurmountable conceptual or technological hurdles awaiting us.
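To put a rough number on that compounding, here’s a back-of-the-envelope sketch assuming the textbook idealization of transistor counts doubling every two years (real doubling periods have varied, so treat the figures as illustrative):

    # Back-of-the-envelope Moore's Law arithmetic. The clean two-year
    # doubling period is an idealization, not a measured figure.

    DOUBLING_PERIOD_YEARS = 2

    def growth_factor(years):
        """How many times transistor counts multiply over `years`."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for horizon in (10, 20, 30):
        print(f"{horizon} years -> roughly {growth_factor(horizon):,.0f}x")
    # 10 years -> roughly 32x; 20 years -> roughly 1,024x; 30 years -> roughly 32,768x

Thirty years of clean doubling buys roughly a 32,768-fold increase; this is the kind of compounding that intuition tends to underrate.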

And what most ASI skeptics fail to understand is that we have yet to even enter the AI era, a time when powerful — but narrow — systems subsume many domains currently occupied by humans. There will be tremendous incentive to develop these systems, for both economic and security reasons. Superintelligence will eventually appear, likely as the product of megacorporations and the military.

This myth might actually be the worst of the bunch, something I’ve referred to as Singularity denialism. Aside from perhaps weaponized molecular nanotechnology, ASI represents the greatest threat to humanity. This existential threat hasn’t yet reached the zeitgeist, but it will eventually get there, probably after our first AI catastrophe. And mark my words: there will come a day when this pernicious tee-hee-rapture-of-the-nerds rhetoric is regarded as on a par with, if not worse than, climate change denialism is today.

“Artificial Superintelligence Will Be Conscious”

Nope. ASIs probably won’t be conscious. We need to see these systems, of which there will be many types, as pimped-up versions of IBM’s Watson or Deep Blue. They’ll work at incredible speeds and be fueled by insanely powerful processors and algorithms — but there will be nobody home.

To be fair, there is the possibility that an ASI could be designed to be conscious. It might even re-design itself to be self-aware. But should this happen, it would still represent a mind-space vastly different from anything we know. A machine mind’s subjective experience would scarcely resemble our own.

As an aside, this misconception ties into the first. Some skeptics argue there will be no Singularity because we’ll never be able to mimic the complexities of human consciousness. But that objection is beside the point: an ASI will be powerful, sophisticated, and dangerous, but not because it’s conscious.

by George Dvorsky, io9 | Read more:
Image: uncredited
[ed. The biggest threat to humans is humans, taking technology to its limits, no matter what the (unintended) consequences.]