Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.
The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.
What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.
I prefer the term “postbiological” to “artificial” because the contrast between biological and synthetic is not very sharp. Consider a biological mind that achieves superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. This creature would be postbiological, although perhaps many wouldn’t call it an “AI.” Or consider a computronium that is built out of purely biological materials, like the Cylon Raider in the reimagined Battlestar Galactica TV series.
The key point is that there is no reason to expect humans to be the highest form of intelligence there is. Our brains evolved for specific environments and are greatly constrained by chemistry and historical contingencies. But technology has opened up a vast design space, offering new materials and modes of operation, as well as new ways to explore that space at a rate much faster than traditional biological evolution. And I think we already see reasons why synthetic intelligence will outperform us.
An extraterrestrial AI could have goals that conflict with those of biological life
Silicon microchips already seem to be a better medium for information processing than groups of neurons. Neurons fire at a peak rate of about 200 hertz, compared with the gigahertz clock speeds of transistors in current microprocessors. Although the human brain is still far more intelligent than a computer, machines have almost unlimited room for improvement. It may not be long before they can be engineered to match or even exceed human intelligence, whether by reverse-engineering the brain and improving upon its algorithms, or by some combination of reverse engineering and judicious algorithms that aren't based on how the brain works.
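To get a feel for the scale of that speed gap, here is a minimal back-of-the-envelope sketch in Python. Both figures are illustrative round numbers assumed for the sake of the calculation: the roughly 200-hertz peak firing rate cited above, and a 3-gigahertz clock standing in for current transistor speeds.

```python
# Back-of-the-envelope comparison of raw switching speeds.
# Both figures are rough, illustrative values, not precise measurements.

NEURON_PEAK_HZ = 200        # ~200 Hz peak firing rate of a biological neuron
TRANSISTOR_CLOCK_HZ = 3e9   # assumed ~3 GHz clock of a current microprocessor

ratio = TRANSISTOR_CLOCK_HZ / NEURON_PEAK_HZ
print(f"A transistor cycles roughly {ratio:,.0f} times faster than a neuron fires.")
# Output: A transistor cycles roughly 15,000,000 times faster than a neuron fires.
```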
In addition, an AI can be downloaded to multiple locations at once, is easily backed up and modified, and can survive under conditions that biological life has trouble with, including interstellar travel. Our measly brains are limited by cranial volume and metabolism; superintelligent AI, in stark contrast, could extend its reach across the Internet and even set up a galaxy-wide computronium, utilizing all the matter within our galaxy to maximize computations. There is simply no contest. Superintelligent AI would be far more durable than us.
Suppose I am right. Suppose that intelligent life out there is postbiological. What should we make of this? Here, current debates over AI on Earth are telling. Two of the main points of contention—the so-called control problem and the nature of subjective experience—affect our understanding of what other alien civilizations may be like, and what they may do to us when we finally meet.
Ray Kurzweil takes an optimistic view of the postbiological phase of evolution, suggesting that humanity will merge with machines, reaching a magnificent technotopia. But Stephen Hawking, Bill Gates, Elon Musk, and others have expressed the concern that humans could lose control of superintelligent AI, as it can rewrite its own programming and outthink any control measures that we build in. This has been called the “control problem”—the problem of how we can control an AI that is both inscrutable and vastly intellectually superior to us. (...)
Why would nonconscious machines have the same value we place on biological intelligence?
...Raw intelligence is not the only issue to worry about. Normally, we expect that if we encountered advanced alien intelligence, we would meet creatures with very different biologies, but with minds like ours in an important sense—there would be something it is like, from the inside, to be them. Consider that every moment of your waking life, and whenever you are dreaming, it feels like something to be you. When you see the warm hues of a sunrise, or smell the aroma of freshly baked bread, you are having conscious experience. Likewise, there is also something that it is like to be an alien—or so we commonly assume. But that assumption needs to be questioned. Would superintelligent AIs even have conscious experience and, if they did, could we tell? And how would their inner lives, or lack thereof, impact us?
The question of whether AIs have an inner life is key to how we value their existence. Consciousness is the philosophical cornerstone of our moral systems, central to our judgment of whether someone or something is a self or person rather than a mere automaton. Conversely, the value an AI places on us may well hinge on whether it has an inner life of its own; using its own subjective experience as a springboard, it could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of other species, we value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from munching on an apple.
But how can beings that differ vastly in intellect and are made of different substrates recognize consciousness in each other? Philosophers on Earth have pondered whether consciousness is limited to biological phenomena. Superintelligent AI, should it ever wax philosophical, could similarly pose a “problem of biological consciousness” about us, asking whether we have the right stuff for experience.
Who knows what intellectual path a superintelligence would take to tell whether we are conscious. But for our part, how can we humans tell whether an AI is conscious? Unfortunately, this will be difficult. Right now, you can tell you are having experience, as it feels like something to be you. You are your own paradigm case of conscious experience. And you believe that other people and certain nonhuman animals are likely conscious, for they are neurophysiologically similar to you. But how are you supposed to tell whether something made of a different substrate can have experience?
by Susan Schneider, Kurzweil Accelerating Intelligence