These days, advancements in artificial intelligence are not only making rich people billions of dollars but also inspiring wild-eyed fear-mongering about the end of civilization. Those concerned include Elon Musk, who has said that the technology could eventually produce an “immortal dictator,” and the late Stephen Hawking, who warned that the sudden explosion of artificial intelligence could be “the worst event in the history of our civilization.” Generally, the fear is that we will produce machines so intelligent that they are capable of becoming smarter and smarter until we no longer have control over them. They will become a new form of life that will rule over us the way we do the rest of the animal kingdom.
As a professional in the AI industry, I can tell you that given the state of the technology, most of these predictions take us so far into the future that they’re closer to science fiction than reasoned analysis. Before we get to the point where computers have an unstoppable “superintelligence,” there are much more pressing developments to worry about. The technology that already exists, or is about to exist, is dangerous enough on its own.
Let me focus on some real-world developments that are terrifyingly immediate. Of the many different kinds of artificial neural networks (algorithms modeled on a rough approximation of how groups of neurons in your brain operate, and the workhorses of what is commonly called AI), I will focus on two: Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs).
GANs are good at making counterfeit images, and thus videos as well. A GAN is made up of two neural networks that are trained against each other on examples of a certain thing, like a bathroom or an animal or a person of a certain identity. One network, the generator, produces new images of the thing on its own. The other, the discriminator, is presented with a stream of these counterfeit images with real images interspersed and tries to guess which are fakes. No human judge is needed: because the training process knows which images came from the generator, it can automatically tell each network where it succeeded and failed. Each then adjusts itself to do better, and they push each other to greater and greater heights of success. Once training is complete, the generator can churn out convincing new images all by itself.

RNNs work with data that exists as an ordered sequence, such as a record of daily high temperatures in a city or the words in a paragraph. They read a sequence one item at a time, carrying forward a memory of what came before, which lets them predict, or generate, what comes next. Processing and generating written and spoken communication are two of the tasks RNNs are most commonly used for.
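To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It is purely illustrative: to stay self-contained it uses a simple one-dimensional number distribution as the “real” data instead of photographs, and the network sizes, learning rates, and step counts are arbitrary assumptions rather than anyone’s production recipe.

```python
# Minimal GAN sketch (PyTorch). The "real" data is just samples from a
# normal distribution N(4, 1.25) -- a stand-in for real photographs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a candidate "image" (here, one number).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs its guess that an input is real (1) vs. fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" samples
    fake = G(torch.randn(64, 8))             # counterfeits from the generator

    # Discriminator update: no human labeler needed -- the loop itself
    # knows which batch is real and which came from the generator.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: rewarded when the discriminator mistakes its
    # fakes for real data.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The generator's output distribution should drift toward mean 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```

The key point is in the two update steps: the discriminator is penalized for misclassifying, the generator is penalized when its fakes are caught, and neither needs a human to keep score.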
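The RNN side can be sketched just as briefly: a character-level model that learns to predict the next character in a sequence, then generates text by feeding its own predictions back in. Again, the training text, layer sizes, and greedy decoding here are hypothetical choices made only to keep the example short.

```python
# Minimal character-level RNN sketch (PyTorch): learn to predict the
# next character, then generate text one character at a time.
import torch
import torch.nn as nn

torch.manual_seed(0)
text = "the quick brown fox jumps over the lazy dog "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
data = torch.tensor([idx[c] for c in text])

embed = nn.Embedding(len(chars), 16)            # characters -> vectors
rnn = nn.RNN(16, 32, batch_first=True)          # carries memory across steps
head = nn.Linear(32, len(chars))                # hidden state -> next-char scores
opt = torch.optim.Adam(
    [*embed.parameters(), *rnn.parameters(), *head.parameters()], lr=1e-2)

for epoch in range(200):
    x = embed(data[:-1].unsqueeze(0))           # every character but the last
    out, _ = rnn(x)
    logits = head(out).squeeze(0)
    # Target for each position is simply the following character.
    loss = nn.functional.cross_entropy(logits, data[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: feed the model its own (greedy) predictions.
h = None
c = data[:1]
result = [chars[c.item()]]
for _ in range(40):
    out, h = rnn(embed(c).view(1, 1, -1), h)
    c = head(out).argmax(dim=-1).view(1)
    result.append(chars[c.item()])
print("".join(result))
```

Production speech and text systems are far larger and more elaborate than this, but the core mechanic, predicting the next item in a sequence and sampling from those predictions, is the same.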
A computer program that can generate convincing images, or another that can understand human speech and generate it, might not seem world-shaking. But as these “counterfeiters” steadily improve, the implications are immense. GANs can produce photorealistic images and videos of nonexistent people. Magazines and advertisers can simply replace real people with generated pictures, saving money on photo shoots, which require lighting, sets, technicians, photographers, and models. Stock photos will no longer be of people pretending to be students, professionals, workmen, etc. They will be computers pretending to be people. Many of the images you see on the internet will be of people who literally do not exist. If that sounds implausible, realize that it’s just another small step in the kind of fakery that already occurs through Photoshop and CGI. It just means that instead of starting with a photo, you can start by asking the computer to generate one. (...)
If you think “fake news” is a problem now, just wait. When an image can be generated of literally anyone doing literally anything with perfect realism, truth is going to get a whole lot slipperier. The videos will soon catch up to the images, too. Already, it’s possible to make a moderately convincing clip that puts words in Barack Obama’s mouth. Fake security camera footage, fake police body camera footage, fake confessions: we are getting close. Marco Rubio has worried that “a foreign intelligence agency could use the technology to produce a fake video of an American politician using a racial epithet or taking a bribe” or a “fake video of a U.S. soldier massacring civilians overseas.” More worrying is what the U.S. military and police forces could do with it themselves. It didn’t take much deception to manipulate the country into supporting the invasion of Iraq. Fake intelligence is going to become a whole lot more difficult to disprove.
AI-generated images and videos are not just going to cast doubt on reporting; they will also pose a major challenge for the legal system. Photographic evidence at trial will always be in doubt once generated images can’t be distinguished from real ones by human experts or by other AIs. Counterfeits can also be used to manufacture alibis, while genuine images are dismissed as the fakes. In this dizzying world of forgery and illusion, how is anyone going to know what to believe? So-called “deepfake” videos will make Donald Trump’s claims of “fake news” that much more plausible and difficult to counter.
Mimicking ordinary human speech is fast becoming a cinch. Google recently unveiled a new AI assistant that can talk like a person. It even puts “ums” and “uhs” where they need to go. Called Duplex, it can run on a cell phone, and it not only sounds like a human but can interact like one. In Google’s demo, Duplex called a hair salon and made an appointment; the woman on the line had no idea she wasn’t talking to a person. Google says it is building Duplex “to sound natural, to make the conversation experience comfortable.”
Imagine how tomorrow’s technology could have worked in 2016. Two days before the election, a video appears, showing Hillary Clinton muttering “I can’t believe Wisconsin voters are so stupid,” supposedly caught on a “hot mic” at a rally in Eau Claire. It circulates on Facebook through the usual right-wing channels. Clinton says she never said it, and she didn’t. It doesn’t matter. It’s impossible to tell it’s fake. The fact-checkers look into it and find that there never was an event in Eau Claire, and that Clinton had never even been to Wisconsin. It doesn’t matter. By that time, the video is at 10 million shares. The “Wisconsin can’t believe you’re so stupid” shirts are already being printed. Clinton loses, Trump becomes president. Catastrophe. (...)
By far the most serious and most frightening AI development is in military technology: armed, fully autonomous attack drones that can be deployed in swarms and might ultimately use their own judgment to decide when and whom to kill. Think that’s an exaggeration? The Department of Defense literally writes on its websites about new plans to improve the “autonomy” of its armed “drone swarms.” Here’s FOX News, which seems excited about the new developments:
No enemy would want to face a swarm of drones on the attack. But enemies of the United States will have to face the overwhelming force of American drone teams that can think for themselves, communicate with each other and work together in hundreds to execute combat missions…. Say you have a bomb maker responsible for killing a busload of children, our military will release 50 robots – a mix of ground robots and flying drones…Their objective? They must isolate the target within 2 square city blocks within 15 to 30 minutes max… It may sound farfetched – but drone swarm tech for combat already exists and has already been proven more than possible.
The focus here is on small quadcopter drones, designed to be deployed en masse to kill urban civilians, rather than the large Predator drones used to murder entire rural wedding parties in Muslim countries. DARPA’s repulsive Twitter account openly boasts about the plan: “Our OFFSET prgm envisions future small-unit infantry forces using unmanned aircraft systems and/or unmanned ground systems in swarms of >250 robots for missions in urban environment.” The Department of Defense is spending heavily in pursuit of this goal—their 2018 budgetary request contained $457 million for R&D in the technology. Combined with our new $275 million drone base in Niger, the United States is going to have a formidable new capacity to inflict deadly harm using killer robots.
Perhaps more telling, the Department of Defense is also spending heavily on counter-drone systems. They know from experience that other entities will acquire this technology, and that they’ll need to fight back. But while the offensive murder technology is likely to be incredibly effective, the defensive efforts aren’t going to work. Why? Because a swarm of cheap drones controlled by AI is almost unstoppable. Indeed, the DoD counter-drone efforts are pathetic and comically macabre: “The Air Force has purchased shotgun shells filled with nets and the Army has snatched up the Dronebuster, a device used to jam the communications of consumer drones…the Army and Navy are developing lasers to take down drones.” Lord help me, shotgun shells with nets! And jamming is little help against drones that fly themselves: it might disrupt a swarm’s coordination and make it less effective, but there would still be hundreds of autonomous drones, each trying to kill you on its own.
It’s ironic, given all the fear that powerful members of the tech industry and government have about killer AI taking over the world, that they are silent as we literally build killer robots.
by Ryan Metz, Current Affairs | Read more:
Image: uncredited
[ed. For proof, look no further than today's news: White House Releases Doctored Video To Back Up Attack on CNN Reporter (With video - TPM).]