Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.
Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her.
It is actually a computer-synthesized voice, a tour de force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.
Such a situation is still science fiction — but just barely. It is also the future of crime.
The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”
The irony, of course, is that this year the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security.
But there is a downside.
“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of “Future Crimes.” He added, “This is not about Matthew Broderick hacking from his basement,” a reference to the 1983 movie “War Games.”
The alarm about malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the director of National Intelligence. In his annual review of security, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.
The growing sophistication of computer criminals can be seen in the evolution of attack tools like the widely used malicious program known as Blackshades, according to Mr. Goodman. The author of the program, a Swedish national, was convicted last year in the United States.
The system, which was sold widely in the computer underground, functioned as a “criminal franchise in a box,” Mr. Goodman said. It allowed users without technical skills to deploy computer ransomware or perform video or audio eavesdropping with a mouse click.
The next generation of these tools will add machine learning capabilities that have been pioneered by artificial intelligence researchers to improve the quality of machine vision, speech understanding, speech synthesis and natural language understanding. Some computer security researchers believe that digital criminals have been experimenting with the use of A.I. technologies for more than half a decade.
by John Markoff, NY Times