All over Europe, smartphones rang in the middle of the night. Rolling over in bed, blinking open their eyes, civilians reached for the little devices and, in the moment of answering, were effectively drafted as soldiers. They shook themselves awake as they listened to hushed descriptions of a looming threat. Over the next few days and nights, in mid-July of last year, the ranks of these sudden draftees grew, as software analysts and experts in industrial-control systems gathered in makeshift war rooms in assorted NATO countries. Government officials at the highest levels monitored their work. They faced a crisis which did not yet have a name, but which seemed, at first, to have the potential to bring industrial society to a halt.
A self-replicating computer virus, called a worm, was making its way through thousands of computers around the world, searching for small gray plastic boxes called programmable-logic controllers—tiny computers about the size of a pack of crayons, which regulate the machinery in factories, power plants, and construction and engineering projects. These controllers, or P.L.C.’s, perform the critical scut work of modern life. They open and shut valves in water pipes, speed and slow the spinning of uranium centrifuges, mete out the dollop of cream in each Oreo cookie, and time the change of traffic lights from red to green.
Although controllers are ubiquitous, knowledge of them is so rare that many top government officials did not even know they existed until that week in July. Several major Western powers initially feared the worm might represent a generalized attack on all controllers. If the factories shut down, if the power plants went dark, how long could social order be maintained? Who would write a program that could potentially do such things? And why?
As long as the lights were still on, though, the geek squads stayed focused on trying to figure out exactly what this worm intended to do. They were joined by a small citizen militia of amateur and professional analysts scattered across several continents, after private mailing lists for experts on malicious software posted copies of the worm’s voluminous, intricate code on the Web. In terms of functionality, this was the largest piece of malicious software that most researchers had ever seen, and orders of magnitude more complex in structure. (Malware’s previous heavyweight champion, the Conficker worm, was only one-twentieth the size of this new threat.) During the next few months, a handful of determined people finally managed to decrypt almost all of the program, which a Microsoft researcher named “Stuxnet.” Their first glimpse of what lay inside scared the hell out of them.
“Zero Day”
One month before that midnight summons—on June 17—Sergey Ulasen, the head of the Anti-Virus Kernel department of VirusBlokAda, a small information-technology security company in Minsk, Belarus, sat in his office reading an e-mail report: a client’s computer in Iran just would not stop rebooting. Ulasen got a copy of the virus that was causing the problem and passed it along to a colleague, Oleg Kupreev, who put it into a “debugger”—a software program that examines the code of other programs, including viruses. The men realized that the virus was infecting Microsoft’s Windows operating systems using a vulnerability that had never been detected before. A vulnerability that has never been detected before, and that a program’s creator does not know exists, is called a “zero day.” In the world of computer security, a Windows zero-day vulnerability signals that the author is a pro, and discovering one is a big event. Such flaws can be exploited for a variety of nefarious purposes, and they can sell on the black market for as much as $100,000.