Appear in a photo taken at a protest march, a gay bar, or an abortion clinic, and your friends might recognize you. But a machine probably won't—at least for now. Unless a computer has been tasked to look for you, has trained on dozens of photos of your face, and has high-quality images to examine, your anonymity is safe. Nor is it yet possible for a computer to scour the Internet and find you in random, uncaptioned photos. But within the walled garden of Facebook, which contains by far the largest collection of personal photographs in the world, the technology for doing all that is beginning to blossom.
Catapulting the California-based company beyond other corporate players in the field, Facebook's DeepFace system is now as accurate as a human being at a few constrained facial recognition tasks. The intention is not to invade the privacy of Facebook's more than 1.3 billion active users, insists Yann LeCun, a computer scientist at New York University in New York City who directs Facebook's artificial intelligence research, but rather to protect it. Once DeepFace identifies your face in one of the 400 million new photos that users upload every day, “you will get an alert from Facebook telling you that you appear in the picture,” he explains. “You can then choose to blur out your face from the picture to protect your privacy.” Many people, however, are troubled by the prospect of being identified at all—especially in strangers' photographs. Facebook is already using the system, although its face-tagging feature reveals only the identities of your “friends.”
DeepFace isn't the only horse in the race. The U.S. government has poured funding into university-based facial recognition research. And in the private sector, Google and other companies are pursuing their own projects to automatically identify people who appear in photos and videos.
Exactly how automated facial recognition will be used—and how the law may limit it—is unclear. But once the technology matures, it is bound to create as many privacy problems as it solves. “The genie is, or soon will be, out of the bottle,” says Brian Mennecke, an information systems researcher at Iowa State University in Ames who studies privacy. “There will be no going back.” (...)
But DeepFace's greatest advantage—and the aspect of the project that has sparked the most rancor—is its training data. The DeepFace paper breezily mentions the existence of a data set called SFC, for Social Face Classification, a library of 4.4 million labeled faces harvested from the Facebook pages of 4030 users. Although users give Facebook permission to use their personal data when they sign up for the website, the DeepFace research paper makes no mention of the consent of the photos' owners.
“Just as creepy as it sounds,” blared the headline of an article in The Huffington Post describing DeepFace a week after it came out. Commenting on The Huffington Post's piece, one reader wrote: “It is obvious that police and other law enforcement authorities will use this technology and search through our photos without us even knowing.” Facebook has confirmed that it provides law enforcement with access to user data when it is compelled by a judge's order.
by John Bohannon, Science | Read more:
Image: William Duke