In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: “Advances in A.I. Are Used to Spot Signs of Sexuality.” But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work “dangerous” and “junk science.” (They claimed it had not been peer reviewed, though it had.) Within the week, the tech-news site The Verge ran an article that, while carefully reported, was nonetheless topped with a scorching headline: “The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.”
Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they “liked” Poetry Magazine. In 2014, they published a study showing that a model given 200 of a user’s likes could predict that person’s personality-test answers better than the person’s own romantic partner could.
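To make that idea concrete, here is a minimal sketch of likes-based prediction in Python. The data is synthetic and the trait scores are invented stand-ins; only the general structure, a sparse likes matrix feeding a linear model whose predictions are compared against self-reports, reflects the kind of study described above.

```python
# Sketch of predicting a personality trait from page "likes".
# Synthetic data throughout; the structure (sparse likes matrix ->
# linear model -> correlation with self-report) is the point.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 20000

# Each row is a user, each column a page (e.g. "Poetry Magazine");
# density is chosen so users average roughly 200 likes each.
likes = sparse_random(n_users, n_pages, density=0.01,
                      random_state=0, data_rvs=np.ones).tocsr()

# Stand-in for self-reported questionnaire scores on one trait.
openness = np.asarray(likes.mean(axis=1)).ravel() * 500 + rng.normal(0, 1, n_users)

X_tr, X_te, y_tr, y_te = train_test_split(likes, openness, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Work in this area typically reports the correlation between
# predicted and self-reported trait scores.
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"correlation with self-report: {r:.2f}")
```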
After getting his Ph.D., Kosinski landed a teaching position at the Stanford Graduate School of Business and soon started looking for new data sets to investigate. One in particular stood out: faces. For decades, psychologists have been leery of associating personality traits with physical characteristics, because of the lasting taint of phrenology and eugenics; studying faces this way was, in essence, a taboo. But to find out what questioning that taboo might reveal, Kosinski knew he couldn’t rely on human judgment.
Kosinski first mined 200,000 publicly posted dating profiles, complete with pictures and information ranging from personality to political views. Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles. The algorithm failed to turn up much, until, on a lark, Kosinski turned its attention to sexual orientation. The results almost defied belief. In previous research, the best any human had done at guessing sexual orientation from a profile picture was about 60 percent — slightly better than a coin flip. Given five pictures of a man, the deep neural net could predict his sexuality with as much as 91 percent accuracy. For women, that figure was lower but still remarkable: 83 percent.
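As a rough illustration of the pipeline described here, the sketch below uses a generic pretrained image network as a stand-in for the Oxford face-recognition model, random tensors as stand-ins for profile photos, and a simple logistic-regression classifier on top of the extracted features. Every name and number is a placeholder; only the shape of the approach, deep-network features with a lightweight classifier fitted on top, follows the description in the article.

```python
# Sketch: extract deep-net features from photos, then fit a simple
# classifier on those features. Stand-in model and random "photos"
# keep the example self-contained; real inputs would be cropped
# profile pictures and their labels.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretrained network used only as a fixed feature extractor.
backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

# Stand-ins for cropped profile photos (N, 3, 224, 224) and binary labels.
photos = torch.rand(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,)).numpy()

with torch.no_grad():
    features = backbone(photos).numpy()  # one 2048-dim vector per image

# A lightweight linear classifier over the deep features.
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```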
Much like his earlier work, Kosinski’s findings raised questions about privacy and the potential for discrimination in the digital age, suggesting scenarios in which better programs and data sets might be able to deduce anything from political leanings to criminality. But there was another question at the heart of Kosinski’s paper, a genuine mystery that went almost ignored amid all the media response: How was the computer doing what it did? What was it seeing that humans could not?
It was Kosinski’s own research, but when he tried to answer that question, he was reduced to a painstaking hunt for clues. At first, he tried covering up or exaggerating parts of faces, trying to see how those changes would affect the machine’s predictions. The results were inconclusive. But Kosinski knew that women, in general, have bigger foreheads, thinner jaws and longer noses than men. So he had the computer spit out the 100 faces it deemed most likely to be gay or straight and averaged the proportions of each group. It turned out that the faces of gay men exhibited slightly more “feminine” proportions, on average, and that the converse was true for women. If this was accurate, it could support the idea that testosterone levels — already known to mold facial features — help mold sexuality as well.
But it was impossible to say for sure. Other evidence seemed to suggest that the algorithms might also be picking up on culturally driven traits, like straight men wearing baseball hats more often. Or — crucially — they could have been picking up on elements of the photos that humans don’t even recognize. “Humans might have trouble detecting these tiny footprints that border on the infinitesimal,” Kosinski says. “Computers can do that very easily.”
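The first of those probes, covering parts of a face and watching the prediction move, is essentially an occlusion test. Below is a minimal, hypothetical sketch of that procedure in Python; the toy model and random image are stand-ins to keep it runnable, and only the masking loop itself mirrors the kind of experiment described above.

```python
# Occlusion probe: mask one patch of the image at a time and record how
# far the model's output moves from its prediction on the intact image.
import torch

def occlusion_map(model, image, patch=32, stride=32):
    """Grid of prediction shifts caused by blanking each image patch."""
    model.eval()
    with torch.no_grad():
        baseline = model(image.unsqueeze(0)).squeeze()
        _, h, w = image.shape
        rows, cols = h // stride, w // stride
        shifts = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                masked = image.clone()
                y, x = i * stride, j * stride
                masked[:, y:y + patch, x:x + patch] = 0.0  # cover this region
                pred = model(masked.unsqueeze(0)).squeeze()
                shifts[i, j] = (baseline - pred).abs().item()
    return shifts

# Toy stand-ins, purely to make the sketch self-contained.
toy_model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 224 * 224, 1),
    torch.nn.Sigmoid(),
)
heatmap = occlusion_map(toy_model, torch.rand(3, 224, 224))
print(heatmap)  # larger values mark regions the prediction depends on
```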
It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.
This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad, but it fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has leapt into a future that academics and tech companies are only beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.
Those techniques can seem inescapably alien to our own ways of thinking. Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.
by Cliff Kuang, NY Times | Read more:
Image: Derek Brahney. Source photo: J.R. Eyerman/The Life Picture Collection/Getty
[ed. See also: Caught in the Web]