Saturday, December 31, 2016

Power Poser

When big ideas go bad

Amy Cuddy’s TED talk on power poses has been viewed 37 million times. For comparison purposes, Kanye West’s video "Famous," which features naked celebrities in bed together, has been viewed 21 million times. Cuddy’s talk is the second-most-watched video in TED history, behind only Ken Robinson’s "Do Schools Kill Creativity?" — and, at its current pace, will eventually take over the No. 1 spot, thereby making power poses the most popular idea ever on the most popular idea platform. (...)

As scientific ideas go, power poses could hardly be more clickable. For starters, the idea is simple to understand: Standing like Wonder Woman or in another confident pose for two minutes is enough, Cuddy informs us, to transform a timid also-ran into a fierce go-getter. Even better, this life hack comes straight from an Ivy League professor who published her findings in a peer-reviewed journal, bolstered by charts and percentages and properly formatted citations. This wasn’t feel-good conjecture; this was rock-solid research from a bona fide scientist.

What went unmentioned on those shows, however, was that the study supporting Cuddy’s claims had begun to crumble. Well before the publication of her book, another research team had tried and failed to replicate the most-touted finding — that assuming a power pose leads to significant hormonal changes. In addition, the intriguing discovery that power poses made subjects more willing to take risks seemed dubious. In the wake of the apparent debunking, online science watchdogs sank their teeth into the study, picking apart its methodology and declaring its results risible.

Then, in late September, one of Cuddy’s co-authors, Dana Carney, did something unusual: She posted a detailed mea culpa on her website, siding with the study’s critics. "I do not believe that ‘power pose’ effects are real," wrote Carney, an associate professor of psychology at the University of California at Berkeley’s business school. Her note went on to say that, while the research had been performed in good faith, the data were "flimsy" and the design and analysis, in retrospect, unsound. She discouraged other researchers from wasting their time on power poses.

So how did arguably the most popular idea on the internet end up on the scientific ash heap? For that matter, how could such questionable research migrate from a journal to a viral video to a best seller, circulating for years, retweeted and forwarded and praised by millions, with almost no pushback? The answer tells us something about the practice and promotion of science, and also how both may be changing for the better. (...)

Eva Ranehill was intrigued by power poses. Ranehill, a postdoctoral student in economics at the University of Zurich, had studied differences in risk-taking and competitiveness between boys and girls in an attempt to understand and, ideally, combat gender stereotypes. Maybe, she thought, body posture could play a role in overcoming the gender gaps she had observed.

She decided to give it a go. The design of Ranehill’s study mostly mirrored the original, though there were a few changes. For instance, in the original study, subjects were told how to stand by the experimenters; in the Ranehill study, the instructions were given by a computer, a less-personal approach intended to eliminate any accidental influence from the experimenters. Also — and this was the biggest difference — Ranehill’s study put 200 subjects through the experiment, more than four times as many as the original.

Ranehill didn’t get the same results. Not even close. Testosterone didn’t go up, cortisol didn’t go down. Standing in a power pose didn’t cause people to take more risks in a gambling game. Ranehill hadn’t set out to undermine power poses; she had wanted to build on the idea. But after trying and failing with 200 subjects, it was obvious that something was amiss. "We started talking to others who had done studies on power poses, and it was clear we were not the only ones who couldn’t replicate it," she says.

Ranehill was disappointed, if not entirely surprised. She knew that in recent years the field of social psychology had been dealing with growing suspicions about the reliability of some of its best-known and most exciting findings. Last year an attempt to replicate 100 randomly selected psychological studies, an effort led by Brian Nosek, executive director of the Center for Open Science, found that fewer than half passed the test. It wasn’t so much a case of a few rotten apples, as some hopeful observers had claimed, but rather an entire barrel gone bad. One of the main culprits in this sorry state of affairs is thought to be small sample sizes: too few subjects means there’s a much greater chance that a seemingly significant result is just noise in the data.
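To make the sample-size point concrete, here is a minimal simulation sketch (purely illustrative, with made-up group sizes rather than numbers from any of the studies discussed). When there is no real effect at all, a small study that happens to clear the significance threshold will also report a large apparent effect, because only big chance fluctuations can clear the bar; a larger study is much less prone to that kind of exaggeration.

```python
# Illustrative only: simulate many two-group experiments in which the true
# effect is exactly zero, then look at what the "significant" ones report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_studies(n_per_group, n_studies=10_000):
    significant, reported_effects = 0, []
    for _ in range(n_studies):
        treated = rng.normal(0.0, 1.0, n_per_group)   # true effect = 0
        control = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            significant += 1
            reported_effects.append(abs(treated.mean() - control.mean()))
    return significant / n_studies, np.mean(reported_effects)

for n in (20, 100):  # hypothetical "small" vs. "large" studies
    rate, avg_effect = run_studies(n)
    print(f"n per group = {n:3d}: false-positive rate = {rate:.3f}, "
          f"average reported effect among significant results = {avg_effect:.2f}")
```

With these made-up numbers, both versions produce false positives at roughly the nominal 5 percent rate, but the small-sample false positives report effects more than twice as large, which is exactly the kind of noise that can look like an exciting finding.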

Andrew Gelman wrote about the Ranehill study last year in Statistical Modeling, Causal Inference, and Social Science, the deceptively dull title of his often-irreverent blog. Gelman is a professor of statistics and political science at Columbia University and director of the university’s Applied Statistics Center. He’s taken it upon himself as a sort of hobby, or perhaps a mission of mercy, to expose and correct what he sees as glaring ineptitude in psychological studies.

One problem Gelman has zeroed in on repeatedly is researcher freedom. There’s too much of it, he says. When conducting a study, researchers get to decide which data to exclude, how to code data, and how to analyze the data they produce. They’re also at liberty to alter their theory to comport with any outcome. When you’re not sure exactly what you’re looking for, it’s tempting to seize on some effect — illusory or not — and proceed to manufacture a narrative about why it matters. Choose what works and discard the rest.

This is sometimes called "p-hacking," a reference to the p-value, the measure conventionally used to judge a study’s statistical significance. Gelman doesn’t like that term, because he thinks it implies that researchers are intentionally skewing their results. In some cases they are: Psychology has been shown to have its share of charlatans. But in most cases, he believes, researchers are fooling themselves, too. That’s why he prefers the less disparaging and more poetic phrase "garden of forking paths," borrowed from the title of a short story by Jorge Luis Borges. Scientists are, in Gelman’s formulation, leading themselves down the wrong path.
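The forking-paths problem is easy to demonstrate with another small simulation, again purely illustrative: the alternative analyses below (an outlier rule, a nonparametric test, an early-look subsample) are hypothetical stand-ins, not choices made in the power-pose study. If a researcher tries several defensible-looking analyses of the same null data and reports whichever one reaches significance, the false-positive rate climbs well above the nominal 5 percent even though no single step looks like cheating.

```python
# Illustrative only: flexible analysis of null data inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n = 10_000, 30
hits = 0
for _ in range(n_studies):
    treated = rng.normal(0.0, 1.0, n)   # no true effect anywhere
    control = rng.normal(0.0, 1.0, n)
    p_values = [
        stats.ttest_ind(treated, control).pvalue,                      # plain t-test
        stats.ttest_ind(treated[np.abs(treated) < 2],                  # "outliers" trimmed
                        control[np.abs(control) < 2]).pvalue,
        stats.mannwhitneyu(treated, control).pvalue,                   # alternative test
        stats.ttest_ind(treated[: n // 2], control[: n // 2]).pvalue,  # early-look subsample
    ]
    if min(p_values) < 0.05:            # keep whichever path "worked"
        hits += 1

print(f"False-positive rate with flexible analysis: {hits / n_studies:.3f}")
```

Because the alternative analyses all draw on the same data, the inflation is milder than it would be for independent tests, but it is still enough to turn pure noise into a publishable-looking result without anyone consciously cheating.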

And it happens constantly. You can hear the exasperation in his voice when he talks about the number of flawed studies that worm their way into the pages of seemingly respectable journals. "Once you’re aware of it, you start seeing it everywhere," he says. "It’s like when you’re in New York City and you look around, you don’t notice anything, but when you start looking down at the ground, you see rats everywhere."

Gelman counts power poses among the vermin. "I feel like I care more about the effect of power poses than Amy Cuddy does, in some way, in that I actually care if it really works," he says. "And I don’t think it does."

by Tom Bartlett, Chronicle of Higher Education |  Read more:
Image: uncredited