In a small apartment in a small town in northeastern Mississippi, Sarah Marshall sits at her computer, clicking bubbles for an online survey, as her 1-year-old son plays nearby. She hasn’t done this exact survey before, but the questions are familiar, and she works fast. That’s because Marshall is what you might call a professional survey-taker. In the past five years, she has completed roughly 20,000 academic surveys. This is her 21st so far this week. And it’s only Tuesday.
Marshall is a worker for Amazon’s Mechanical Turk, an online job forum where “requesters” post jobs, and an army of crowdsourced workers complete them, earning fantastically small fees for each task. The work has been called microlabor, and the jobs, known as Human Intelligence Tasks, or HITs, range wildly. Some are tedious: transcribing interviews or cropping photos. Some are funny: prank calling someone’s buddy (that’s worth $1) or writing the title to a pornographic movie based on a collection of dirty screen grabs (6 cents). And others are downright bizarre. One task, for example, asked workers to strap live fish to their chests and upload the photos. That paid $5 — a lot by Mechanical Turk standards.
Mostly, Marshall is a sort of cyber guinea pig, providing a steady stream of data to academic research. This places her squarely inside a growing culture of super-savvy, highly experienced study participants.
As she works, she hears a rustling noise. “Grayson, are you in my garbage can?”
In the kitchen, the trash can’s on its side. Her son has liberated an empty box of cinnamon rolls and dumped the can’s remaining contents on the floor. She goes to him, scoops him up and carries him back to the living room, where he circles the carpet, chattering happily as she resumes typing.
“I’m never going to be absolutely undistracted, ever,” Marshall says, and smiles.
Her employers don’t know that Marshall works while negotiating her toddler’s milk bottles and giving him hugs. They don’t know that she has seen studies similar to theirs maybe hundreds, possibly thousands, of times.
Since its founding in 2005, Mechanical Turk has become an increasingly popular way for university researchers to recruit subjects for online experiments. It’s cheap, easy to use, and the responses, powered by the forum’s 500,000 or so workers, flood in fast.
These factors are such a draw for researchers that, in certain academic fields, crowdsourced workers are outpacing psychology students — the traditional go-to study subjects. And the studies are a huge draw for many workers, who tend to participate again and again and again.
These aren’t obscure studies that Turkers are feeding. They span dozens of fields of research, including social, cognitive and clinical psychology, economics, political science and medicine. They teach us about human behavior. They deal in subjects like energy conservation, adolescent alcohol use, managing money and developing effective teaching methods.
“Most of what’s happening in these studies involves trying to understand human behavior,” said Yale University’s David Rand. “Understanding bias and prejudice, and how you make financial decisions, and how you make decisions generally that involve taking risks, that kind of thing. And there are often very clear policy implications.”
As the use of online crowdsourcing in research continues to grow, some are asking the question: How reliable are the data that these modern-day research subjects generate?
by Jenny Marder, PBS NewsHour
Image: Edel Rodriguez