Julie Mora-Blanco remembers the day, in the summer of 2006, when the reality of her new job sank in. A recent grad of California State University, Chico, Mora-Blanco had majored in art, minored in women’s studies, and spent much of her free time making sculptures from found objects and blown glass. Struggling to make rent while working a post-production job at Current TV, she’d jumped at the chance to work at an internet startup called YouTube. Maybe, she figured, she could pull in enough money to pursue her lifelong dream: to become a hair stylist.
It was a warm, sunny morning, and she was sitting at her desk in the company’s office, located above a pizza shop in San Mateo, an idyllic and affluent suburb of San Francisco. Mora-Blanco was one of 60-odd twenty-somethings who’d come to work at the still-unprofitable website.
Mora-Blanco’s team — 10 people in total — was dubbed The SQUAD (Safety, Quality, and User Advocacy Department). They worked in teams of four to six, some doing day shifts and some night, reviewing videos around the clock. Their job? To protect YouTube’s fledgling brand by scrubbing the site of offensive or malicious content that had been flagged by users, or, as Mora-Blanco puts it, "to keep us from becoming a shock site." The founders wanted YouTube to be something new, something better — "a place for everyone" — and not another eBaum’s World, which had already become a repository for explicit pornography and gratuitous violence.
Mora-Blanco sat next to Misty Ewing-Davis, who, having been on the job a few months, counted as an old hand. On the table before them was a single piece of paper, folded in half to show a bullet-point list of instructions: Remove videos of animal abuse. Remove videos showing blood. Remove visible nudity. Remove pornography. Mora-Blanco recalls that her teammates were a "mish-mash" of men and women; gay and straight; slightly tipped toward white, but also Indian, African-American, and Filipino. Most of them were friends, friends of friends, or family. They talked and made jokes, trying to make sense of the rules. "You have to find humor," she remembers. "Otherwise it’s just painful."
Videos arrived on their screens in a never-ending queue. After watching a couple of seconds apiece, SQUAD members clicked one of four buttons that appeared in the upper right-hand corner of their screens: "Approve" — let the video stand; "Racy" — mark the video as 18-plus; "Reject" — remove the video without penalty; "Strike" — remove the video with a penalty to the account. Click, click, click. But that day Mora-Blanco came across something that stopped her in her tracks.
"Oh, God," she said.
Mora-Blanco won’t describe what she saw that morning. For everyone’s sake, she says, she won’t conjure the staggeringly violent images which, she recalls, involved a toddler and a dimly lit hotel room.
Ewing-Davis calmly walked Mora-Blanco through her next steps: hit "Strike," suspend the user, and forward the person’s account details and the video to the SQUAD team’s supervisor. From there, the information would travel to the CyberTipline, a reporting system launched by the National Center for Missing and Exploited Children (NCMEC) in 1998. Footage of child exploitation was the only black-and-white zone of the job, with protocols outlined and explicitly enforced by law since the late 1990s.
The video disappeared from Mora-Blanco’s screen. The next one appeared.
Ewing-Davis said, "Let’s go for a walk."
Okay. This is what you’re doing, Mora-Blanco remembers thinking as they paced up and down the street. You’re going to be seeing bad stuff.
Almost a decade later, the video and the child in it still haunt her. "In the back of my head, of all the images, I still see that one," she said when we spoke recently. "I really didn’t have a job description to review or a full understanding of what I’d be doing. I was a young 25-year-old and just excited to be getting paid more money. I got to bring a computer home!" Mora-Blanco’s voice caught as she paused to collect herself. "I haven’t talked about this in a long time."
Mora-Blanco is one of more than a dozen current and former employees and contractors of major internet platforms, from YouTube to Facebook, who spoke to us candidly about the dawn of content moderation. Many of these individuals are going public with their experiences for the first time. Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history. As law professor Jeffrey Rosen first said of Facebook many years ago, these platforms have "more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president."
by Catherine Buni & Soraya Chemaly, The Verge | Read more:
Image: Eric Petersen