Tuesday, May 6, 2014

Is Green the New Brown?

I do a lot of driving, most of it highly tedious. Two miles to the grocery store. Six miles to the mall. Twelve miles to work. The sort where every minute seems to count because the whole trip is so wearisome. In that context, it doesn’t take much to piss me off. I start stereotyping. Big pick-up trucks are driven by reckless assholes; European sedans by condescending elitists.

And then there are the bumper stickers, which can drive me batty even when I mostly agree with the political worldview they promote. Does the world really need another “Coexist” message? Or a faded reminder that the owner once believed that Barack Obama was a metonym for change?

Sometimes, though, the stars align to produce a juxtaposition so perverse that it takes my breath away. The other day I was cut off by a Toyota Prius that then proceeded to slam on the brakes, making me miss a crucial left-turn arrow while it rolled through the intersection on red.

I was incensed. The drivers of hybrids are notoriously self-righteous, practically begging everyone else to praise them for saving the world, even though the giant batteries that save them so much money are far from ecologically sound. But in my experience, Prius owners are particularly egregious in this regard.

But the Prius also seems to be the car of choice for overly cautious drivers, the way Volvos were in the 1970s. If I see one in front of me, I change lanes as soon as I can. It’s almost as bad as having a bus ahead of you.

by Charlie Bertsch, Souciant | Read more:
Image: Charlie Bertsch

Monday, May 5, 2014

A Living Wage

The only socialist city councillor in the United States is torn.

On the one hand, Kshama Sawant has claimed an “historic victory” for a populist campaign that pressured Seattle’s mayor, politicians and business owners to embrace by far the highest across-the-board minimum wage in the US at $15 an hour.

On the other, the economics professor accuses the Democratic party establishment and corporate interests of colluding to compromise its implementation as the city council on Monday begins to hammer out the terms for setting pay at more than double the federal minimum wage. Sawant is gearing up to put the issue on the ballot in November’s election if the final legislation is not to her liking – a move Seattle’s mayor has warned could result in “class warfare” as it is likely to pit big business against increasingly vocal low-paid workers and to divide the trade unions.

The Socialist Alternative party’s sole elected representative hailed the looming debate on the legislation as evidence of a growing backlash across the country against the wealthy getting ever richer while working people endure decades of stagnant wages and deepening poverty.

“The fact that the city council of a major city in the US will discuss in the coming weeks raising the minimum wage to $15 is a testament to how working people can push back against the status quo of poverty, inequality and injustice,” she said.

One third of Seattle residents earn less than $15 an hour. A University of Washington study commissioned by the council said the increase would benefit 100,000 people working in the city and reduce poverty by more than one quarter. The pay of full-time workers on today’s minimum wage would increase by about $11,000 a year.

Sawant can claim a good share of the credit for forcing the agenda. Seattle fast-food workers got the movement off the ground early last year by joining nationwide strikes and protests that began in New York. But the Socialist Alternative candidate helped put the $15 demand at the fore of Seattle’s politics by making it the centrepiece of an election campaign she began as a rank outsider against a Democratic incumbent. Sawant won in November with more than 93,000 votes – socialist views, strong denunciations of capitalism and the occasional quoting of Karl Marx evidently no longer an immediate bar to election in the US.

by Chris McGreal, Guardian |  Read more:
Image: Elaine Thompson/AP

Under The Volcano


Americans love Mexican food. We consume nachos, tacos, burritos, tortas, enchiladas, tamales and anything resembling Mexican in enormous quantities. We love Mexican beverages, happily knocking back huge amounts of tequila, mezcal and Mexican beer every year. We love Mexican people—as we sure employ a lot of them. Despite our ridiculously hypocritical attitudes towards immigration, we demand that Mexicans cook a large percentage of the food we eat, grow the ingredients we need to make that food, clean our houses, mow our lawns, wash our dishes, look after our children. As any chef will tell you, our entire service economy—the restaurant business as we know it—in most American cities would collapse overnight without Mexican workers. Some, of course, like to claim that Mexicans are “stealing American jobs”. But in two decades as a chef and employer, I never had ONE American kid walk in my door and apply for a dishwashing job, a porter’s position—or even a job as prep cook. Mexicans do much of the work in this country that Americans, provably, simply won’t do.

We love Mexican drugs. Maybe not you personally, but “we”, as a nation, certainly consume titanic amounts of them—and go to extraordinary lengths and expense to acquire them. We love Mexican music, Mexican beaches, Mexican architecture, interior design, Mexican films.

So, why don’t we love Mexico?

We throw up our hands and shrug at what happens and what is happening just across the border. Maybe we are embarrassed. Mexico, after all, has always been there for us, to service our darkest needs and desires. Whether it’s dressing up like fools and getting pass-out drunk and sunburned on spring break in Cancun, throwing pesos at strippers in Tijuana, or getting toasted on Mexican drugs, we are seldom on our best behavior in Mexico. They have seen many of us at our worst. They know our darkest desires.

by Anthony Bourdain |  Read more:
Image: uncredited

This is What Comes After Search

The average person with an Android smartphone is using it to search the web, from a browser, only 1.25 times per day, says Roi Carthy, CEO of Tel Aviv-based mobile startup Everything.Me. That isn’t just bad news for Google, which still relies on ads placed along search results for the bulk of its revenue—it also signals a gigantic, fundamental shift in how people interact with the web. It’s a shift upon which fortunes will be made and lost.

Carthy knows how often people use search on Android because once you install his company’s Everything.Me software, it replaces the home screen on an Android smartphone with one that is uniquely customized to you. And then Everything.Me collects data on how often you search, plus a whole lot else, including where you are, where you go, which apps you use, the contents of your calendar, etc.

This kind of data collection is key to how Everything.Me works, and if Carthy and his investors, who have already sunk $37 million into his company, are right, it’s the sort of thing many other companies will be doing on smartphones, all in the name of bringing people what comes after search.

Context is the new search

We’re accustomed to turning on our phones and seeing the same set of icons in the same place every time. But Everything.Me upends this interface convention and shows people different icons depending on the context in which they find themselves. For example, if Everything.Me knows you’re in a new city, it will show you apps that could aid your navigation in that city—like Uber and Lyft—even if you’ve never downloaded them before. Or, based on apps you and people like you have enjoyed in the past, Everything.Me will show you games and entertainment apps under an “I’m bored” tab. (Tabs for different pages full of apps are one way Everything.Me allows users to tell the phone even more about their current context.)

If it’s time to eat, Everything.Me will show you restaurants nearby you might enjoy, and if it’s time to go out, it will show you activities and hotspots you’re likely to want to check out.

Carthy says that, in contrast to the paltry number of times users of Everything.Me are searching the web each day, they’re engaging in context-based interactions with their customized home screens dozens of times a day.

In other words, in the old days, if you wanted to do something—navigate to the restaurant where you’ve got a dinner reservation—you might open a web browser and search for its address. But in the post-search world of context—in which our devices know so much about us that they can guess our intentions—your phone is already displaying a route to that restaurant, as well as traffic conditions, and how long it will take you to get there, the moment you pull your phone out of your pocket.
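To make the idea concrete, here is a minimal sketch, in Python, of the kind of context-to-suggestion rule being described. The signal names, thresholds, and suggested apps are illustrative assumptions, not Everything.Me's actual logic.

```python
# Illustrative sketch of a context-aware home screen rule, loosely modeled on
# the behavior described above. All signal names and app suggestions are
# hypothetical; the real system is far more sophisticated.
from datetime import datetime


def suggest_apps(context):
    """Return a list of app suggestions for the current context."""
    suggestions = []

    # Traveling? Surface transport and navigation apps the user may not have.
    if context.get("in_new_city"):
        suggestions += ["Uber", "Lyft", "Maps"]

    # Around mealtimes, surface nearby restaurants.
    hour = context.get("hour", datetime.now().hour)
    if hour in (12, 13, 19, 20):
        suggestions += ["Yelp", "OpenTable"]

    # An explicit "I'm bored" tab falls back to games and entertainment
    # inferred from what similar users have enjoyed.
    if context.get("tab") == "I'm bored":
        suggestions += context.get("popular_games", ["2048", "Threes"])

    return suggestions


print(suggest_apps({"in_new_city": True, "hour": 12}))
```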

Most consumer tech giants are piling into context

Context-aware software for smartphones is all the rage among tech giants. In just the past year, Twitter bought Android home screen startup Cover, Apple bought smart assistant Cue, Yahoo bought Cover competitor Aviate, and of course Google has pioneered the field of learning everything about a person so that it can push data to them before they even know they need it, with its Google Now service.

Yahoo CEO Marissa Mayer has been especially explicit about what this new age of context means. “Contextual search aims to give people the right information, at the right time, by looking at signals such as where they’re located and what they’re doing—such as walking or driving a car,” she said at a recent conference. “Mobile devices tend to provide a lot more of those signals….When I look at things like contextual search, I get really excited.”

Notice that Mayer said “contextual search” and not just “context.” That’s a nod to the fact that software designed to deliver information based on context is still using search engines to get that information; it’s just that the user doesn’t have to interact with the search engine directly.

by Christopher Mims, Quartz | Read more:
Image: Chris Pizzello/AP

White-Collar World

In or around the year 1956, the percentage of American workers who were "white collar" exceeded the percentage that were blue collar for the first time. Although labor statistics had long foretold this outcome, what the shift meant was unclear, and little theoretical work had prepared anyone to understand it. In the preceding years, the United States had quickly built itself up as an industrial powerhouse, emerging from World War II as the world’s leading source of manufactured goods. Much of its national identity was predicated on the idea that it made things. But thanks in part to advances in automation, job growth on the shop floor had slowed to a trickle. Meanwhile, the world of administration and clerical work, and new fields like public relations and marketing, grew inexorably—a paperwork empire annexing whole swaths of the labor force, as people exchanged assembly lines for metal desks, overalls for gray-flannel suits.

It’s hard to retrieve what this moment must have been like: An America that was ever not dominated by white-collar work is pretty difficult to recall. Where cities haven’t fallen prey to deindustrialization and blight, they have gentrified with white-collar workers, expelling what remains of their working classes to peripheries. The old factory lofts, when occupied, play host to meeting rooms and computers; with the spread of wireless technology, nearly every surface can be turned into a desk, every place into an office. We are a nation of paper pushers.

What it means to be a paper pusher, of course, seems to have changed dramatically (not least because actual paper isn’t getting carted around as much as it used to). The success of a show like Mad Men capitalizes on our sense of profound distance from the drinking, smoking, serial-philandering executive egos idolized in the era of the organization man. Many of the problems associated with white-collar work in midcentury—bureaucracy, social conformity, male chauvinism—have, if not gone away, at least come into open question and been seriously challenged. It would be hard to accuse the colorful, open, dog-friendly campuses of Silicon Valley of the beehivelike sameness and drabness that characterized so many 1950s offices, with their steno pools and all-white employees. On the surface, contemporary office life exudes a stronger measure of freedom than it ever did: More and more women have come to occupy higher rungs of the corporate ladder; working from home has become a more common reality, helping to give employees more ostensible control over their workday; people no longer get a job and stick with it, leading to more movement between companies.

At the same time, we are undergoing one of the most prolonged and agonizing desiccations of the white-collar, middle-class ideal in American history. Layoffs feel as common to late capitalist offices as they were to Gilded Age factories; freedom in one’s choice of workplace really reflects the abrogation of a company’s sense of loyalty to its employees; and insecurity has helped to enforce a regime of wage stagnation. In universities, the very phrase "academic labor" has become a byword for dwindling job protection. White-collar workers report experiencing higher levels of stress than their blue-collar counterparts do, and many work long hours without overtime pay. The increasingly darkening mood of frantic busyness—punctuated by bouts of desperate yoga—that has settled over American life owes much to the country’s overall shift to a white-collar world, where the rules resemble very little those of the world it left behind.

In other words, what the office has done to American life should be a topic of central importance. But there is still only one book, now more than 60 years old, that has tried to figure out what the new dominance of white-collar work means for society: White Collar: The American Middle Classes, by C. Wright Mills.

Few books inaugurate a field of study and continue to tower over it in the way White Collar has; its title alone is authoritative. It sums up and it commands. Even if we are not all white-collar workers now, white-collar work has become central to social life in ways so ubiquitous as to be invisible. Mills was practically the first to notice this and to explore its ramifications. His findings not only stand alone in the literature on the subject but loom over the others in their eerie prescience and power.

It helped his book that, as a personality, Mills, in his mid-30s when the book came out, was far from any dry middle-manager drone he analyzed, let alone the tweedy sonorousness of his Columbia colleagues Lionel Trilling and Jacques Barzun. Students who witnessed his arrival at class would see him dismount a motorcycle and adjust his leather jacket, lugging a duffel bag crammed with books that he would fling onto the seminar table. His unprofessorial style corresponded to an intellectual nonconformism. A scourge of the blandly complacent, "value neutral" social theory that formed the academic consensus of his day, Mills was also hostile to the orthodox Marxist accents that had been fashionable in the speech of the 1930s. Unfortunately, the dominance of the latter especially made it impossible to understand what class position white-collar workers occupied, and what it meant. Under the most popular (or "vulgar") version of Marxism, the various strata of clerical and professional workers grouped under the heading "white collar" were supposed to dissolve eventually into the working class: In the terms of left-wing German sociology, they were a Stehkragen, or "stiff collar," proletariat.

Mills was unimpressed by all that. The more he looked at white-collar workers, the more he saw that their work made their lives qualitatively different from those of manual workers. Where manual workers exhibited relatively high rates of unionization—solidarity, in other words—white-collar workers tended to rely on themselves, to insist on their own individual capacity to rise through the ranks—to keep themselves isolated. The kind of work they did was partly rationalized, the labor divided to within an inch of its life. Mills constantly emphasized the tremendous growth of corporations and bureaucracies, the sheer massiveness of American institutions—words like "huge" and "giant" seem to appear on every page of his book. At the same time, so much of their work was incalculably more social than manual labor, a factor that particularly afflicted the roles afforded to female white-collar workers: Salesgirls had to sell their personalities in order to sell their products; women in the office were prized as much for their looks or demeanor as for their skills or capabilities.

What Mills realized was that, where backbreaking labor was the chief problem for industrial workers, psychological instability was the trial that white-collar workers endured, and on a daily basis.

by Nikil Saval, Chronicle of Higher Education |  Read more:
Image: David Plunkert for The Chronicle Review

All the World’s an App

I used to ask the internet everything. I started young. In the late 1980s, my family got its first modem. My father was a computer scientist, and he used it to access his computer at work. It was a silver box the size of a book; I liked its little red lights that told you when it was on and communicating with the world. Before long, I was logging onto message boards to ask questions about telescopes and fossils and plots of science fiction TV shows.

I kept at it for years, buying new hardware, switching browsers and search engines as needed. And then, around 2004, I stopped. Social media swallowed my friends whole, and I wanted no part of it. Friendster and Myspace and Facebook—the first great wave of social networking sites—all felt too invasive and too personal. I didn’t want to share, and I didn’t want to be seen.

So now, 10 years on, Facebook, iMessaging, and Twitter have passed me by. It’s become hard to keep up with people. I get all my news—weddings, moves, births, deaths—second-hand, from people who saw something on someone else’s feed. I never know what’s going on. In return, I have the vain satisfaction of feeling like the last real human being in a world of pods. But I am left wondering: what am I missing out on? And is everyone else missing out on something I still have?

Virginia Woolf famously said that on or about December 1910 human character changed. We don’t yet know if the same thing happened with the release of the iPhone 5—but, as the digital and “real” worlds become harder to distinguish from each other, it seems clear that something is shifting. The ways we interact with each other and with the world have altered. Yet the writing on this subject—whether it’s by social scientists, novelists or self-styled “internet intellectuals”—still doesn’t seem to have registered the full import of this transformation. (...)

The behaviour of teens online can be baffling. But are they really more “risk-averse,” “dependent,” “superficial” and “narcissistic” than kids in the past? And are they in danger in some new, hard-to-track way? Danah Boyd, a researcher at New York University and Microsoft, isn’t so sure. In It’s Complicated, her detailed new anthropological inquiry into the internet habits of American teenagers, she does much to dispel many of the alarmist myths that surround young people and social media.

Boyd has spent over a decade interviewing teens about their use of social media, and in the process has developed a nuanced feel for how they live their online lives. Throughout It’s Complicated, she shows teens to be gifted at alternating between different languages and modes of self-presentation, assuming different personas for different audiences and switching platforms (say, between Facebook and Twitter and Ask.fm) based on their individual interests and levels of privacy. She also suggests that many of the fears associated with teens and the internet—from bullying to addiction—are overblown. She argues convincingly, for instance, that “Social media has not radically altered the dynamics of bullying, but it has made these dynamics more visible to more people.”

Social media may not lead to more bullying or addiction, but it does create lots of drama. Boyd and her sometime-collaborator Alice Marwick define drama as “performative, interpersonal conflict that takes place in front of an active, engaged audience, often on social media.” Essentially, “drama” is what keeps school from being boring, and what makes it such hell. It’s also the reason teenagers spend so much time online. The lure isn’t technology itself, or the utopian dream of a space in which anyone could become anything, which drew many young people to the internet in its early bulletin-board and newsgroup days; it’s socialising. Teens go online to “be with friends on their own terms, without adult supervision, and in public”—and Boyd argues that this is now much more difficult than it used to be. She portrays the US as a place in which teens are barred from public spaces such as parks and malls, and face constant monitoring from parents, teachers and the state. This is a paranoid country, in which parents try to channel all their children’s free time into structured activities and are so afraid of predators that they don’t allow their children outside alone. In this “culture of fear” social media affords teens one of their few avenues for autonomous expression.

Parents never understand; but Boyd makes the case that adult cluelessness about the multiple uses teens find for social media—everything from sharing jokes to showing off for university recruiters—can be especially harmful now. She tells the story of a teenager from south central Los Angeles who writes an inspiring college entrance essay about his desire to escape his gang-ridden neighbourhood. But when admissions officers at the Ivy League university to which he’s applying Google him, they are shocked to discover that his MySpace profile is filled with gang symbolism and references to gang activities. They do not consider that this might be a survival strategy instead of a case of outright deception.

by Jacob Mikanowski, Prospect |  Read more:
Image: uncredited 

Serf Board

In the summer of 2009, the U.S. economy was hemorrhaging jobs: between April and October of that year, the national unemployment rate rose to 10 percent, and the stock market had plummeted to nearly half its pre-crisis value.

Money stood at the forefront of collective anxiety: every day seemed to generate new tales of friends getting laid off or of more companies closing up shop. That summer, I discovered people were resorting to making money online using a service called Amazon Mechanical Turk, or “MTurk” as it’s colloquially known. MTurk was started in 2005 by Amazon CEO Jeff Bezos and director of Amazon Web Services Peter Cohen as a way to solve problems with Amazon’s ever-expanding data set. The premise was that a distributed crowd of humans could easily complete tasks that computers found too challenging. The jobs, called Human Intelligence Tasks (HITs) on the service, might be to match the name of an item—say a 95-ounce box of Tide detergent—to the correct product image. Pay for HITs like this typically ranges from $0.01 to $0.20, and the tasks often must be completed within a limited amount of time.
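For readers curious what requesting such a task looks like in practice, here is a minimal sketch using Amazon's current boto3 MTurk client (the requester interface has changed since the period described here). The title, reward, and question text are illustrative assumptions, not details from the article.

```python
# A minimal sketch of posting a HIT programmatically with the modern boto3
# MTurk client. Requires AWS requester credentials; values are illustrative.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <p>Does this image show a 95-ounce box of Tide detergent?</p>
      <img src="https://example.com/product.jpg" />
      <!-- answer form omitted for brevity -->
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Match a product name to its image",
    Description="Confirm whether the pictured item matches the listed product.",
    Keywords="image, matching, categorization",
    Reward="0.05",                     # US dollars, paid per assignment
    MaxAssignments=1,
    AssignmentDurationInSeconds=300,   # worker has five minutes to finish
    LifetimeInSeconds=86400,           # HIT stays listed for one day
    Question=question_xml,
)
print(response["HIT"]["HITId"])
```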

With my curiosity piqued, I began surveying workers on MTurk, asking them to tell their stories through short memoirs. What started as research about a tool I might use in my artistic practice became a much deeper experience. The stories I heard were parables of everyday life, success, and struggle. Now, five years later, I have come back to them to see whether the story of crowdsourced labor has changed.

MTurk drew its name and conceptual model from an 18th-century invention by Hungarian nobleman Wolfgang von Kempelen: a mechanical chess-playing automaton that supposedly “defeated nearly every opponent it faced.” In truth, it was a hoax: A real, human chess master was hiding inside the machine.

To extend this conceit to Amazon’s platform, Bezos coined a clever, glossy euphemism, describing the service as “artificial artificial intelligence.” Looking through the FAQ page for MTurk gives a better sense of what this artificial artificial intelligence might entail:
When we think of interfaces between human beings and computers, we usually assume that the human being is the one requesting that a task be completed, and the computer is completing the task and providing the results. What if this process were reversed and a computer program could ask a human being to perform a task and return the results? What if it could coordinate many human beings to perform a task?
At first, MTurk seemed appealing as a tool that could complete work like any other software with the unique exception of being powered by an unnamed, globally distributed group of people. I envisioned doing a kind of conceptual exploration of this virtual workspace, which could then lead to future collaborative projects with the platform. But soon, I found myself preoccupied by a truly basic question: Who were the people fulfilling these requests? Who were the chess players within the machine?

My hunch was that the workers using MTurk were middle-class skilled workers like myself. To test this hypothesis, I used MTurk to hire some of them to tell me why they were on it. Since MTurk tasks needed to take only minutes to complete, I requested a brief, 250-word account and let them know that I would share their story on a Tumblr, which I titled The Mechanical Turk Diaries. I decided to pay $0.25 per story, which at the time seemed like a high rate relative to other HITs on the platform.

by Jason Huff, TNI |  Read more:
Image: uncredited

A Brief History of Auto-Tune

A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing-up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp” as another engineer explained), in this case the vocalist was all over the map, making it difficult to always know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job decidedly inappropriate.

Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.

The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.
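As a rough illustration of the difference between hard tuning and tuning "artistically," here is a toy Python sketch that pulls a sung pitch toward the nearest equal-tempered note with an adjustable strength knob. It is a conceptual sketch only, not Auto-Tune's actual algorithm.

```python
# Toy illustration of partial vs. full pitch correction: snap a detected
# frequency toward the nearest equal-tempered note, with a strength knob.
import math

A4 = 440.0  # reference pitch in Hz


def correct_pitch(freq_hz, strength=1.0):
    """Pull freq_hz toward the nearest semitone; strength=1 snaps fully,
    strength=0 leaves the note untouched."""
    # Convert to a fractional MIDI note number and find the nearest semitone.
    midi = 69 + 12 * math.log2(freq_hz / A4)
    target = round(midi)
    # Interpolate between the sung pitch and the target, then back to Hz.
    corrected_midi = midi + strength * (target - midi)
    return A4 * 2 ** ((corrected_midi - 69) / 12)


sung = 452.0  # a slightly sharp A4
print(correct_pitch(sung, strength=1.0))   # hard-tuned: ~440 Hz
print(correct_pitch(sung, strength=0.4))   # "artistic": only partway there
```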

Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.

Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has most likely been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, it doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.

Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”

by Owen Marshall, Sounding Out |  Read more:
Image: Ethan Hein

Sunday, May 4, 2014

WHCD 2014



[ed. Too funny. Apparently the annual White House Correspondents' Dinner circle jerk took place again last night. I love Joe Biden and JL-D.]

Trust But Verify - How Airbnb and Lyft Finally Got Americans to Trust Each Other

In about 40 minutes, Cindy Manit will let a complete stranger into her car. An app on her windshield-mounted iPhone will summon her to a corner in San Francisco’s South of Market neighborhood, where a russet-haired woman in an orange raincoat and coffee-colored boots will slip into the front seat of her immaculate 2006 Mazda3 hatchback and ask for a ride to the airport. Manit has picked up hundreds of random people like this. Once she took a fare all the way across the Golden Gate Bridge to Sausalito. Another time she drove a clown to a Cirque du Soleil after-party.

“People might think I’m a little too trusting,” Manit says as she drives toward Potrero Hill, “but I don’t think so.”

Manit, a freelance yoga instructor and personal trainer, signed up in August 2012 as a driver for Lyft, the then-nascent ride-sharing company that lets anyone turn their car into an ad hoc taxi. Today the company has thousands of drivers, has raised $333 million in venture funding, and is considered one of the leading participants in the so-called sharing economy, in which businesses provide marketplaces for individuals to rent out their stuff or labor. Over the past few years, the sharing economy has matured from a fringe movement into a legitimate economic force, with companies like Airbnb and Uber the constant subject of IPO rumors. (One of these startups may well have filed an S-1 by the time you read this.) No less an authority than New York Times columnist Thomas Friedman has declared this the age of the sharing economy, which is “producing both new entrepreneurs and a new concept of ownership.”

The sharing economy has come on so quickly and powerfully that regulators and economists are still grappling to understand its impact. But one consequence is already clear: Many of these companies have us engaging in behaviors that would have seemed unthinkably foolhardy as recently as five years ago. We are hopping into strangers’ cars (Lyft, Sidecar, Uber), welcoming them into our spare rooms (Airbnb), dropping our dogs off at their houses (DogVacay, Rover), and eating food in their dining rooms (Feastly). We are letting them rent our cars (RelayRides, Getaround), our boats (Boatbound), our houses (HomeAway), and our power tools (Zilok). We are entrusting complete strangers with our most valuable possessions, our personal experiences—and our very lives. In the process, we are entering a new era of Internet-enabled intimacy.

This is not just an economic breakthrough. It is a cultural one, enabled by a sophisticated series of mechanisms, algorithms, and finely calibrated systems of rewards and punishments. It’s a radical next step for the person-to-person marketplace pioneered by eBay: a set of digital tools that enable and encourage us to trust our fellow human beings.

Manit is 30 years old but has the delicate frame of an adolescent. She wears a thin kelly-green hoodie and distressed blue jeans, and her cropped dark hair pokes out from under her purple stocking cap. Yet despite her seemingly vulnerable appearance, she says she has never felt threatened or uneasy while driving for Lyft. “It’s not just some person off the street,” she says, tooling under the 101 off-ramp and ticking off the ways in which driving for Lyft is different from picking up a random hitchhiker. Lyft riders must link their account to their Facebook profile; their photo pops up on Manit’s iPhone when they request a ride. Every rider has been rated by their previous Lyft drivers, so Manit can spot bad apples and avoid them. And they have to register with a credit card, so the ride is guaranteed to be paid for before they even get into her car. “I’ve never done anything like this, where I pick up random people,” Manit says, “but I’ve gotten used to it.”
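The safeguards Manit ticks off amount to a screening checklist, something like the following toy Python sketch. The field names and rating cutoff are hypothetical illustrations, not Lyft's actual rules.

```python
# Toy illustration of the kind of trust checks described above. Field names
# and the minimum-rating threshold are hypothetical.
def screen_rider(rider, min_rating=4.5):
    """Return (accepted, reason) for a ride request."""
    if not rider.get("facebook_linked"):
        return False, "account is not linked to a real-world identity"
    if not rider.get("card_on_file"):
        return False, "no payment method, so the ride is not guaranteed"
    ratings = rider.get("ratings", [])
    if ratings and sum(ratings) / len(ratings) < min_rating:
        return False, "flagged by previous drivers"
    return True, "ok"


rider = {"facebook_linked": True, "card_on_file": True, "ratings": [5, 5, 4]}
print(screen_rider(rider))  # (True, 'ok')
```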

Then again, Manit has what academics call a low trust threshold. That is, she is predisposed to engage in behavior that other people might consider risky. “I don’t want to live my life always guarding myself. I put it out there,” she says. “But when I told my friends and family about it—even my partner at the time—they were like, uh, are you sure? This seems kind of creepy.”

by Jason Tanz, Wired |  Read more:
Image: Gus Powell

Glenn Greenwald and Michael Hayden Debate Surveillance


Every year, Canada's Munk Debates feature high-level, high-profile debates on burning policy issues. This year, they debated surveillance, and the participants were Glenn Greenwald and Reddit co-founder Alexis Ohanian on the anti-surveillance side and former NSA and CIA chief Michael Hayden and Harvard law professor Alan Dershowitz on the pro-surveillance side. Although the debating partners do a lot in this, the real freight is carried by Hayden and Greenwald, both of whom are more fact-intensive than the others.

I have a bias here, but I think that Greenwald wiped the floor with Hayden (the post-debate polls from the room support this view). It was particularly useful to have Hayden grilled by a well-informed opponent who was allowed to go after the easy dismissals and glib deflections. Normally, he gets to deliver some well-polished talking points and walk away -- this was something I hadn't seen before.

This is just about the best video you're going to watch on the surveillance debate. It kicks off around the 30m mark.

by Cory Doctorow, Boing Boing

Can't Explain


[ed. Repost. Original version here.]