Tuesday, May 31, 2016
Steph Curry and the Warriors' Astonishing Season
[ed. Curry is like a young Tiger Woods. Totally dominating (and a beauty to watch).]
A perfectly executed jump shot is among the most beautiful things in all of sports. Dunks are more ostentatiously athletic, but they end in a staccato thud, a brute pounding of the basket. Like baseball swings, tennis shots, and tackles in football, dunks have the quality of abrupt endings, of momentous and violent collisions between moving objects and determined people. A jump shot, by contrast, is all flow. An athlete takes one or two steps, pauses to gather his body, and then leaps—straight up, if he’s doing things correctly. As he rises, he holds the ball in the palm of one hand while the other gently cradles it from behind. The shooting arm is bent at a ninety-degree angle, its upper half parallel to the floor. The shooting elbow travels upward on a narrow path within inches of the torso, to insure an efficient and steady glide; the other arm folds over, framing the face of the shooter. Then the shooting arm uncoils, exploding into a rigid staff: the ball is launched upward, at an angle, usually, between forty and sixty degrees relative to the shooter’s body. At the top of the player’s leap, before letting the ball go, his shooting hand offers its final adjustments by flicking quickly forward and letting the ball roll backward off the fingertips, insuring that it spins in the opposite direction from which it’s travelling. The backspin works to insure that whatever the ball grazes—the backboard, a piece of the rim—will have minimal effect on its path. When all of this is done just so, the ball drops noiselessly into the woven nylon net, rippling the ropes as little as possible on its way through. By then, a confident shooter will already be on his way down the floor, perhaps with a hand raised in triumph as thousands of people scream and high-five strangers. A perfect shot is swift and noiseless and, when delivered at the right time, can end an opponent’s chances. This is why the man who has lately become its most storied practitioner has been called the Baby-Faced Assassin.
That killer, Stephen Curry, of the Golden State Warriors, has the most precise and stunning jump shot in the history of the National Basketball Association. Unlike other shooters, who pause with the ball at hip height to gather and set themselves, his is a single-motion shot: he enters a deep knee bend and cocks his hands to shoot in one swift, upward-flinging movement. He can charge down the floor at top speed, then halt completely and fire a shot in less than a second. And that shot is far more likely to go in than one taken by nearly anyone else. His form is precise, his jumps are generally upright, and his shooting forearm is never more than five degrees from vertical. His follow-through is textbook. Curry executes perfection faster than anyone else in professional basketball.
In March, I made my way to Oracle Arena, in Oakland, where I live, to see him do this in person. The Warriors’ home court sits beside one of the least attractive freeways in one of the least attractive parts of the city. The 880, which stretches from downtown Oakland to the rival tech hub San Jose, is narrow and crowded, marked with potholes, jammed with smoke-spewing semis, and hemmed in by graffiti-covered walls on one side and crumbling office parks on the other. The arena rises from an area of forgotten roads and weed-covered lots. (...)
Inside the arena, nearly twenty thousand fans were giddy. Sellouts were common at Oracle even during the team’s lean years; on this night, thousands arrived early just to watch Curry perform his increasingly legendary warmup routine. He started with a series of dribble drills and layups. As fans amassed in the lower seats, he began flinging mid-range jumpers, nailing shot after shot. Soon he was working beyond the three-point arc, a team assistant feeding him balls that he sent, almost without fail, through the hoop. Then he started in on his proprietary material: shots launched from well beyond the arc, far outside any region where reasonable players are comfortable. The crowd yelled for each make, groaned at each miss. Curry was either unaware of our presence or teasing us by pretending to be. By the time he reached half-court, it had become sheer performance, a circus act. With each shot taken from the mid-court logo, you could hear people inhaling breaths that weren’t let out until the ball landed.
Then Curry prepared for his final trick. A security guard cleared out the hallway that leads to the home locker room, barking at wayward reporters and the occasional oblivious visitor who happened to wander through with a hot dog and an all-access pass. Curry set up halfway down the tunnel, nearly fifty feet from the basket, directly to one side of it. He could no longer be seen from the stands. Suddenly a ball flew from the tunnel as though shot by a cannon; it bounced off the front rim, and the crowd let out a moan. The security guard, clearly relishing his role, bounced another ball down the hallway. It, too, was thrown into the air, this time sailing through the hoop, hitting nothing but net. The arena erupted. Curry shot three more from the tunnel, making one. The guard announced that the show was over and handed Curry a marker with which the point guard signed hats, posters, and jerseys for kids who had been granted special access. He waved to the crowd and ran down the hallway at full speed, disappearing into the locker room. The game hadn’t even begun, and everyone present had already been thoroughly entertained.
Monday, May 30, 2016
How PreCheck Made Airport Security Lines Longer
[ed. How about just going back to what we had before - metal detectors and x-rays for carry-on baggage? Put that bloated TSA budget to better use by employing more Sky Marshals in the air? (Probably won't happen, because that would mean fewer seats for the airlines to sell.)]
- Lots more people are flying.
- Budget cutbacks have forced the Transportation Security Administration to make do with fewer screeners.
- The TSA is a bungling bureaucracy.
Still -- in part because growth in air travel, tight federal budgets and bureaucratic bungling all seem somewhat inevitable at this point -- it's important to look for other causes of the security-line bottleneck. And yes, people have been looking, and finding.
Bloomberg View’s editorial board argues today, for example, that the TSA’s list of items that are prohibited on planes is a big part of the problem. The list is outdated and largely nonsensical, yet ninnies in Congress object to any proposed change. It’s a classic case of “security theater” trumping actual security.
Another angle, pushed this month by Homeland Security Secretary Jeh Johnson and Democratic Senators Ed Markey and Richard Blumenthal, is that the fees airlines charge for checked baggage are causing travelers to bring more, bigger carry-ons that slow security checks. My initial thought was that these bag charges couldn't be the problem because they've been around for years. But it turns out that American became the first legacy carrier to impose a fee for checking even one bag in May 2008, just as air travel was beginning a steep decline. Now that passenger numbers are finally setting new records again, it's not unreasonable to think that the baggage charges are at least part of the security-line problem.
Finally, there’s what I think may be the most interesting culprit of all: A seemingly good idea for reducing security waits that at this point is probably making them longer. This would be TSA PreCheck, the streamlined screening program (you can keep your shoes and light jacket on and your laptop and toiletries in your luggage, and usually get to go through a metal detector instead of a full-body scanner) that was rolled out in 2011 and expanded in 2013 as what then-TSA chief John Pistole called the “happy lane.” As Bloomberg’s Alan Levin and Jeff Plungis reported Thursday:
For the small portion of travelers now in the program that provides access to short lines, it may fit John Pistole’s description. But for passengers who don’t participate, it has contributed to security screening delays and growing tensions at airports because far fewer people signed up than the agency projected.
Devoting staff and machines to PreCheck screening lanes with hardly anybody going through them means taking them away from general screening lanes with lots of people going through them. Some airport TSA managers have reacted to the crush of traffic this spring by shutting down PreCheck lanes, which is understandable but leaves no one happy. I experienced this early one Saturday morning this month at Houston’s George Bush Intercontinental Airport. Those of us with the telltale checkmark on our boarding passes got to stand in a separate, shorter line to get our IDs checked, but were then put in the same security screening line as everybody else. When we finally got close to the machines, I told a TSA worker that I had PreCheck and asked if I needed to take my laptop out or my shoes off. Laptop comes out but shoes can stay on, she said, then allowed me to cut in front of several people and go through the metal detector instead of the scanner -- without ever looking at my boarding pass to verify that I was in fact PreChecked. Now that's security!
by Justin Fox, Bloomberg | Read more:
Image: Paul J. Richards
The Complete List of 'OK, Google' Commands
With Google Now, you can use voice commands to create reminders, get help with trivia questions, and, yes, even find out "what does the fox say?"
And, with today's update from Google, the voice assistant's response will sound more natural than ever. (Though, of all the voice assistants, Google Now was already the most natural-sounding one.)
If you can't get enough of talking to your phone (or your Android Wear watch), we put together a long list of "OK, Google" commands to help you get more done with just your voice.
OK, Google
There are two ways to say a command.
- With newer Android devices, just say "OK Google," followed by a question or task. For example, if I wanted to know the weather, I could say "OK Google, what's the weather like today?" and a few seconds later Google Now would provide the forecast.
- Tap on the microphone button in the Google search bar, and skip the "OK, Google" portion of the conversation. If the search bar isn't on a home screen, swipe right from the primary home screen to see Google Now.
The (almost) complete list of Google commands
We searched high and low for a complete list of "OK Google" commands, but came up short. So we put one together ourselves. Below is a list of commands we have verified work on Android. Odds are it's not entirely complete, since Google did not share one with us -- we asked.
If you know of a command missing from our list, please leave a comment and we will be sure to include it.
The basics
- Open [app name]. Example: "Open Gmail."
- Go to [website]. Ex.: "Go to CNET.com."
- Call [contact name]. Ex.: "Call Mom."
- Text or Send text to [contact name]. Ex.: "Text Wife I'm running late."
- Email or Send email. Ex.: "Email Wife subject Hi message I'm running late, sorry." You can also add CC and BCC recipients.
- Show me my last messages. This will present a list of recent messages, and read them to you, giving you a chance to reply.
- Create a calendar event or Schedule an appointment. Ex.: "Create appointment Go on a walk tomorrow at 10 a.m."
- Set an alarm for [specific time, or amount of time]. Ex.: "Set alarm for 10 a.m." Or "Set alarm for 20 minutes from now."
- Set a timer for [X] minutes.
- Note to self [contents of note].
- Start a list for [list name].
- Send Hangout message to [contact name].
- Remind me to [do a task]. Ex.: "Remind me to get dog food at Target," will create a location-based reminder. "Remind me to take out the trash tomorrow morning," will give you a time-based reminder.
- Show me my pictures from [location]. Ex.: "Show me my pictures from San Francisco."
- Show me my calendar.
- When's my next meeting?
- Where is my next meeting?
- Post to Twitter.
- Post to Google+.
- Show me [app category] apps. Ex.: "Show me gaming apps."
- Start a run.
- Show me emails from [contact name].
by Jason Cipriani, CNET | Read more:
Image: James Martin
Sunday, May 29, 2016
All the Songs Are Now Yours
This is a new world. Cloud streaming represents a bigger revolution than the one brought in by CDs or MP3s, which made music merely cheaper to buy and easier to carry around. Just five years ago, if you wanted to listen legally to a specific song, you bought it (on CD, on MP3), which, assuming finite resources, meant you had to choose which song to buy, which in turn meant you didn’t buy other songs you had considered buying. Then, a person with $10 to spend could have purchased five or six songs, or, if he was an antiquarian, an album. Now, with $10, that same person can subscribe to a streaming service for a month and hear all five or six songs he would have purchased with that money, plus 20 million or so others.
Spotify estimates that fully four million of the songs it carries have never been streamed, not even once. There is a vast musical frontier waiting to be explored, but it is already mapped: using various search methods, you could find every bluegrass song ever written with the word “banana” in it, or every Finnish death metal album, or Billy Bragg and Wilco’s recording of the Woody Guthrie lyric “Walt Whitman’s Niece,” or a gospel group called Walt Whitman and the Soul Children of Chicago, or Whitman himself, reading “America” on an album called 100 Great Poems: Classic Poets and Beatnik Freaks.
Freed from the anguish of choosing, music listeners can discover all kinds of weird, nettlesome, unpleasant, sublime, sweet, or perplexing musical paths. These paths branch off constantly, so that by the end of a night that started with the Specials, you’re listening to Górecki’s Miserere, not by throwing a dart, but by following the quite specific imperatives of each moment’s needs, each instant’s curiosities. It is like an open-format video game, where you make the world by advancing through it.
None of these lines of affinity or innuendo exists until a single person’s mind makes them, and to look back upon the path that any given night at home has taken you (a queue shows you the songs you’ve played; you can back it up and replay those songs in order, if you like) is to learn something about your own intuitions as they reveal themselves musically. Spotify will give you data about what songs or artists you’ve listened to the most. It is always a surprise; in 2015, my top song was “The Surrey with the Fringe on Top,” as sung by Blossom Dearie, my top album was Fleetwood Mac’s Tusk, and my top artist was Bach. The person who made those choices is not a person I’m conscious of being.
This is one of the most amazing things about streaming music. Not that you can listen to Albanian bluegrass or child yodelers from Maine, but that you can recreate your past with a level of detail never before possible. Or, taking it a step further: you can recover experiences from your past, passions, insights, shames you’d forgotten. I’m forty-four; everything I heard before I was twelve or so forms a kind of musical preconscious that was only occasionally accessible, in flashes and glimpses, before streaming came along. It is as though a box of photographs was discovered of my childhood. Not just the events these photographs capture, but the entire ambient world of the past, its ashtrays and TV trays, the color of the carpet, my mother’s favorite sweater: all of it comes back, foreground and background, when you rediscover the music that was on when you were a child.
The brutal winnowing of the past, selecting perhaps two dozen pop songs a year, a few albums—this vast oversimplification, this canceling of all that the culture deems tangential or unimportant—is now itself a thing of the past. Kids growing up in this environment are having a genuinely new human experience: nothing in the past is lost, which means temporal sequence itself—where the newest things are closest and most vivid, while the oldest things dwell in the dark backward and abysm of time—gets lost. Everything exists on one plane, so it is harder than before to know exactly where we stand in time.
Every Song Ever is a brilliant guide to listening to music in this new environment, where concentration must become aware of itself as its criteria shift abruptly from genre to genre, composer to composer, culture to culture. Ratliff proposes a “language” that will allow such lateral moves between unlike compositions. It is, as he writes, “not specifically musical,” referring, instead, to “generalized human activity”:
Therefore, perhaps not “melody,” “harmony,” “rhythm,” “sonata form,” “oratorio.” Perhaps, instead, repetition, or speed, or slowness, or density, or discrepancy, or stubbornness, or sadness. Intentionally, these are not musical terms per se…. Music and life are inseparable. Music is part of our physical and intellectual formation…. We build an autobiography and a self-image with music, and we know, even as we’re building them, that they’re going to change.
It is part of the astonishment of our current era that a statement that might once have seemed an empty, Music 101 platitude—“music and life are inseparable”—is now an acute observation with important technical ramifications. It is now possible for a person to synchronize the outside world to music, to make the world a manifestation of the music she chooses to hear. A record of those choices, viewed years after the fact, suggests the fine-grained emotional and imaginative lives we live while apparently doing nothing, or nothing of note. Play the songs you heard on February 2, 2013, in the order in which you played them, and you can recreate not just the emotions but the suspense and surprise of emotion as it changes in time.
by Dan Chiasson, NY Review of Books | Read more:
Image: Miguel Rio Branco/Magnum Photos
Why You Will Marry the Wrong Person
[ed. This is a more recent formulation of an earlier article posted here.]
Partly, it’s because we have a bewildering array of problems that emerge when we try to get close to others. We seem normal only to those who don’t know us very well. In a wiser, more self-aware society than our own, a standard question on any early dinner date would be: “And how are you crazy?”
Perhaps we have a latent tendency to get furious when someone disagrees with us or can relax only when we are working; perhaps we’re tricky about intimacy after sex or clam up in response to humiliation. Nobody’s perfect. The problem is that before marriage, we rarely delve into our complexities. Whenever casual relationships threaten to reveal our flaws, we blame our partners and call it a day. As for our friends, they don’t care enough to do the hard work of enlightening us. One of the privileges of being on our own is therefore the sincere impression that we are really quite easy to live with.
Our partners are no more self-aware. Naturally, we make a stab at trying to understand them. We visit their families. We look at their photos, we meet their college friends. All this contributes to a sense that we’ve done our homework. We haven’t. Marriage ends up as a hopeful, generous, infinitely kind gamble taken by two people who don’t know yet who they are or who the other might be, binding themselves to a future they cannot conceive of and have carefully avoided investigating.
For most of recorded history, people married for logical sorts of reasons: because her parcel of land adjoined yours, his family had a flourishing business, her father was the magistrate in town, there was a castle to keep up, or both sets of parents subscribed to the same interpretation of a holy text. And from such reasonable marriages, there flowed loneliness, infidelity, abuse, hardness of heart and screams heard through the nursery doors. The marriage of reason was not, in hindsight, reasonable at all; it was often expedient, narrow-minded, snobbish and exploitative. That is why what has replaced it — the marriage of feeling — has largely been spared the need to account for itself.
What matters in the marriage of feeling is that two people are drawn to each other by an overwhelming instinct and know in their hearts that it is right. Indeed, the more imprudent a marriage appears (perhaps it’s been only six months since they met; one of them has no job or both are barely out of their teens), the safer it can feel. Recklessness is taken as a counterweight to all the errors of reason, that catalyst of misery, that accountant’s demand. The prestige of instinct is the traumatized reaction against too many centuries of unreasonable reason.
But though we believe ourselves to be seeking happiness in marriage, it isn’t that simple. What we really seek is familiarity — which may well complicate any plans we might have had for happiness. We are looking to recreate, within our adult relationships, the feelings we knew so well in childhood. The love most of us will have tasted early on was often confused with other, more destructive dynamics: feelings of wanting to help an adult who was out of control, of being deprived of a parent’s warmth or scared of his anger, of not feeling secure enough to communicate our wishes. How logical, then, that we should as grown-ups find ourselves rejecting certain candidates for marriage not because they are wrong but because they are too right — too balanced, mature, understanding and reliable — given that in our hearts, such rightness feels foreign. We marry the wrong people because we don’t associate being loved with feeling happy.
We make mistakes, too, because we are so lonely. No one can be in an optimal frame of mind to choose a partner when remaining single feels unbearable. We have to be wholly at peace with the prospect of many years of solitude in order to be appropriately picky; otherwise, we risk loving no longer being single rather more than we love the partner who spared us that fate.
Finally, we marry to make a nice feeling permanent. We imagine that marriage will help us to bottle the joy we felt when the thought of proposing first came to us: Perhaps we were in Venice, on the lagoon, in a motorboat, with the evening sun throwing glitter across the sea, chatting about aspects of our souls no one ever seemed to have grasped before, with the prospect of dinner in a risotto place a little later. We married to make such sensations permanent but failed to see that there was no solid connection between these feelings and the institution of marriage.
Indeed, marriage tends decisively to move us onto another, very different and more administrative plane, which perhaps unfolds in a suburban house, with a long commute and maddening children who kill the passion from which they emerged. The only ingredient in common is the partner. And that might have been the wrong ingredient to bottle.
The good news is that it doesn’t matter if we find we have married the wrong person.
by Alain de Botton, NY Times | Read more:
Image: Marion Fayolle
Book Review: Age of Em
So, what is the Age of Em?
According to Hanson, AI is really hard and won’t be invented in time to shape the posthuman future. But sometime a century or so from now, scanning technology, neuroscience, and computer hardware will advance enough to allow emulated humans, or “ems”. Take somebody’s brain, scan it on a microscopic level, and use this information to simulate it neuron-by-neuron on a computer. A good enough simulation will map inputs to outputs in exactly the same way as the brain itself, effectively uploading the person to a computer. Uploaded humans will be much the same as biological humans. Given suitable sense-organs, effectuators, virtual avatars, or even robot bodies, they can think, talk, work, play, love, and build in much the same way as their “parent”. But ems have three very important differences from biological humans.
First, they have no natural body. They will never need food or water; they will never get sick or die. They can live entirely in virtual worlds in which any luxuries they want – luxurious penthouses, gluttonous feasts, Ferraris – can be conjured out of nothing. They will have some limited ability to transcend space, talking to other ems’ virtual presences in much the same way two people in different countries can talk on the Internet.
Second, they can run at different speeds. While a normal human brain is stuck running at the speed that physics allow, a computer simulating a brain can simulate it faster or slower depending on preference and hardware availability. With enough parallel hardware, an em could experience a subjective century in an objective week. Alternatively, if an em wanted to save hardware it could process all its mental operations v e r y s l o w l y and experience only a subjective week every objective century.
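As a rough back-of-the-envelope sketch (mine, not Hanson's), the speed ratios implied by that second difference are easy to pin down: a subjective century per objective week works out to a speedup of roughly five thousand times, and the slowed-down case is simply its reciprocal.

```python
# Back-of-the-envelope arithmetic for em clock speeds (illustrative only).
# The "subjective century per objective week" figure comes from the text above.

DAYS_PER_WEEK = 7
DAYS_PER_CENTURY = 100 * 365.25  # using the average Gregorian year length

fast_speedup = DAYS_PER_CENTURY / DAYS_PER_WEEK   # subjective time per unit of objective time
slow_speedup = DAYS_PER_WEEK / DAYS_PER_CENTURY   # the mirror-image "slow em"

print(f"Fast em: ~{fast_speedup:,.0f}x real time")   # ~5,218x
print(f"Slow em: ~{slow_speedup:.6f}x real time")    # ~0.000192x
```

The economically interesting question in Hanson's world is the hardware cost of sustaining that parallelism, not the arithmetic itself, but the ratio gives a sense of the scales involved.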
Third, just like other computer data, ems can be copied, cut, and pasted. One uploaded copy of Robin Hanson, plus enough free hardware, can become a thousand uploaded copies of Robin Hanson, each living in their own virtual world and doing different things. The copies could even converse with each other, check each other’s work, duel to the death, or – yes – have sex with each other. And if having a thousand Robin Hansons proves too much, a quick ctrl-x and you can delete any redundant ems to free up hard disk space for Civilization 6 (coming out this October!)
Would this count as murder? Hanson predicts that ems will have unusually blasé attitudes toward copy-deletion. If there are a thousand other copies of me in the world, then going to sleep and not waking up just feels like delegating back to a different version of me. If you’re still not convinced, Hanson’s essay Is Forgotten Party Death? is a typically disquieting analysis of this proposition. But whether it’s true or not is almost irrelevant – at least some ems will think this way, and they will be the ones who tend to volunteer to be copied for short-term tasks that require termination of the copy afterwards. If you personally aren’t interested in participating, the economy will leave you behind.
The ability to copy ems as many times as needed fundamentally changes the economy and the idea of economic growth. Imagine Google has a thousand positions for Ruby programmers. Instead of finding a thousand workers, they can find one very smart and very hard-working person and copy her a thousand times. With unlimited available labor supply, wages plummet to subsistence levels. “Subsistence levels” for ems are the bare minimum it takes to rent enough hardware from Amazon Cloud to run an em. The overwhelming majority of ems will exist at such subsistence levels. On the one hand, if you’ve got to exist on a subsistence level, a virtual world where all luxuries can be conjured from thin air is a pretty good place to do it. On the other, such starvation wages might leave ems with little or no leisure time.
Sort of. This gets weird. There’s an urban legend about a “test for psychopaths”. You tell someone a story about a man who attends his mother’s funeral. He met a really pretty girl there and fell in love, but neglected to get her contact details before she disappeared. How might he meet her again? If they answer “kill his father, she’ll probably come to that funeral too”, they’re a psychopath – ordinary people would have a mental block that prevents them from even considering such a drastic solution. And I bring this up because after reading Age of Em I feel like Robin Hanson would be able to come up with some super-solution even the psychopaths can’t think of, some plan that gets the man a threesome with the girl and her even hotter twin sister at the cost of wiping out an entire continent. Everything about labor relations in Age of Em is like this. (...)
There are a lot of similarities between Hanson’s futurology and (my possibly erroneous interpretation of) the futurology of Nick Land. I see Land as saying, like Hanson, that the future will be one of quickly accelerating economic activity that comes to dominate a bigger and bigger portion of our descendants’ lives. But whereas Hanson’s framing focuses on the participants in such economic activity, playing up their resemblances with modern humans, Land takes a bigger picture. He talks about the economy itself acquiring a sort of self-awareness or agency, so that the destiny of civilization is consumed by the imperative of economic growth.
Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.
Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.
Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.
Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.
But this seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.
True to form, Land doesn’t see this as a dystopia – I think he conflates “maximally efficient economy” with “God”, which is a hell of a thing to conflate – but I do. And I think it provides an important new lens with which to look at the Age of Em.
by Scott Alexander, Slate Star Codex | Read more:
Image: Age of Em
Labels: Critical Thought, Culture, Economics, Literature, Philosophy, Psychology, Science, Technology
The Fall of Salon.com
[ed. Reminds me of this piece of advice to young people: "But of course things did have to be this way; our Internet’s strange success contained within it the seeds of its destruction. Once people realize there’s money to be made in something, anything that was once good about it is not long for that world. This is probably where I should put a GIF of a character from a ’90s Nickelodeon show looking sad with the acronym LOL superimposed on it but I like to think I have conveyed that same sentiment using words."]
A Facebook page dedicated to celebrating the 20th anniversary of digital media pioneer Salon is functioning as a crowdsourced eulogy.
Dozens of Salon alumni have, over the past several months, posted their favorite stories from and memories of the once-beloved liberal news site described as a “left-coast, interactive version of The New Yorker,” a progressive powerhouse that over the years has covered politics with a refreshing aggressiveness, in a context that left plenty of room for provocative personal essays and award-winning literary criticism.
“We were inmates who took over the journalistic asylum,” David Talbot, who founded the site in 1995, wrote on the Facebook page. “And we let it rip — we helped create online journalism, making it up as we went along. And we let nobody — investors, advertisers, the jealous media establishment, mad bombers, etc — get in our way.”
They are mourning a publication they barely recognize today.
“Sadly, Salon doesn’t really exist anymore,” wrote Laura Miller, one of Salon’s founding editors who left the site for Slate last fall. “The name is still being used, but the real Salon is gone.”
Salon, which Talbot originally conceived of as a “smart tabloid,” began as a liberal online magazine and was quickly seen as an embodiment of the media’s future. For a while, particularly ahead of the dot-com boom of the late 1990s, it even looked as though it might be a success story. It lured famous writers and tech-company investors and went public in 1999. At the time, Salon was valued at $107 million.
“I think it’s very similar to what a Vox or a Buzzfeed seems today,” said Kerry Lauerman, who joined Salon in 2000 and would serve as the site’s editor in chief from 2010 to 2013. “There was, at first, a lot of money and excitement about Salon. There was no one else, really, in that space. ... It was kind of a brave new world, and Salon was at the forefront.”
Over the last several months, POLITICO has interviewed more than two dozen current and former Salon employees and reviewed years of Salon’s SEC filings. On Monday, after POLITICO had made several unsuccessful attempts to interview Salon CEO Cindy Jeffers, the company dropped a bombshell: Jeffers was leaving the company effective immediately in what was described as an “abrupt departure.”
While the details of Salon’s enormous management and business challenges dominate the internal discussion at the magazine, in liberal intellectual and media circles it is widely believed that the site has lost its way.
“I remember during the Bush years reading them relatively religiously,” Neera Tanden, the president of the Center for American Progress, told POLITICO. “Especially over the last year, they seem to have completely jumped the shark in so many ways. They’ve become — and I think this is sad — they’ve definitely become like a joke, which is terrible for people who care about these progressive institutions.”
So, what happened? (...)
On social media, some have ridiculed the site for its questionable hot takes on the 2016 election and torqued-up lifestyle pieces that wouldn’t pass muster at any serious publication, like “Farewell, once-favorite organ: I am officially breaking up with my penis.” (...)
"We adopted a Huffington Post model, but we didn't have the resources to scale in a way that would've allowed for that kind of a model to actually work. We had 20 people, not 300,” one former staffer said. “I don't think Cindy ever realized that, and instead of modernizing within reason, while protecting the integrity of the brand — which was Salon's most valuable asset, by far — she decided to go full tilt for traffic, and it destroyed the brand."
by Kelsey Sutton and Peter Sterne, Politico | Read more:
Image: Salon
Saturday, May 28, 2016
Notifications Are Broken. Here's How Google Plans To Fix Them
Notifications suck. They're constantly disrupting us with pointless, ill-timed updates we don't need. True, sometimes they give us pleasure—like when they alert us of messages from real people. And sometimes they save our bacon, by reminding us when a deadline is about to slip by. But for the most part, notifications are broken—a direct pipeline of spam flowing from a million app developers right to the top of our smartphone screens.
During a frank session at the 2016 I/O developer conference, Google researchers Julie Aranda, Noor Ali-Hasan, and Safia Baig openly admitted that it was time for notifications to get a major design overhaul. "We need to start a movement to fix notifications," said Aranda, a senior UX researcher within Google.
As part of their research into the problem, Aranda and her colleagues conducted a UX study of 18 New Yorkers, to see how they interacted with notifications on their smartphones, what they hated about them, and what could be done to fix notifications in future versions of Android.
According to Google's research, the major problem with notifications is that developers and users want different things from them. Users primarily want a few things from notifications. First and foremost, they want to get notifications from people. "Notifications from other people make you feel your existence is important," said one of their research subjects, Rachael. And some people are more important than others, which is why notifications from people like your spouse, your mom, or your best friend are more important than a direct message on Twitter, or a group text from the people in your bowling league. In addition, users want notifications that help them stay on top of their life—a reminder of an upcoming deadline or doctor's appointment, for example.
But developers want something different from notifications. First and foremost, they design their notifications to fulfill whatever contract it is that they feel that they have with their users. So if you've designed an exercise app, you might alert someone when they haven't worked out that day; if you are a game developer, you might tell them when someone beat their high score. Yet according to Google, research suggests that the majority of users actively resent such notifications. And that's doubly true for the other kind of notifications developers want to send—notifications that essentially serve no function except to remind users that their app is installed on a user's phone. Google calls these "Crying Wolf" notifications and says they're the absolute nadir of notification design.
This disconnect between what users and developers want is so severe that users go through extreme measures to get away from notifications. Google said that it's seeing more and more users foregoing installing potentially notification-spamming apps on their phones when they can access the same service through a website—where notifications will never be an annoyance. And this actually explains a lot about Google's interest in fixing the notifications problem, because people who aren't downloading Android apps aren't locked into the platform, and aren't spending money on Google Play. (...)
One surprising thing revealed by Google's research? Aranda says they found that people tended to open games, social networks, and news apps so often that notifications actually tended to drive users away from the apps, not vice versa.
by John Brownlee, Co-Design | Read more:
Image: Google
Is Everything Wrestling?
The charms of professional wrestling — half Shakespeare, half steel-chair shots — may never be universally understood. Every adult fan of the sport has encountered those skeptics who cock their heads and ask, “You do know it’s fake, right?”
Well, sure, but that hasn’t stopped pro wrestling from inching closer and closer to the respectable mainstream. Last year, World Wrestling Entertainment announced a partnership with ESPN, leading to straight-faced wrestling coverage on “SportsCenter.” The biggest action star in the world, Dwayne Johnson, known as the Rock, got his start as an eyebrow-waggling wrestler. When the “Today” show needs a guest host, it enlists the WWE star John Cena to don a suit and crack jokes. No less an emblem of cultivated liberal intelligentsia than Jon Stewart recently hosted wrestling’s annual Summerslam, his first major gig since leaving “The Daily Show.” Wrestling may never be cool, but it is, at the very least, no longer seen as the exclusive province of the unwashed hoi polloi.
This is partly because the rest of the world has caught up to wrestling’s ethos. With each passing year, more and more facets of popular culture become something like wrestling: a stage-managed “reality” in which scripted stories bleed freely into real events, with the blurry line between truth and untruth seeming to heighten, not lessen, the audience’s addiction to the melodrama. The modern media landscape is littered with “reality” shows that audiences happily accept aren’t actually real; that, in essence, is wrestling. (“WWE Raw” leads to “The Real World,” which leads to “Keeping Up With the Kardashians,” and so forth.) The way Beyoncé teased at marital problems in “Lemonade” — writing lyrics people were happy to interpret as literal accusations of her famous husband’s unfaithfulness — is wrestling. The question of whether Steve Harvey meant to announce the wrong Miss Universe winner is wrestling. Did Miley Cyrus and Nicki Minaj authentically snap at each other at last year’s MTV Video Music Awards? The surrounding confusion was straight out of a wrestling playbook.
It’s not just in entertainment, either. For a while, it became trendy to insist that the 2016 presidential election, with all its puffed chests and talk of penis size, seemed more like a wrestling pay-per-view event than a dignified clash of political minds. In politics, as in wrestling, the ultimate goal is simply to get the crowd on your side. And like all the best wrestling villains — or “heels” — Donald Trump is a vivacious, magnetic speaker unafraid to be rude to his opponents; there was even a heelish consistency to his style at early debates, when he actively courted conflict with the moderator, Megyn Kelly, and occasionally paused to let the crowds boo him before shouting back over them. (The connection isn’t just implied, either: Trump was inducted to the WWE’s Hall of Fame in 2013, owing to his participation in several story lines over the years.) Ted Cruz’s rhetorical style, with its dramatic pauses, violent indignation and tendency to see every issue as an epic moral battleground, was sometimes reminiscent of great wrestling heels. The way Rick Perry called Trump’s candidacy a “cancer” that “will lead the Republican Party to perdition” before endorsing Trump and offering to serve as his vice president: this was a tacit admission that all his apocalyptic rhetoric was mainly for show. Pure wrestling, in other words. (...)
What the WWE does care about is keeping control of the way people experience “wrestling” — preferably not as the disreputable carny spectacle it once was, but as a family-friendly, 21st-century entertainment. When recapping wrestling history, it can completely elide the messier incidents: the sex scandals, shady deaths, neglected injuries, drug abuse and more. The audience, meanwhile, knows what the WWE cares about, giving it enough knowledge of wrestling’s inner workings to analyze each narrative not just through its in-world logic (“this guy will win the championship because he seems more driven”) but by considering external forces (“this guy will win the championship because he is well-spoken enough to represent the company when he inevitably shows up on ‘Today’”). Parsing both those layers — the behavior and the meta-behavior, the story told and the story of why it’s being told that way — can be an entertainment in its own right, and speculating on creative decisions has long been a fascination for wrestling fans.
This is how a lot of fields work these days. The audiences and the creators labor alongside each other, building from both ends, to conceive a universe with its own logic: invented worlds that, however false they may be, nevertheless feel good and right and amusing to untangle. Consider the many ways of listening to the song “Sorry,” from Lemonade, in which Beyoncé takes shots at an unnamed woman referred to as “Becky with the good hair”: a person we’re led to believe is having a relationship with the singer’s husband. You can theorize about the real-world identity of “Becky with the good hair,” as the internet did. You can consider the context of the phrase (why “Becky”? why “good hair”?), as the internet did. You can think about why Beyoncé decided to make art suggesting that her real-life husband cheated on her. All of this will be more time-consuming, and thus be interpreted as more meaningful, than if she had said outright, “He did it.” (Or if we said, “It’s just a song.”)
The process of shaping a story by taking all these layers into account seems dangerously similar to what corporations do when they talk about “telling the story of our brand” — only as applied to real people and real events, instead of mascots and promotional stunts. On a spiritual level, it seems distasteful to imagine a living person as a piece being moved around on a narrative chessboard, his every move calculated to advance a maximally entertaining story line. But this is how it all too often works — whether plotted by the public figures themselves or by some canny handler (an adviser, a producer, a PR rep), everyone is looking to sculpt the narrative, to add just the right finishing touch.
So when I think of how politics and pop culture are often compared to wrestling, this is the element that seems most transferable: not the outlandish characters or the jumbo-size threats, but the insistence on telling a great story with no regard for the facts.
by Jeremy Gordon, NY Times | Read more:
Image: Bill Pugliano/Getty Images
The Blasé Surrealism of Haruki Murakami
[ed. I've read most of Murakami's books, including 1Q84. As a stylist he's probably as good as anyone, but whenever I finish one of his books I invariably throw my hands up and go "Is that it?" The Atlantic had a pretty good summary a while ago of what a new reader might expect (slightly edited here): Is the novel’s hero an adrift, feckless man in his mid-30s? Does he have a shrewd girl Friday who doubles as his romantic interest? Does the story begin with the inexplicable disappearance of a person close to the narrator? Is there a metaphysical journey to an alternate plane of reality? Are there gratuitous references to Western novels, films, and popular culture? Which eastern-European composer provides the soundtrack, and will enjoy skyrocketing CD sales in the months ahead—Bartók, Prokofiev, Smetana? Are there ominous omens, signifying nothing; dreams that resist interpretation; cryptic mysteries that will never be resolved? Check, check, check and check. In every book. I'm pretty much done with Murakami, which is too bad because I really do enjoy his writing.]
Three days ago, I began to read 1Q84 by Haruki Murakami. At 1157 pages, 1Q84 is a mammoth novel, a veritable brick of a book, similar in proportion to the unfinished copy of Infinite Jest that currently rests about 15 feet away from me.
In theory, 1Q84 is a poor choice of book for me right now. After all, 2015 has been a rather mild year for me, in terms of how many books I’ve read. In 2014, I think I read close to 30 books, but this year I’ve only finished 5 or 6 (I’ve partially completed several more).
I blame the Internet for this. Or, rather, I blame my own frequent inability to resist the gravity of Twitter or Facebook or Reddit or Instagram or Snapchat. I don’t even want to think about how many books I could have read this year, had my time on social media been replaced by time spent reading books. But, I guess, that’s what I chose to do, so I’ll take responsibility for my actions. I did, at least, manage to read a whole hell of a lot of great essays on the Internet (see this list and this list). (...)
But I’ve digressed somewhat from the intended topic of this essay: Haruki Murakami. Murakami is one such wizard whose works surround me, swallow me, permeate my being, and transport me to worlds that feel no less real than the one in which I’m typing these words.
And so far, his magnum opus, 1Q84, is no exception. As I mentioned earlier, 1Q84 was arguably a poor choice of book for me to begin reading at this time. (...)
So, yes, this renewed tripod of habits is helping me to read more. But that’s not the only catalyst. I correctly suspected that Murakami could draw me back into book-reading because he is a writer who seems unfailingly to write irresistible page-turners. There are a few reasons for this that I can see.
For one, I swear he’s discovered the Platonic ideal combination of steady pacing and incomplete-yet-tantalizing information. Having completed four of his novels now, I can tell you that his novels always seem to revolve around some mystery that needs to be solved, and he does an excellent job of hinting at the grandiose and ominous nature of the mystery within the first few pages, while providing almost no information regarding the mystery’s actual attributes or dimensions. As the novels progress, he gradually reveals the mystery’s shocking, sprawling architecture and all-penetrating implications, dispensing just enough detail at just the right intervals to keep his readers (or, me at least) hopelessly ensnared.
The protagonists in Murakami’s novels tend to be ordinary, solitary people who suddenly find themselves wrapped up in some sort of epic, supernormal circumstances and must undertake a quest that is as much a quest to the heart of their true identity as it is a quest through the external world.
Another trademark of Murakami’s is something I call Blasé Surrealism (let me know if you think of a better name or if one already exists). Blasé Surrealism is characterized by melding mundane, humdrum realism with elements of surrealism and magic realism, while also incorporating (in Murakami’s case, at least) abstract, metaphysical commentary/comparisons intermittently throughout the story. Murakami’s stories are told in a matter-of-fact tone, as if everything that is happening is quite commonplace, and much of it is. But then he’ll nonchalantly introduce a portal to an alternate reality or include a line like, “Hundreds of butterflies flitted in and out of sight like short-lived punctuation marks in a stream of consciousness without beginning or end.”
He’s so casual, so blasé, about this, that the flow of the story isn’t interrupted. Strange, surreal things happen in Murakami novels, but they seem completely natural because he acts like they are. The reader just goes right along with him. Thus Murakami manages effectively to marry normal and abnormal, real and surreal, conventional and magical. The best comparison I can make is to say that reading a Murakami novel is like being in a dream, in that things are clearly off, clearly not the way they typically are, and yet one doesn’t really notice or care, accepting things at face-value. This makes for a uniquely mind-stirring, almost psychedelic, reading experience.
by Jordan Bates, Refine the Mind | Read more:
Image: 1Q84
Why I Bought a Chromebook Instead of a Mac
Chromebooks have surpassed sales of Mac laptops in the United States for the first time ever. And that doesn’t surprise me. Because roughly a year ago I made the same switch. Formerly a lifelong Mac user, I bought my first PC ever in the form of a Chromebook. And I’m never looking back.
Driven by the kind of passion that can only be found in the recently converted, I have aided and abetted friends in renouncing the sins of gluttony and pride uniquely found in the House of Apples. I have helped them find salvation with the Book of Chrome. Glory be the Kingdom of Chrome, for your light shines down upon us at a quarter of the price.
Make no mistake, I grew up on Macs. The first computer I remember my Dad bringing home when I was 5 years old was a Mac. Our family computer throughout the 1990s was a Mac. I used that Mac Performa throughout middle school, and it gave me treasured memories of playing Dark Forces and first discovering the internet. My high school graduation present from my parents in 2002 was my first Mac laptop. And I would continue to buy Mac desktops and laptops for the next decade and a half.
But something happened about a year ago when my MacBook Air was running on fumes. I looked at the Macs and gave my brain a half-second to entertain other options. I owned a functioning Mac desktop, which is my primary machine for heavy lifting. But I started to wonder why I wasn’t entertaining other options for my mobile machine.
The biggest consideration was price. When all was said and done, even the cheapest Mac laptop was going to set me back about $1,300 after taxes and AppleCare. And the siren song of a computer under $200 was calling my name. I got the Acer Chromebook with 2GB of RAM and a 16GB drive. It cost a shockingly low $173. And it was worth every penny. It even came with 100GB of Google Drive storage and twelve GoGo inflight internet passes. If you travel enough, the thing literally pays for itself in airline wifi access.
I rarely have to edit video and my photo manipulation needs are minimal. So when I walk down to the coffee shop to work, what the hell do I need to do that can’t be done on a Chromebook? Nothing, is the answer. Precisely nothing. And if you’re being totally honest with yourself you should probably ask the same question.
Computers have essentially become disposable, for better and for worse. We’ve seen this trend in electronics over the past decade and it’s a great thing from the perspective of American consumers. More people can afford e-readers and tablets that now cost just $50. The mid-2000s dream of “one laptop per child,” which sought to bring the price of mobile computers down to $100, has become a reality thanks to Chromebooks and tablets made by companies like Acer, HP, and Amazon. And with more and more of our computing needs being met by web browsers alone, the average consumer is seeing less incentive to buy a Mac.
This trend should obviously terrify Apple. Computers have become fungible commodities, just like HDTVs before them. Which is to say that the average American doesn’t view a TV as high-tech that requires much homework these days. Any TV will do. Look at the screen and look at the price. Does it look like a TV? Yep. Is it cheap? Double yep. Whip out the credit card.
by Matt Novak, Gizmodo | Read more:
Image: Shutterstock/Acer
Our Nightmare
Most of us, I imagine, are not consistent political optimists or pessimists. We instead react – and usually overreact – to the short-term political trends before us, unable to look beyond the next election cycle and its immediate impact on ourselves and our political movements. I remember, immediately after the re-election of George W. Bush in 2004, a particularly voluble conservative blogger arguing that it was time for conservatives to “curb stomp” the left, to secure the final victory over liberals and Democrats. Four years later, of course, a very different political revolution appeared to be at hand, and some progressives made the same kind of ill-considered predictions. Neither permanent political victory has come to pass, with Democrats enjoying structural advantages in presidential elections and Republicans making hay with a well-oiled electoral machine in Congressional elections. How long those conditions persist, who can say.
But partisan politics are only a part of the actual political conditions that dictate our lives. Politics, culture, and economics fuse together to create our lived experience. And that experience is bound up in vague but powerful expectations about success, what it means, and who it’s for. There is a future that appears increasingly likely to me, a bleak future, and one which subverts traditional partisan lines. In this future, the meritocratic school of liberalism produces economic outcomes that would be at home with laissez faire economic conservatives, to the detriment of almost all of us.
The future that I envision amounts, depending on your perspective, to either a betrayal of the liberal dream or its completion. In this future, the traditional foundations of liberalism in economic justice and redistribution are amputated from the push for diversity in terms of race, gender, sexual identity, and related issues. (...)
Traditionally, both equality and diversity have been important to liberalism. There are obvious reasons for this connection. To begin with, the persistent inequality and injustice that afflict people of color and women in our society are powerfully represented in economic outcomes, with black and Hispanic Americans and women all suffering from clear and significant gaps in income, wealth, and similar measures of economic success. Economic justice is therefore inseparable from our efforts to truly combat racial and gender inequality. What’s more, the moral case for economic justice stems from the same foundations as the case against racism and sexism, a profound moral duty to provide for all people and to ensure that they live lives of material security and social dignity. The traditional liberal message has therefore been to emphasize the need for diverse institutions and economic justice as intertwined phenomena.
In recent years, however, the liberal imagination has become far less preoccupied with economic issues. Real-world activism retains its focus on economic outcomes, but the media that must function as an incubator of ideas, in any healthy political movement, has grown less and less interested in economic questions as such. Liberal publications devote far less ink, virtual or physical, to core issues of redistribution and worker power than they once did. Follow prominent liberals on Twitter, browse through the world of social justice Tumblr, read socially and culturally liberal websites. You might go weeks without reading the word “union.” Economic issues just aren’t central to the political conceptions of many younger liberals; they devote endless hours to decoding the feminism of Rihanna but display little interest in, say, a guaranteed minimum income or nationalizing the banks. Indeed, the mining of pop-cultural minutiae for minimally plausible political content has become such a singular obsession within liberal media that it sometimes appears to be crowding out all other considerations. (...)
As The American Conservative’s Noah Millman once wrote, “the culture war turns politics into a question of identity, of tribalism, and hence narrows the effective choice in elections. We no longer vote for the person who better represents our interests, but for the person who talks our talk, sees the world the way we do, is one of us…. And it’s a good basis for politics from the perspective of economic elites. If the battle between Left and Right is fundamentally over social questions like abortion and gay marriage, then it is not fundamentally over questions like who is making a killing off of government policies and who is getting screwed.” The point is not that those culture war questions are unimportant, but that by treating them as cultural issues, our system pulls them up from their roots in economic foundations and turns them into yet another set of linguistic, symbolic problems. My argument, fundamentally, is that we face a future where strategic superficial diversity among our wealthy elites will only deepen the distraction Millman is describing. Such a future would be disastrous for most women and most people of color, but to many, would represent victory against racism and sexism.
by Fredrik deBoer | Read more:
Image: Getty
The Persian Rug May Not Be Long for This World
For centuries, Iran’s famed carpets have been produced by hand along the nomad trail in this region of high plains around the ancient city of Shiraz.
Sheep grazed in high mountain pastures and shorn only once a year produce a thick, long wool ideal for the tough thread used in carpet making.
But high-quality production of hand-woven carpets is no longer sustainable on the migration route of the nomads, said Hamid Zollanvari, one of Iran’s biggest carpet makers and dealers.
Instead, he had built a factory with 16 huge cooking pots, where on a recent cool, sunny spring day men in blue overalls stirred the pots with long wooden sticks, boiling and coloring the thread. As the colored waters bubbled, they looked like live volcanos. The air smelled of sheep.
Another room was stacked with herbs. Eucalyptus leaves, indigo, black curd, turmeric, acorn shells and alum, ingredients for the different colors. “The Iranian carpet is 100 percent organic,” Mr. Zollanvari declared. “No machinery is involved.”
It is a scene that seems as ageless as the women who sit before the looms and weave the rugs, a process that can take as long as a year. And now even the factory is threatened. With six years of Western sanctions on the carpet business and punishing competition from rugs machine-made in China and India, these are hard times for the craft of Persian rug making. Many veterans wonder whether it can survive.
Over the centuries invaders, politicians and Iran’s enemies have left their mark on Iran’s carpets, said Prof. Hashem Sedghamiz, a local authority on carpets, sitting in the green courtyard of his restored Qajar-dynasty house in Shiraz. The outsiders demanded changes, started using chemicals for coloring and, most recently, imposed sanctions on the rugs. Those were blows, he said, damaging but not destructive.
But now, Mr. Sedghamiz said, the end is near. Ultimately, he said, it is modernity — that all-devouring force that is changing societies at breakneck speed — that is killing the Persian carpet, Iran’s pride and joy. “People simply are no longer interested in quality.”
Or in paying for it, he might have added. (...)
One thing is for sure: Iran’s carpets are among the most complex and labor-intensive handicrafts in the world.
It is on the endless green slopes of Fars Province, in Iran’s heartland, that the “mother of all carpets,” among the first in the world, is produced: the hand-woven nomadic Persian rug.
The process starts with around 1.6 million sheep grazed by shepherds from the nomadic Qashqai and Bakhtiari tribes, who produce that tough, long-fibered wool so perfect for carpets.
Women take over from there, making thread from the wool by hand, twisting it with their fingers. The finished thread is bundled and then dyed, using natural ingredients like pomegranate peels for deep red or wine leaves for green. After days of boiling over a wood fire, the threads are dried by the cool winds that blow in from the north each afternoon.
Only then does the weaving start. Weavers, almost all of them women, spend several months to a year bent over a horizontally placed loom, stringing and knotting thousands of threads. Some follow established patterns, some create their own. When the carpet is finally done, it is cut, washed and put out in the sun to dry.
“It’s so time consuming, real hand work,” said Mr. Zollanvari, the carpet dealer. “A labor of love. And what does it cost?” he asked, before answering the question himself: “Almost nothing.” A 6-by-9-foot handwoven carpet costs around $400 in Shiraz, depending on the pattern and quality.
by Thomas Erdbrink, NY Times | Read more:
Image: Newsha Tavakolian
Six True Things About Dinner With Obama
Bun Cha is a typical Hanoi dish, decidedly everyday, and much loved by locals. To the consternation, no doubt, of the Secret Service (who were very cool about it), I was recently joined for dinner by the leader of the free world in a working-class joint near the old quarter of town for an upcoming episode of Parts Unknown.
by Anthony Bourdain, Li.st | Read more:
Image: uncredited
Friday, May 27, 2016
Lexington Lab Band
[ed. Repost. Best cover band, ever. See also: Kid Charlemagne, Life in the Fast Lane, Voodoo Child, and more.]
Late Night Pack by Vans (burger slip-ons shown above)
via:
[ed. See also: The Skate Mental x Nike SB Janoski “Pepperoni Pizza”]
Thursday, May 26, 2016
Trail Blazing
For a great long while, I thought there was only one kind of bud: whatever the fuck was available. The first time I smoked weed (And by “smoked weed” I mean “got high”), I was by most accounts pretty old — twenty-two. There had been two former, rather desultory attempts. Once, at a bonfire on Repulse Bay Beach in Hong Kong when I was fifteen (Hong Kong is renowned for several things, but marijuana is not one of them), and another time in Texas, in the garage of some skater dude who was a year older, very hot, and had an identical twin I would’ve gladly settled for. I was green, the weed less so.
The first time I ever smoked successfully, I was working in Brooklyn, in the dead of winter, for profoundly exploitative wages. On the upside, the job happened to come with a young, chill boss who daily smoked two blunts wrapped in Vanilla Dutch Masters, and was fairly generous about sharing. The weed was dopey, didn’t have a name, and helped temper the indignation I felt trekking ninety minutes with two train changes and a bus ride — in the snow — to get to work. That was thirteen years ago.
By the time I moved to California in my thirties, weed was becoming legal, and I secured a cannabis card for dubious medical reasons and credible recreational ones. I learned there was not only a dazzling kaleidoscope of marijuana strains to choose from, but that, depending on my hankering, I could calibrate the weed to my desired vibe. What a time to be alive! No more feeling catatonic on a dinner date or hyper-social and chatty at the movie theater — I was on the path to finding The Perfect High. Not, like, One High to Rule Them All, but more like, the superlative vibe for every chill sitch in my life. The perfect high, of course, is largely subjective. We’re all physiological snowflakes with wildly differing operating systems. It’s why some people can have a grand time on edibles (me) but other people (my best friend Brooke) go bat-shit crazy, curling up in the fetal position until the mania subsides.
There are significant differences in how the body metabolizes the nearly one hundred different cannabinoids present in cannabis. Phytocannabinoids, found in cannabis flowers, are the chemical compounds that we respond to. (We also produce cannabinoids in our bodies — called endogenous cannabinoids or endocannabinoids). The cannabinoid system is old, I mean ancient; even worms respond to cannabinoids. It regulates a bunch of basic processes in our bodies — the immune system, memory, sleep and inflammation. We have cannabinoid receptors in all sorts of places.
You guys: we’re basically designed to get high.
Of all the cannabinoids in cannabis, THC (Tetrahydrocannabinol) and CBD (Cannabidiol) are the most famous, with the prevailing agreement that THC is heady and CBD is about the body high. But it’s the ninety-odd other cannabinoids acting in concert with them that make each high unique. This synergistic effect — the harmonious interplay, and the permutations of cannabinoids — is what makes each strain so darned mysterious. Elan Rae, the in-house cannabis expert for Marley Natural (the official Bob Marley cannabis brand) described the “entourage effect,” as it’s called, as “the combined effect of the cannabinoid profile. It doesn’t allow you to specifically ascribe an effect to one cannabinoid.” To wit: it’s not the amount of THC that gets you high, but how it reacts with a slew of other cannabinoids.
So while you may not know the exact chemistry of why you’re getting a certain type of high, it stands to reason that you can use guidelines to land in the neighborhood of the high you’re after. Think of it this way: you want a kicky, effervescent vinho verde for picnics or beaches, a jigger of bourbon for cozy autumnal nights, and nineteen pitchers of pre-mixed margarita if you want a pernicious hangover to cap off an evening of homicidal mania and sexual regret. Similarly, you’ll want a playful, low-impact Sativa for an al fresco activity, and an Indica or Indica-dominant hybrid for cuffin’ season.
And what exactly is the difference between Indica and Sativa? Within the Cannabis genus, they are two separate species. Pretty much everything we smoke is one, the other, or a hybrid of the two. Indicas are mellower and harder-hitting, perfect for Olympiad-level chilling after a long day. They’re often prescribed to people who have trouble sleeping or need to manage pain. The plant phenotypically tends to be shorter and bushier, with thicker individual leaves. Sativas, on the other hand, tend to be neurologically wavier, generally better for a daytime high. They make most of us feel alert, and they’re excellent for idea generation, provided you don’t fall into too many disparate wormholes. The flower looks like the platonic ideal of weed; it’s the kind you get on a pair of Huf socks, or embroidered onto a red, gold, and green hat.
To say there’s a weed for every occasion is an understatement. Like German nouns, there’s an exact cannabis strain to complement “sentimental pessimism” or the “anguish one feels when comparing the shortcomings of reality to an idealized state of the world.” Some weed is built for fucking, and other weed is for ugly-crying at 4AM at season two of Bojack Horseman because you relate way too hard to an anthropomorphized cartoon horse and his drinking problem. (No judgment.)
It is with this knowledge, clear eyes, and a full heart that I went to my reputable Los Angeles medical center (not to be confused with any old run-of-the-mill bongmonger) and secured eight strains to try: Platinum Jack, XJ13, Dutch Treat, Pineapple Express, J1, Gorilla Glue, Berner’s Cookies and NorCal OG.
The first time I ever smoked successfully, I was working in Brooklyn, in the dead of winter, for profoundly exploitative wages. On the upside, the job happened to come with a young, chill boss who daily smoked two blunts wrapped in Vanilla Dutch Masters, and was fairly generous about sharing. The weed was dopey, didn’t have a name, and helped temper the indignation I felt trekking ninety minutes with two train changes and a bus ride — in the snow — to get to work. That was thirteen years ago.
By the time I moved to California in my thirties, weed was becoming legal, and I secured a cannabis card for dubious medical reasons and credible recreational ones. I learned there was not only a dazzling kaleidoscope of marijuana strains to choose from, but that, depending on my hankering, I could calibrate the weed to my desired vibe. What a time to be alive! No more feeling catatonic on a dinner date or hyper-social and chatty at the movie theater — I was on the path to finding The Perfect High. Not, like, One High to Rule Them All, but more like, the superlative vibe for every chill sitch in my life. The perfect high, of course, is largely subjective. We’re all physiological snowflakes with wildly differing operating systems. It’s why some people can have a grand time on edibles (me) but other people (my best friend Brooke) go bat-shit crazy, curling up in the fetal position until the mania subsides.
There are significant differences in how the body metabolizes the nearly one hundred different cannabinoids present in cannabis. Phytocannabinoids, found in cannabis flowers, are the chemical compounds that we respond to. (We also produce cannabinoids in our own bodies, called endogenous cannabinoids, or endocannabinoids.) The cannabinoid system is old, I mean ancient; even worms respond to cannabinoids. It regulates a bunch of basic processes in our bodies: the immune system, memory, sleep, and inflammation. We have cannabinoid receptors in all sorts of places.
You guys: we’re basically designed to get high.
Of all the cannabinoids in cannabis, THC (tetrahydrocannabinol) and CBD (cannabidiol) are the most famous, with the prevailing agreement that THC drives the head high and CBD the body high. But it's the ninety-odd other cannabinoids acting in concert with them that make each high unique. This synergistic effect, the harmonious interplay and permutation of cannabinoids, is what makes each strain so darned mysterious. Elan Rae, the in-house cannabis expert for Marley Natural (the official Bob Marley cannabis brand), described the "entourage effect," as it's called, as "the combined effect of the cannabinoid profile. It doesn't allow you to specifically ascribe an effect to one cannabinoid." To wit: it's not the amount of THC alone that gets you high, but how it reacts with a slew of other cannabinoids.
So while you may not know the exact chemistry of why you’re getting a certain type of high, it stands to reason that you can use guidelines to land in the neighborhood of the high you’re after. Think of it this way: you want a kicky, effervescent vinho verde for picnics or beaches, a jigger of bourbon for cozy autumnal nights, and nineteen pitchers of pre-mixed margarita if you want a pernicious hangover to cap off an evening of homicidal mania and sexual regret. Similarly, you’ll want a playful, low-impact Sativa for an al fresco activity, and an Indica or Indica-dominant hybrid for cuffin’ season.
And what exactly is the difference between Indica and Sativa? Within the Cannabis genus, they are two separate species. Pretty much everything we smoke is one, the other, or a hybrid of the two. Indicas are mellower and harder-hitting, perfect for Olympiad-level chilling after a long day. They're often prescribed to people who have trouble sleeping or need to manage pain. The plant phenotypically tends to be shorter and bushier, with thicker individual leaves. Sativas, on the other hand, tend to be neurologically wavier, generally better for a daytime high. They make most of us feel alert, and they're excellent for idea generation, provided you don't fall into too many disparate wormholes. The flower looks like the Platonic ideal of weed; it's the kind you get on a pair of Huf socks, or embroidered onto a red, gold, and green hat.
To say there's a weed for every occasion is an understatement. Just as German has a noun for "sentimental pessimism" or the "anguish one feels when comparing the shortcomings of reality to an idealized state of the world," there's an exact cannabis strain to complement each. Some weed is built for fucking, and other weed is for ugly-crying at 4 a.m. over season two of BoJack Horseman because you relate way too hard to an anthropomorphized cartoon horse and his drinking problem. (No judgment.)
It is with this knowledge, clear eyes, and a full heart that I went to my reputable Los Angeles medical center (not to be confused with any old run-of-the-mill bongmonger) and secured eight strains to try: Platinum Jack, XJ13, Dutch Treat, Pineapple Express, J1, Gorilla Glue, Berner's Cookies, and NorCal OG.
by Mary H.K. Choi, The Awl | Read more:
Image: Retinafunk