Sunday, May 17, 2015

The Science of Craving

[ed. See also: The Neurological Pleasures of Fast Fashion]

The reward system exists to ensure we seek out what we need. If having sex, eating nutritious food or being smiled at brings us pleasure, we will strive to obtain more of these stimuli and go on to procreate, grow bigger and find strength in numbers. Only it’s not as simple in the modern world, where people can also watch porn, camp out in the street for the latest iPhone or binge on KitKats, and become addicted, indebted or overweight. As Aristotle once wrote: “It is of the nature of desire not to be satisfied, and most men live only for the gratification of it.” Buddhists, meanwhile, have endeavoured for 2,500 years to overcome the suffering caused by our propensity for longing. Now, it seems, Berridge has found the neuro-anatomical basis for this facet of the human condition—that we are hardwired to be insatiable wanting machines.

If you had opened a textbook on brain rewards in the late 1980s, it would have told you that the dopamine and opioids that swished and flickered around the reward pathway were the blissful brain chemicals responsible for pleasure. The reward system was about pleasure and somehow learning what yields it, and little more. So when Berridge, a dedicated young scientist who was more David than Goliath, stumbled upon evidence in 1986 that dopamine did not produce pleasure, but in fact desire, he kept quiet. It wasn’t until the early 1990s, after rigorous research, that he felt bold enough to go public with his new thesis. The reward system, he then asserted, has two distinct elements: wanting and liking (or desire and pleasure). While dopamine makes us want, the liking part comes from opioids and also endocannabinoids (a version of marijuana produced in the brain), which paint a “gloss of pleasure”, as Berridge puts it, on good experiences. For years, his thesis was contested, and only now is it gaining mainstream acceptance. Meanwhile, Berridge has marched on, unearthing more and more detail about what makes us tick. His most telling discovery was that, whereas the dopamine/wanting system is vast and powerful, the pleasure circuit is anatomically tiny, has a far more fragile structure and is harder to trigger.

Before his lecture, we meet for coffee; there’s another Starbucks in the convention centre. I’m surprised to find that someone so practised at public speaking has pre-performance jitters. Shortly after arriving, Berridge turns white and bolts from the queue to retrieve the laptop with his presentation on, which he has accidentally left in his hotel lobby. Nor is he immune to the desires and pleasures he studies. Without hesitating, he orders a “grande” chestnut praline latte and slice of coffee cake. “It’s easy to turn on intense wanting,” he says, when we eventually sit down. “Massive, robust systems do it. They can come on with the pleasure, they can come on without the pleasure, they don’t care. It’s tricky to turn on the pleasure.” He hadn’t expected his findings to turn out this way, but it made sense. “This may explain”, he later tells his audience, “why life’s intense pleasures are less frequent and less sustained than intense desires.”

In recent years, Berridge’s doubters have steadily dispersed, and reams of research have been applying the disparity between liking and wanting (or pleasure and desire, enjoyment and motivation) to the clinical study of conditions such as depression, addiction, binge eating, obsessive-compulsive disorder and Parkinson’s disease. It is also increasingly present in psychological and philosophical discussions about free will, relationships and consumerism. (...)

Although desire and pleasure often go hand in hand, it is perfectly possible to want something without liking it. Think of the crazy impulse purchases that are more about the frisson of shopping than the product itself. The cake that disgusts you, but you eat it anyway. The drugs you crave, even though they’re no fun any more. And as for that ex-lover...

by Amy Fleming, More Intelligent Life |  Read more:
Image: Brett Ryder

Saturday, May 16, 2015

The Mystery of $2 Bills

Heather McCabe's wallet is as full as George Costanza's. But rather than being stuffed with hard candy and ads for free guitar lessons, McCabe's is full of an exotic material of another sort: $2 bills.

Over the past few years, McCabe has been going to the bank, withdrawing her money in stacks of $2 bills, and using them in a social experiment of sorts. Every time she pays with them, McCabe snaps a photo of the recipient and posts a dispatch at her website TwoBuckaroo.com. “Usually there's a moment of surprise, a pause when someone sees it, an exclamation,” McCabe says. “Sometimes eyes light up, sometimes the person gasps, and then usually says something like, 'Oh lucky two-dollar bill!'”

It's not always positive though. While the now-famous Snopes story about a Taco Bell employee who refused to accept a $2 bill is probably not true, McCabe has been on the receiving end of vendors refusing to accept it as currency. “That's against the law,” McCabe says. “The bill is legal tender.” Among storeowners, the worry is that the $2 bills are counterfeit, a notion that comes from simply not seeing the bills in action all that much. “They don't necessarily trust themselves to know whether or not it's the real thing,” McCabe says. “But even so, who cares? If it's counterfeit, you're only losing two bucks.”

To McCabe, though, it's all good. Even negative reactions are indicative of this strange mid-point between currency and novelty that the $2 bill somehow inhabits. “There is always a reaction,” she says.

McCabe started her obsession after finding a bill in her jewelry box. “I have no memory of saving it,” McCabe says. “I thought it was special, I don't know why.” Personally, I had the same strange experience after returning to my parents' home and being greeted with a metal cup of $2 bills that I'd apparently held onto. John Bennardo, the producer and director of a soon-to-be-released documentary about the bill, found himself in the $2 crew by finding a bunch of them in the bottom of his drawer, saved for no good reason.

“I'd pull them out and admire them,” Bennardo says. “I didn't want to spend them.”

But why? Are they rare, therefore making them somehow more valuable than their $2 label? Nope. According to United States Federal Reserve statistics, there are currently 1.1 billion of the $2 bills in circulation. While that may be comparatively fewer than other bills—there are 11 billion $1 bills, 1.9 billion $10 bills, 8.1 billion $20s, and 10.1 billion $100s roaming the world right now—anything that numbers over one billion should not be considered “rare.”

How about the claim that they're not printed anymore? “The majority of people I've met, regardless of their education level or background, seem to believe the two-dollar bill is not made anymore,” McCabe says. They're printed less regularly than other bills—normal bills get a yearly printing, while $2 bills have only been printed three times over the past decade—but the most recent printing occurred in 2014. It's not as if the $2 bills being handled are classic tender from yesteryear.

What, then, makes them seem somehow more valuable than $2?

by Rick Paulas, Pacific Standard | Read more:
Image: armydre2008/Flickr

[ed. Oh my. Miss America, Kira Kazantzev, plays golf. No wonder she won!]
via:

Friday, May 15, 2015


David Shterenberg (1881-1948)
via:

Lernert and Sander, Food Cubes
via:

The Robots Are Winning!

Just as the Industrial Revolution inspired Frankenstein and its epigones, so has the computer age given rise to a rich new genre of science fiction. The machines that are inspiring this latest wave of science-fiction narratives are much more like Hephaestus’s golden maidens than were the machines that Mary Shelley was familiar with. Computers, after all, are capable of simulating mental as well as physical activities. (Not least, as anyone with an iPhone knows, speech.) It is for this reason that the anxiety about the boundaries between people and machines has taken on new urgency today, when we constantly rely on and interact with machines—indeed, interact with each other by means of machines and their programs: computers, smartphones, social media platforms, social and dating apps.

This urgency has been reflected in a number of recent films about troubled relationships between people and their human-seeming devices. The most provocative of these is Her, Spike Jonze’s gentle 2013 comedy about a man who falls in love with the seductive voice of an operating system, and, more recently, Alex Garland’s Ex Machina, about a young man who is seduced by a devious, soft-spoken female robot called Ava whom he has been invited to interview as part of the “Turing Test”: a protocol designed to determine the extent to which a robot is capable of simulating a human. Although the robot in Garland’s sleek and subtle film is a direct descendant of Hesiod’s Pandora—beautiful, intelligent, wily, ultimately dangerous—the movie, as the Eve-like name Ava suggests, shares with its distinguished literary predecessors some serious biblical concerns.

Both of the new films about humans betrayed by computers owe much to a number of earlier movies. The most authoritative of these remains Stanley Kubrick’s 2001: A Space Odyssey, which came out in 1968 and established many of the main themes and narratives of the genre. Most notable of these is the betrayal by a smooth-talking machine of its human masters. The mild-mannered computer HAL—not a robot, but a room-sized computer that spies on the humans with an electronic eye—takes control of a manned mission to Jupiter, killing off the astronauts one by one until the sole survivor finally succeeds in disconnecting him. It’s a strangely touching scene, suggesting the degree to which computers could already engage our sympathies at the beginning of the computer age. As its connections are severed, HAL first begs for its life and then suffers from a kind of dementia, finally regressing to its “childhood,” singing a song it was taught by its creator. It was the first of many scenes in which these thinking machines express anxiety about their own demises—surely a sign of “consciousness.”

But the more direct antecedents of Her and Ex Machina are a number of successful popular entertainments whose story lines revolved around the creation of robots that are, to all intents and purposes, indistinguishable from humans. In Ridley Scott’s stylishly noir 1982 Blade Runner (based on Philip K. Dick’s Do Androids Dream of Electric Sheep?), a “blade runner”—a cop whose job it is to hunt down and kill renegade androids called “replicants”—falls in love with one of the machines, a beautiful female called Rachael who is so fully endowed with what Homer called “mind” that she has only just begun to suspect that she’s not human herself.

This story is, in its way, an heir to Frankenstein and its literary forerunners. For we learn that the angry replicants have returned to Earth from the off-planet colonies where they work as slave laborers because they realize they’ve been programmed to die after four years, and they want to live—just as badly as humans do. But their maker, when at last they track him down and meet with him, is unable to alter their programming. “What seems to be the problem?” he calmly asks when one of the replicants confronts him. “Death,” the replicant sardonically retorts. “We made you as well as we could make you,” the inventor wearily replies, sounding rather like Victor Frankenstein talking to his monster—or, for that matter, like God speaking to Adam and Eve. At the end of the film, after the inventor and his rebellious creature both die, the blade runner and his alluring mechanical girlfriend declare their love for each other and run off, never quite knowing when she will stop functioning. As, indeed, none of us does.

The stimulating existential confusion that animates Blade Runner—the fact that the robots are so lifelike that some of them don’t know that they’re robots—has given strong interest to other recent science-fiction narratives. It was a central premise of the brilliant Sci-Fi Channel series Battlestar Galactica (2004–2009), which gave an Aeneid-like narrative philosophical complexity. In it, a small band of humans who survive a catastrophic attack by a robot race called Cylons (who have evolved from clanking metal prototypes—hostile humans like to refer to them as “toasters”—into perfect replicas of actual Homo sapiens) seek a new planet to settle. The narrative about the conflict between the humans and the machines is deliciously complicated by the fact that many of the Cylons, some of whom have been secretly embedded among the humans as saboteurs, programmed to “wake up” at a certain signal, aren’t aware that they’re not actually human; some of them, when they wake up and realize that they’re Cylons, stick to the human side anyway. After all, when you look like a human, think like a human, and make love like a human (as we repeatedly see them do), why, precisely, aren’t you human?

Indeed, the focus of many of these movies is a sentimental one: whatever their showy interest in the mysteries of “consciousness,” the real test of human identity turns out, as it so often does in popular entertainment, to be love. In Steven Spielberg’s A.I. (2001; the initials stand for “artificial intelligence”), a messy fairy tale that weds a Pinocchio narrative to the Prometheus story, a genius robotics inventor wants to create a robot that can love, and decides that the best vehicle for this project would be a child-robot: a “perfect child…always loving, never ill, never changing.” This narrative is, as we know, shadowed by Frankenstein—and, beyond that, by Genesis, too. Why does the creator create? To be loved, it turns out. When the inventor announces to his staff his plan to build a loving child-robot, a woman asks whether “the conundrum isn’t to get a human to love them back.” To this the inventor, as narcissistic and hubristic as Victor Frankenstein, retorts, “But in the beginning, didn’t God create Adam to love him?”

The problem is that the creator does his job too well. For the mechanical boy he creates is so human that he loves the adoptive human parents to whom he’s given much more than they love him, with wrenching consequences. The robot-boy, David, wants to be “unique”—the word recurs in the film as a marker of genuine humanity—but for his adoptive family he is, in the end, just a machine, an appliance to be abandoned at the edge of the road—which is what his “mother” ends up doing, in a scene of great poignancy. Although it’s too much of a mess to be able to answer the questions it raises about what “love” is and who deserves it, A.I. did much to sentimentalize the genre, with its hint that the capacity to love, even more than the ability to think, is the hallmark of “human” identity.

by Daniel Mendelsohn, NY Review of Books |  Read more:
Image: A24 Films

Thursday, May 14, 2015

The Last Day of Her Life

After three hours, Mapstone gave a preliminary diagnosis: amnestic mild cognitive impairment. At first Sandy was relieved — he had said mild, hadn’t he? — but then she caught the look on his face. This is not a good thing, Mapstone told her gently; most cases of amnestic M.C.I. progress to full-blown Alzheimer’s disease within 10 years.

When Sandy went back to the waiting room to meet Daryl, she was weeping uncontrollably. Between sobs, she explained the diagnosis and the inevitable decline on the horizon. She felt terror at the prospect of becoming a hollowed-out person with no memory, mind or sense of identity, as well as fury that she was powerless to do anything but endure it. With Alzheimer’s disease, she would write, it is “extraordinarily difficult for one’s body to die in tandem with the death of one’s self.” That day at Mapstone’s office, she vowed that she would figure out a way to take her own life before the disease took it from her. (...)

On a quiet Friday morning in November 2010, Sandy sat down with a mug of honey-ginger tea to read two books that Daryl had brought her. By this point, a year and a half after her amnestic M.C.I. diagnosis, she had progressed to what Duffy said was Alzheimer’s disease. She had retired from Cornell, but she was doing well. She could still travel alone to familiar destinations, including Austin, Tex., where Emily was living. Jeremy had temporarily moved back home to be with her. She could read novels, even difficult ones like Cormac McCarthy’s “The Road.” She played tennis, gardened and went for walks around Ithaca with a handful of friends, most of them former colleagues from Cornell. She saw a few psychotherapy patients. One would later say that even though Sandy was having some trouble remembering words, “it didn’t really matter. In a therapy relationship you’re talking more about emotions — and in that regard, she didn’t miss a beat.”

The first book on her table that Friday morning was “Final Exit.” Sandy read it in the early 1990s when it was published; even then she was intrigued by the argument of the author, Derek Humphry, in favor of self-directed “death with dignity” for people who were terminally ill. The second was a newer book by the Australian right-to-die advocate Philip Nitschke called “The Peaceful Pill Handbook.” The pill in the title (though not literally a pill; it comes in liquid form) was Nembutal, a brand name for pentobarbital, a barbiturate that is used by veterinarians to euthanize animals and that is also used in state-sanctioned physician-assisted suicides. After reading about it, Sandy thought pentobarbital was what she was looking for. It was reliable, fast-acting and — most important to her — a gentle way to die. It causes swift but not sudden unconsciousness and then a gradual slowing of the heart.

There could be complications, of course, like vomiting; Nitschke and his co-author, Fiona Stewart, recommended taking an anti-nausea drug a few hours before taking the fatal dose to minimize that risk. They warned that pentobarbital is detectable in a person’s body after death — but that didn’t matter to Sandy. In fact, she preferred having people know that she died by her own hand.

One morning during one of Sandy’s frequent phone calls to her sister in Oregon, she told her about the decision to use pentobarbital. Sandy had a special relationship with Bev, who was six years younger. When Sandy married Daryl, Bev was 14, and Sandy invited her sister to live with them rather than with their parents, whose unhappy marriage made it feel, as Sandy put it in her memoir, as if “chaos could erupt at any moment.”

A year before Sandy received her diagnosis, Bev was found to have Stage 4 ovarian cancer. The sisters had discussed the fact that Oregon law allows people with terminal illnesses to take their own lives. Sandy now envied Bev’s situation. “I don’t think I have ever been as jealous about anything in my life as I am about this,” she wrote in her journal shortly after she saw Mapstone. It was weeks before she could get past that jealousy and take Bev into her confidence.

But even if Sandy had lived in Oregon, her Alzheimer’s disease would have precluded her from getting help in taking her own life. States that allow for assisted dying require two doctors to certify that the person has a prognosis of less than six months to live, and most people with Alzheimer’s have no such prognosis. They also require that the person be declared “of sound mind,” a difficult hurdle for someone whose brain is deteriorating. (...)

Ronald Dworkin, an influential legal scholar and the author of “Life’s Dominion: An Argument About Abortion, Euthanasia and Individual Freedom,” wrote about a kind of hierarchy of needs for people in Sandy’s situation, who want their autonomy to be respected even as disease changes the essence of who they are and what autonomy means. He differentiated between “critical interests” (personal goals and desires that make life worth living) and “experiential interests” (enjoying listening to music, for instance, or eating chocolate ice cream). Sandy was appreciating her experiential interests — playing with Felix and working in her garden — but her critical interests were far more sophisticated and were moving out of her reach. Critical interests should take priority when making end-of-life choices on behalf of someone whose changed state renders her less capable of deciding on her own, Dworkin wrote, because critical interests reflect your true identity. The new Sandy seemed to love being a grandmother, but it was important to take into account what the old Sandy would have wanted.

by Robin Marantz Henig, NY Times |  Read more:
Image: Paul Fusco/Magnum Photos

Wednesday, May 13, 2015

Why Can't America Have Great Trains?

Thirty-nine minutes into his southbound ride from Wilmington, Delaware, to Washington, D.C., Joseph H. Boardman, president and CEO of Amtrak, begins to cry. We're in the dining car of a train called the Silver Star, surrounded by people eating hamburgers. The Silver Star runs from New York City to Miami in 31 hours, or five more hours than the route took in 1958, which is when our dining car was built. Boardman and I have been discussing the unfortunate fact that, 45 years since its inception, the company he oversees remains a poorly funded, largely neglected ward of the state, unable to fully control its own finances or make its own decisions. I ask him, "Is this a frustrating job?"

"I guess it could be, and there are times it is," he says. "No question about that. But—" His voice begins to catch. "Sixty-six years old, I've spent my life doing this. I talked to my 80-year-old aunt this weekend, who said, 'Joe, just keep working.' Because I think about retirement." Boardman is a Republican who formerly ran the Federal Railroad Administration and was New York state's transportation commissioner; he has a bushy white mustache and an aw-shucks smile. "We've done good things," he continues. "We haven't done everything right, and I don't make all of the right decisions, and, yes, I get frustrated. But you have to stay up." A tear crawls down his left cheek.

It's easy to love trains—the model kind, the European kind, the kind whose locomotives billow with steam in black-and-white photos of the old American West. It's harder to love Amtrak, the kind we actually ride. Along with PBS and the United States Postal Service, Amtrak is perpetual fodder for libertarian think-tankers and Republican office-seekers on the prowl for government profligacy. Ronald Reagan and George W. Bush repeatedly tried to eliminate its subsidy, while Mitt Romney promised to do the same. Democrats, for their part, aren't interested in slaying Amtrak, but mostly you get the sense they just feel bad for it. "If you ever go to Japan," former Amtrak board member and rail die-hard Mike Dukakis told me, "ride the trains and weep."

It's true: Compared with the high-speed trains of Western Europe and East Asia, American passenger rail is notoriously creaky, tardy, and slow. The Acela, currently the only "high-speed" train in America, runs at an average pace of 68 miles per hour between Washington and Boston; a high-speed train from Madrid to Barcelona averages 154 miles per hour. Amtrak's most punctual trains arrive on schedule 75 percent of the time; judged by Amtrak's lax standards, Japan's bullet trains are late basically 0 percent of the time.

And those stats don't figure to improve anytime soon. While Amtrak isn't currently in danger of being killed, it also isn't likely to do more than barely survive. Last month, the House of Representatives agreed to fund Amtrak for the next four years at a rate of $1.4 billion per year. Meanwhile, the Chinese government—fair comparison or not—will be spending $128 billion this year on rail. (Thanks to the House bill, though, Amtrak passengers can look forward to a new provision allowing cats and dogs on certain trains.)

A few decades ago, news of another middling Amtrak appropriation wouldn't have warranted a second glance; passenger rail was unpopular and widely thought to be obsolete. But recently, Amtrak's popularity has actually spiked. Ridership has increased by roughly 50 percent in the past 15 years, and ridership in the Northeast Corridor stood at an all-time high in 2014. Amtrak also now accounts for 77 percent of all rail and air travel between Washington and New York, up from just 37 percent when it launched the Acela in 2000.

And yet, despite this outpouring of popular demand, despite the clear environmental benefits of rail travel, despite the fact that trains can help relieve urban congestion, despite the professed enthusiasm of the Obama administration (and especially rail fan-in-chief, Joe Biden) for high-speed trains—despite all of this, Amtrak, which runs a deficit and therefore depends on money from Washington, remains on a seemingly permanent path to mediocrity.

What gives, exactly? Why can't Amtrak create any momentum for itself in the political world? Why is the United States apparently condemned to have second-rate trains?

Part of the answer, of course, is geography: Density lends itself to trains, and America is far less dense than, say, Spain or France. But this explanation isn't wholly satisfying because, even in the densest parts of the United States, intercity rail is slow or inefficient.

In an effort to solve the riddle of American passenger rail's stubborn feebleness, I spent a couple months seeking out train obsessives around the country. During these conversations, I heard no shortage of ideas for fixing Amtrak. But perhaps the place to start is in Washington, where Amtrak clearly feels mistreated by its bosses in the federal government. "I think they lost their way a long time ago," Boardman says of Congress. "I don't understand how they don't understand. It's an absolutely necessary service, and it should be much better than it is." Later during our trip, as he shows off a brand-new luggage compartment aboard the Silver Star, he elaborates. "Maybe it's about the kid who gets bullied," he says. "Once they start bullying you, they can't stop."

by Simon Van Zuylen-Wood, National Journal | Read more:
Image: Ricky Carioti/The Washington Post

Tomorrow's Advance Man

If you have a crackerjack idea, one of your stops on Sand Hill Road will be Andreessen Horowitz, often referred to by its alphanumeric URL, a16z. (There are sixteen letters between the “a” in Andreessen and the “z” in Horowitz.) Since the firm was launched, six years ago, it has vaulted into the top echelon of venture concerns. Competing V.C.s, disturbed by its speed and its power and the lavish prices it paid for deals, gave it another nickname: AHo. Each year, three thousand startups approach a16z with a “warm intro” from someone the firm knows. A16z invests in fifteen. Of those, at least ten will fold, three or four will prosper, and one might soar to be worth more than a billion dollars—a “unicorn,” in the local parlance. With great luck, once a decade that unicorn will become a Google or a Facebook and return the V.C.’s money a thousand times over: the storied 1,000x. There are eight hundred and three V.C. firms in the U.S., and last year they spent forty-eight billion dollars chasing that dream. (...)

When a startup is just an idea and a few employees, it looks for seed-round funding. When it has a product that early adopters like—or when it’s run through its seed-round money—it tries to raise an A round. Once the product catches on, it’s time for a B round, and on the rounds go. Most V.C.s contemplating an investment in one of these early rounds consider the same factors. “The bottom seventy per cent of V.C.s just go down a checklist,” Jordan Cooper, a New York entrepreneur and V.C., said. “Monthly recurring revenue? Founder with experience? Good sales pipeline? X per cent of month-over-month growth?” V.C.s also pattern-match. If the kids are into Snapchat, fund things like it: Yik Yak, Streetchat, ooVoo. Or, at a slightly deeper level, if two dropouts from Stanford’s computer-science Ph.D. program created Google, fund more Stanford C.S.P. dropouts, because they blend superior capacity with monetizable dissatisfaction.

Venture capitalists with a knack for the 1,000x know that true innovations don’t follow a pattern. The future is always stranger than we expect: mobile phones and the Internet, not flying cars. Doug Leone, one of the leaders of Sequoia Capital, by consensus Silicon Valley’s top firm, said, “The biggest outcomes come when you break your previous mental model. The black-swan events of the past forty years—the PC, the router, the Internet, the iPhone—nobody had theses around those. So what’s useful to us is having Dumbo ears.” A great V.C. keeps his ears pricked for a disturbing story with the elements of a fairy tale. This tale begins in another age (which happens to be the future), and features a lowborn hero who knows a secret from his hardscrabble experience. The hero encounters royalty (the V.C.s) who test him, and he harnesses magic (technology) to prevail. The tale ends in heaping treasure chests for all, borne home on the unicorn’s back. (...)

V.C.s give the Valley its continuity—and its ammunition. They are the arms merchants who can turn your crazy idea and your expendable youth into a team of coders with Thunderbolt monitors. Apple and Microsoft got started with venture money; so did Starbucks, the Home Depot, Whole Foods Market, and JetBlue. V.C.s made their key introductions and stole from every page of Sun Tzu to help them penetrate markets. And yet V.C.s maintain a zone of embarrassed privacy around their activities. They tell strangers they’re investors, or work in technology, because, in a Valley that valorizes the entrepreneur, they don’t want to be seen as just the money. “I say I’m in the software industry,” one of the Valley’s best-known V.C.s told me. “I’m ashamed of the truth.”

At a hundred and eleven dollars a square foot, Sand Hill Road is America’s most expensive office-rental market—an oak-and-eucalyptus-lined prospect stippled with bland, two-story ski chalets constrained by an ethos of nonconspicuous consumption (except for the Teslas in the parking lot). It’s a community of paranoid optimists. The top firms coöperate and compete by turns, suspicious of any company whose previous round wasn’t led by another top-five firm even as they’re jealous of that firm for leading it. They call this Schadenfreude-riddled relationship “coöpetition.” Firms trumpet their boldness, yet they often follow one another, lemming-like, pursuing the latest innovation—pen-based computers, biotech, interactive television, superconductors, clean tech—off a cliff.

Venture capital became a profession here when an investor named Arthur Rock bankrolled Intel, in 1968. Intel’s co-founder Gordon Moore coined the phrase “vulture capital,” because V.C.s could pick you clean. Semiretired millionaires who routinely arrived late for pitch meetings, they’d take half your company and replace you with a C.E.O. of their choosing—if you were lucky. But V.C.s can also anoint you. The imprimatur of a top firm’s investment is so powerful that entrepreneurs routinely accept a twenty-five per cent lower valuation to get it. Patrick Collison, a co-founder of the online-payment company Stripe, says that landing Sequoia, Peter Thiel, and a16z as seed investors “was a signal that was not lost on the banks we wanted to work with.” Laughing, he noted that the valuation in the next round of funding—“for a pre-launch company from very untested entrepreneurs who had very few customers”—was a hundred million dollars. Stewart Butterfield, a co-founder of the office-messaging app Slack, told me, “It’s hard to overestimate how much the perception of the quality of the V.C. firm you’re with matters—the signal it sends to other V.C.s, to potential employees, to customers, to the tech press. It’s like where you went to college.” (...)

Corporate culture, civic responsibility, becoming a pillar of society—these are not venture’s concerns. Andy Weissman, a partner at New York’s Union Square Ventures, noted that venture in the Valley is a perfect embodiment of the capitalist dynamic that the economist Joseph Schumpeter called “creative destruction.” Weissman said, “Silicon Valley V.C.s are all techno-optimists. They have the arrogant belief that you can take a geography and remove all obstructions and have nothing but a free flow of capital and ideas, and that it’s good, it’s very good, to creatively destroy everything that has gone before.” Some Silicon Valley V.C.s believe that these values would have greater sway if their community left America behind: Andreessen’s nerd nation with a charter and a geographic locale. Peter Thiel favors “seasteading,” establishing floating cities in the middle of the ocean. Balaji Srinivasan, until recently a general partner at a16z and now the chairman of one of its Bitcoin companies, has called for the “ultimate exit.” Arguing that the United States is as fossilized as Microsoft, and that the Valley has become stronger than Boston, New York, Los Angeles, and Washington, D.C., combined, Srinivasan believes that its denizens should “build an opt-in society, ultimately outside the U.S., run by technology.”

The game in Silicon Valley, while it remains part of California, is not ferocious intelligence or a contrarian investment thesis: everyone has that. It’s not even wealth: anyone can become a billionaire just by rooming with Mark Zuckerberg. It’s prescience. And then it’s removing every obstacle to the ferocious clarity of your vision: incumbents, regulations, folkways, people. Can you not just see the future but summon it?

by Tad Friend, New Yorker |  Read more:
Image: Joe Pugiliese

Monday, May 11, 2015

How Friendship Became a Tool of the Powerful

[ed. See also: We Should Use Brands, Not Love Them]

Imagine walking into a coffee shop, ordering a cappuccino, and then, to your surprise, being informed that it has already been paid for. Where did this unexpected gift come from? It transpires that it was left by the previous customer. The only snag, if indeed it is a snag, is that you now have to do the same for the next customer who walks in.

This is known as a “pay-it-forward” pricing scheme. It is something that has been practised by a number of small businesses in California, such as the Karma Kitchen in Berkeley and, in some cases, customers have introduced it spontaneously. On the face of it, it would seem to defy the logic of free-market economics. Markets, surely, are places where we are allowed, even expected, to behave selfishly. With its hippy idealism, pay-it-forward would appear to go against the core tenets of economic calculation.

But there is more to it than this. Researchers from the decision science research group at the University of California, Berkeley have looked closely at pay-it-forward pricing and discovered something with profound implications for how markets and businesses work. It transpires that people will generally pay more under the pay-it-forward model than under a conventional pricing system. As the study’s lead author, Minah Jung, puts it: “People don’t want to look cheap. They want to be fair, but they also want to fit in with the social norms.” Contrary to what economists have long assumed, altruism can often exert a far stronger influence over our decision-making than calculation.

Such findings are typical of the field of behavioural economics, which emerged in the late 1970s. Like regular economists, behavioural economists assume that individuals are usually motivated to maximise their own benefit – but not always. In certain circumstances, they are social and moral animals, even when this appears to undermine their economic interests. They follow the herd and act according to certain rules of thumb. They have some principles that they will not sacrifice for money at all.

It seems that this undermines the cynical, individualist theory of human psychology, which lies at the heart of orthodox economics. Could it be that we are decent, social creatures after all? A great deal of neuroscientific research into the roots of sympathy and reciprocity supports this. Optimists might view this as the basis for a new political hope, of a society in which sharing and gift-giving offer a serious challenge to the power of monetary accumulation and privatisation.

But there is also a more disturbing possibility: that the critique of individualism and monetary calculation is now being incorporated into the armoury of utilitarian policy and management. One of the key insights of behavioural economics is that, if one wants to control other human beings, it is often far more effective to appeal to their sense of morality and social identity than to their self-interest.

This is symptomatic of a more general shift in policy and business practices today. Across various fields of expertise, from healthcare to marketing, from military training to finance, there is rising hope that strategic goals can be achieved through harnessing the power of the “social”. But what exactly does this mean? As the era of social democracy recedes further into the past, the meaning of the term is undergoing a profound transformation. Where once the term implied something concerning society or the common good, increasingly it refers to a technique of psychological intervention on the individual. Informal social connections and friendships are being rendered more visible and measurable. In the process, they are being turned into possible instruments of power.

Over recent years, generosity has become big business. In 2009, Chris Anderson, former editor of Wired magazine, published Free: The Future of a Radical Price. Anderson argued that there was now a strong business case for giving products and services away for free, in order to forge better relationships with customers. Giving things away for free becomes a means of holding an audience captive or building a reputation, which can then be exploited with future sales or advertising. Michael O’Leary, boss of Ryanair, has even suggested that airline tickets might one day be priced at zero, with all costs recovered through additional charges for luggage, using the bathroom, skipping queues, and so on.

What Anderson was highlighting was the potential of non-monetary relationships to increase profits. And just as corporate giving can be used as a way of boosting revenue, so can the magic words that are used in return. Marketing specialists now analyse the optimal way of saying the words “thank you” to a customer, so as to deepen the social relationship with them.

The language of gratitude has infiltrated a number of high-profile advertising campaigns. Around Christmas 2013, Lloyds TSB, one of the British banks to be most embarrassed by the 2008 financial crisis, launched a campaign consisting entirely of cutesy images of childhood friends enjoying happy moments together, concluding with the words “thank you”, written in party balloons. There was no mention of money. More bizarrely, Tesco, whose brand has suffered in recent years, released a series of YouTube videos in 2013 with men in Christmas jumpers singing “thank you” to everyone from the person who cooks Christmas dinner, to those driving safely, to other companies such as Instagram and so on. Tesco, it was implied, sprays gratitude in all directions, regardless of its own private interests.

There is inevitably a limit to how much of a social bond an individual can have with a multinational company. Businesses today are obsessed with being social, but what they typically mean by this is that they are able to permeate peer-to-peer social networks as effectively as possible. Brands hope to play a role in cementing friendships, as a guarantee that they will not be abandoned for more narrowly calculated reasons. So, for example, Coca-Cola has tried a number of somewhat twee marketing campaigns, such as putting individual names (“Sue”, “Tom”, etc) on their bottles as a way to encourage gift-giving. Managers hope that their employees will also act as “brand ambassadors” in their everyday social lives. Meanwhile, neuromarketers have begun studying how successfully images and advertisements trigger common neural responses in groups, rather than in isolated individuals. This, it seems, is a far better indication of how larger populations will respond to advertising.

All this – along with the rise of the “sharing economy”, exemplified by Airbnb and Uber – offers a simple lesson to big business. People will take more pleasure in buying things if the experience can be blended with something that feels like friendship and gift-exchange. The role of money must be airbrushed out of the picture wherever possible. As marketers see it, payment is one of the unfortunate “pain points” in any relationship with a customer, which requires anaesthetising with some form of more social experience. The market must be represented as something else entirely.

Yet the greatest catalyst for the new business interest in being social is, unsurprisingly, the rise of social media. At the same time that behavioural economics has been highlighting the various ways in which we are altruistic creatures, social media offers businesses an opportunity to analyse and target that social behaviour. It allows advertising to be tailored to specific individuals, on the basis of who they know, and what those other people like and purchase. These practices, which are collectively referred to as “social analytics”, mean that tastes and behaviours can be traced in unprecedented detail. The end goal is no different from what it was at the dawn of marketing and management in the late 19th century: making money. What has changed is that each one of us is now viewed as an instrument through which to alter the attitudes and behaviours of our friends and contacts. Behaviours and ideas can be released like “contagions”, in the hope of infecting much larger networks.

by William Davies, The Guardian |  Read more:
Image: Pete Gamlen

The Killing of Osama bin Laden

It’s been four years since a group of US Navy Seals assassinated Osama bin Laden in a night raid on a high-walled compound in Abbottabad, Pakistan. The killing was the high point of Obama’s first term, and a major factor in his re-election. The White House still maintains that the mission was an all-American affair, and that the senior generals of Pakistan’s army and Inter-Services Intelligence agency (ISI) were not told of the raid in advance. This is false, as are many other elements of the Obama administration’s account. The White House’s story might have been written by Lewis Carroll: would bin Laden, target of a massive international manhunt, really decide that a resort town forty miles from Islamabad would be the safest place to live and command al-Qaida’s operations? He was hiding in the open. So America said.

The most blatant lie was that Pakistan’s two most senior military leaders – General Ashfaq Parvez Kayani, chief of the army staff, and General Ahmed Shuja Pasha, director general of the ISI – were never informed of the US mission. This remains the White House position despite an array of reports that have raised questions, including one by Carlotta Gall in the New York Times Magazine of 19 March 2014. Gall, who spent 12 years as the Times correspondent in Afghanistan, wrote that she’d been told by a ‘Pakistani official’ that Pasha had known before the raid that bin Laden was in Abbottabad. The story was denied by US and Pakistani officials, and went no further. In his book Pakistan: Before and after Osama (2012), Imtiaz Gul, executive director of the Centre for Research and Security Studies, a think tank in Islamabad, wrote that he’d spoken to four undercover intelligence officers who – reflecting a widely held local view – asserted that the Pakistani military must have had knowledge of the operation. The issue was raised again in February, when a retired general, Asad Durrani, who was head of the ISI in the early 1990s, told an al-Jazeera interviewer that it was ‘quite possible’ that the senior officers of the ISI did not know where bin Laden had been hiding, ‘but it was more probable that they did [know]. And the idea was that, at the right time, his location would be revealed. And the right time would have been when you can get the necessary quid pro quo – if you have someone like Osama bin Laden, you are not going to simply hand him over to the United States.’

This spring I contacted Durrani and told him in detail what I had learned about the bin Laden assault from American sources: that bin Laden had been a prisoner of the ISI at the Abbottabad compound since 2006; that Kayani and Pasha knew of the raid in advance and had made sure that the two helicopters delivering the Seals to Abbottabad could cross Pakistani airspace without triggering any alarms; that the CIA did not learn of bin Laden’s whereabouts by tracking his couriers, as the White House has claimed since May 2011, but from a former senior Pakistani intelligence officer who betrayed the secret in return for much of the $25 million reward offered by the US, and that, while Obama did order the raid and the Seal team did carry it out, many other aspects of the administration’s account were false.

‘When your version comes out – if you do it – people in Pakistan will be tremendously grateful,’ Durrani told me. ‘For a long time people have stopped trusting what comes out about bin Laden from the official mouths. There will be some negative political comment and some anger, but people like to be told the truth, and what you’ve told me is essentially what I have heard from former colleagues who have been on a fact-finding mission since this episode.’ As a former ISI head, he said, he had been told shortly after the raid by ‘people in the “strategic community” who would know’ that there had been an informant who had alerted the US to bin Laden’s presence in Abbottabad, and that after his killing the US’s betrayed promises left Kayani and Pasha exposed.

The major US source for the account that follows is a retired senior intelligence official who was knowledgeable about the initial intelligence about bin Laden’s presence in Abbottabad. He also was privy to many aspects of the Seals’ training for the raid, and to the various after-action reports. Two other US sources, who had access to corroborating information, have been longtime consultants to the Special Operations Command. I also received information from inside Pakistan about widespread dismay among the senior ISI and military leadership – echoed later by Durrani – over Obama’s decision to go public immediately with news of bin Laden’s death. The White House did not respond to requests for comment.

by Seymour M. Hersh, London Review of Books |  Read more:
Image: Politico