Friday, January 31, 2014

It’s a Services World


With the announcement of Nest’s acquisition by Google for $3.2 billion last week, the entire technology industry was thrown into hypergear, deliberating why such a high price was placed on something that appeared to be so simple.

It’s only a thermostat after all, right? Wrong.

Nest helps consumers control the most energy-guzzling aspects of their homes – heating and air conditioning – which account for 56 percent of the average home’s energy consumption. What made the Nest acquisition so appealing was its tie to this mandatory service: a product so closely embedded in the consumer’s life is attractive to any mega-corporation that wants to take advantage of the thing that powers everything – electricity.

Now add a $1.99-per-month subscription that connects Nest to Google services, and you’ve opened up far more consumers to replacing their “ugly” thermostat than would ever pay the $249 upfront, one-off purchase price. That could allow Nest to work its way into millions of homes – meaning that 56 percent of home energy use, part of the monthly service we are all forced to pay for, would largely be monitored and controlled by one of the most powerful companies in the world: Google.

Google has done extremely well at taking expensive products and turning them into services, often for free. For example, Google’s acquisition of the analytics company Urchin Software Corporation in 2005 turned a very high-dollar offering into a free service for website owners. And Google is transforming an industry formerly dominated by Microsoft: Google Drive – free for most and offered at a low monthly fee for businesses – replaces the need for Word, Excel and the rest of the antiquated one-off software offerings.

This is indicative of a shift from one-off product sales to services that will become essential to our everyday lives, things that we will pay for over and over again. In an age when consumers would balk at being forced to pay $120 up front for a year’s worth of music streaming, they are happy to have their money taken away dollar by dollar at a $10-a-month clip. Whether it’s more than $1,200 a year on an iPhone plan or a monthly subscription for home-delivered products and services, it’s time for Silicon Valley to realize we have reached the age of leasing, not buying.

Welcome to the age of services. With every new app and product that debuts, entrepreneurs now more than ever need to take into consideration the value proposition of getting consumers quickly on the “titty” (as a great quote from the TV series “House of Cards” so bluntly put it). (...)

Much of this shift has to do with up-and-coming generations that have a bent for instant gratification at a bargain price (or even free). They want the $1.99-a-month subscription for music, with on-demand video for $9.99. The Internet has made it possible to lease rather than buy, which large companies have done for decades to better balance their books. Depreciation on large equipment (cars, planes, buildings, etc.) is better if you can match it with consumption. Now technology has made it feasible for the average Joe to do the same.

This generation may never really own a thing in their lives. Long gone are the days of saving up to buy something when credit cards and layaway plans await. Instant consumerism is the driving force, and subscription services lead the way.

by Manu Rekhi, PandoDaily |  Read more:
Image: Thinkstock

Klaus Fussmann (German, b. 1938), Marguerites, 2010.
via:

How to Escape the Community College Trap

When Daquan McGee got accepted to the Borough of Manhattan Community College in the spring of 2010, he was 19 and still finding his footing after a two-year prison sentence for attempted robbery. He signed up for the standard battery of placement tests in reading, writing, and math; took them cold; and failed two—writing and math. Steered into summer developmental education (otherwise known as remediation), he enrolled in an immersion writing course, which he passed while working full-time at a Top Tomato Super Store. Then McGee learned of a program for which a low-income student like him might qualify, designed to maximize his chances of earning a degree. At a late-summer meeting, he got the rundown on the demands he would face.

McGee would have to enroll full-time in the fall, he was told; part-time attendance was not permitted. Every other week, he would be required to meet with his adviser, who would help arrange his schedule and track his progress. In addition to his full course load, McGee would have to complete his remaining remedial class, in math, immediately. If he slipped up, his adviser would hear about it from his instructor—and mandatory tutoring sessions would follow. If he failed, he would have to retake the class right away. Also on McGee’s schedule was a non-optional, noncredit weekly College Success Seminar, featuring time-management strategies, tips on study habits and goal setting, exercises in effective communication, and counsel on other life skills. The instructor would be taking attendance. If McGee complied with all that was asked of him, he would be eligible for a monthly drill: lining up in one of the long hallways in the main campus building to receive a free, unlimited MetroCard good for the following month. More important, as long as he stayed on track, the portion of his tuition not already covered by financial aid would be waived.

In a hurry to make up for his wasted prison years, McGee signed up. The pace, as he’d been warned, was fast from the start, and did not ease up after the fall. Through the spring semester and on into his second year, his course load remained heavy, and the advisory meetings continued, metronomically. He was encouraged to take winter- and summer-term classes, filling in the breaks between semesters. McGee, a guy with a stocky boxer’s build, doesn’t gush—he conveys low-key composure—but when I met him in October of 2012, early in his third year, he had only praise for the unremitting pushiness, and for the array of financial benefits that came along with it. The package was courtesy of a promising experimental initiative that goes by the snappy acronym ASAP, short for Accelerated Study in Associate Programs. Last winter, McGee graduated with an associate’s degree in multimedia studies. It had taken him two and a half years.

In the community-college world, McGee’s achievement is a shockingly rare feat, and the program that so intently encouraged him to accomplish it is a striking anomaly. The country’s low-cost sub-baccalaureate system—created a century ago to provide an open and affordable entry into higher education to an ever more diverse group of Americans—now enrolls 45 percent of all U.S. undergraduates, many of them part-time students. But only a fraction ever earn a degree, and hardly anyone does it quickly. The associate’s degree is nominally a two-year credential, and the system is proud of its transfer function, sending students onward to four-year schools, as juniors, to pursue a bachelor’s degree—the goal that 80 percent of entrants say they aspire to. Reality, however, typically confounds that tidy timeline. In urban community colleges like the Borough of Manhattan Community College, the national three-year graduation rate is 16 percent. Nationwide, barely more than a third of community-college enrollees emerge with a certificate or degree within six years.

Behind these dismal numbers lie the best of intentions. Community colleges have made it their mission to offer easy access, flexibility, and lots of options to a commuter population now dominated by “nontraditional” students. That’s a catchall label for the many people who don’t fit the classic profile of kids living in dorms, being financed by their parents. Nearly 70 percent of high-school graduates currently pursue some kind of postsecondary schooling, up from half in 1980. The surge is hardly surprising: higher education, over the past three decades, has become a prerequisite for a middle-class life. But of course, as the matriculation rate has climbed, so has the number of students who enter college with marginal credentials and other handicaps. The least academically prepared and most economically hard-pressed among them are typically bound for community college, where low-income students—plenty of them the first in their family to venture beyond high school—outnumber their high-income peers 2-to-1. Many of these students are already juggling jobs and family commitments by their late teens (McGee and his longtime girlfriend had a baby daughter in the fall of his freshman year). This could hardly be a more challenging population to serve.

The bet public community colleges have made—that the best way to meet the needs of their constituents is by offering as much flexibility and convenience as possible—makes a certain intuitive sense in light of such complications. So does a commitment to low cost. Give students a cheap, expansive menu, served up at all hours; don’t demand a specific diet—that’s not a bad metaphor for the community-college experience today.

by Ann Hulbert, The Atlantic |  Read more:
Image: Mike McQuade

U.S. Spying at Copenhagen Climate Talks Sparks Anger

[ed. But, but... they told us it was just for terrorism, they don't actually read the metadata!]

Developing countries have reacted angrily to revelations that the United States spied on other governments at the Copenhagen climate summit in 2009.

Documents leaked by Edward Snowden show how the US National Security Agency (NSA) monitored communication between key countries before and during the conference to give their negotiators advance information about other positions at the high-profile meeting where world leaders including Barack Obama, Gordon Brown and Angela Merkel failed to agree to a strong deal on climate change.

Jairam Ramesh, the then Indian environment minister and a key player in the talks that involved 192 countries and 110 heads of state, said: "Why the hell did they do this and at the end of this, what did they get out of Copenhagen? They got some outcome but certainly not the outcome they wanted. It was completely silly of them. First of all, they didn't get what they wanted. With all their hi-tech gizmos and all their snooping, ultimately the Basic countries [Brazil, South Africa, India and China] bailed Obama out. With all their snooping what did they get?"

Martin Khor, an adviser to developing countries at the summit and director of the South Centre thinktank, said: "Would you play poker with someone who can see your cards? Spying on one another like this is absolutely not on. When someone has an upper hand, it is very disconcerting. There should be an assurance in negotiations like this that powerful players are not going to gain undue advantage with technological surveillance.

"For negotiations as complex as these we need maximum goodwill and trust. It is absolutely critical. If there is anything that prevents a level playing field, that stops negotiations being held on equal grounds. It disrupts the talks," he said. (...)

The document shows the NSA had provided advance details of the Danish plan to "rescue" the talks should they founder, and also had learned of China's efforts to coordinate its position with India before the conference.

The talks – which ended in disarray after the US, working with a small group of 25 countries, tried to ram through an agreement that other developing countries mostly rejected – were marked by subterfuge, passion and chaos.

Members of the Danish negotiating team told the Danish newspaper Information that both the US and Chinese delegations were "peculiarly well-informed" about closed-door discussions. "They simply sat back, just as we had feared they would if they knew about our document," one source told Information.

by John Vidal and Suzanne Goldenberg, The Guardian | Read more:
Image: Olivier Morin/AFP/Getty Images

Wednesday, January 29, 2014


[ed. Vermeer selfie]
via:

David Jonason, Toast & Coffee, 2004
via:

The Largest Free Transit Experiment in the World

Last January, Tallinn, the capital city of Estonia, did something that no other city its size had done before: It made all public transit in the city free for residents.

City officials made some bold predictions about what would result. There would be a flood of new passengers on Tallinn’s buses and trams — as many as 20 percent more riders. Carbon emissions would decline substantially as drivers left their cars at home and rode transit instead. And low-income residents would gain new access to jobs that they previously couldn’t get to. As Mayor Edgar Savisaar likes to say, zeroing out commuting costs was for some people as good as receiving a 13th month of salary.

One year later, this city of 430,000 people has firmly established itself as the leader of a budding international free-transit movement. Tallinn has hosted two conferences for city officials, researchers and journalists from across Europe to discuss the idea. The city has an English-language website devoted to its experiment. And promotional materials have proclaimed Tallinn the "capital of free public transport."

The idea has been very popular with Tallinners. In an opinion poll, nine out of ten people said they were happy with how it’s going. Pille Saks is one of them. "I like free rides," says Saks, a 29-year-old secretary who goes to work by bus. "I have a car, but I don’t like to drive with it, especially in the winter when there is a lot of snow and roads are icy."

Different Reads on Ridership

What’s less clear on the first anniversary of free transit in Tallinn is whether it has actually changed commuting behavior all that much.

Mayor Savisaar says it has. He points to numbers from early last year, showing that traffic at the biggest crossroads had decreased by 14 percent compared with a week shortly before the policy started. He has also cited substantial increases in transit riders. "We are frequently asked … why we are offering free-of-charge public transport," Savisaar told a gathering of officials from Europe and China in August. "It is actually more appropriate to ask why most cities in the world still don’t."

by Sulev Vedler, The Atlantic |  Read more:
Image: Vallo Kruuser

The Botmaker Who Sees Through the Internet

In the recent Spike Jonze film “Her,” a lonely man buys a futuristic Siri-style computer program designed to interact fluently with humans. The program’s female voice speaks intimately to its owner through an earpiece, while also organizing his e-mails and keeping track of his appointments, until eventually he falls in love with it. Watching it happen is unsettling because of how well the program works on our human protagonist—how powerfully this computerized, disembodied simulation of a woman affects him.

The piece of software at the heart of “Her” only exists within a work of science fiction, of course—one set in a comfortably vague point in the future. While today’s smartphones and computers do “talk” to their users, they do so without the emotionally potent and slightly unpredictable qualities that might make a machine feel human.

But in certain quarters, automated beings with that exact set of qualities have already started to emerge. For the past few years, tiny computer programs have been telling jokes, writing poems, complaining, commenting on the news, and awkwardly flirting—on the Web and through the medium of Twitter, where their messages live side by side with those composed by the carbon-based life-forms who take daily delight in their antics.

One of the most prolific makers of these little programs—or bots, as they’re known—is 30-year-old Darius Kazemi, a computer programmer who lives in Somerville and works at a technology company called Bocoup, while devoting himself to the veritable petting zoo of autonomous digital creatures he has invented in his free time.

Chances are you haven’t heard of Kazemi. But over the past two years, he has emerged as one of the most closely watched and pioneering figures at the intersection of technology, cultural commentary, and what feels to many like a new kind of Web-native art. (...)

Kazemi’s dozens of projects have won him admirers among a range of people so wide it suggests the world doesn’t quite have a category for him yet. His work is tracked by video game designers, comedians, philosophers, and fellow bot-makers; an English literature professor named Leonardo Flores has written about the output of his bots in an online journal devoted to electronic poetry. Web designer Andrew Simone, who follows Kazemi’s work, calls him “a deeply subversive, bot-making John Cage.”

Kazemi is part of a small but vibrant group of programmers who, in addition to making clever Web toys, have dedicated themselves to shining a spotlight on the algorithms and data streams that are nowadays humming all around us, and using them to mount a sharp social critique of how people use the Internet—and how the Internet uses them back.

By imitating humans in ways both poignant and disorienting, Kazemi’s bots focus our attention on the power and the limits of automated technology, as well as reminding us of our own tendency to speak and act in ways that are essentially robotic. While they’re more conceptual art than activism, the bots Kazemi is creating are acts of provocation—ones that ask whether, as computers get better at thinking like us and shaping our behavior, they can also be rewired to spring us free.

by Leon Neyfakh, Boston Globe |  Read more:
Image: Lane Turner

Fast Food, Slow Customers

[ed. See also: Old McDonalds]

With its low coffee prices, plentiful tables and available bathrooms, McDonald’s restaurants all over the country, and even all over the world, have been adopted by a cost-conscious set as a coffeehouse for the people, a sort of everyman’s Starbucks.

Behind the Golden Arches, older people seeking company, schoolchildren putting off homework time and homeless people escaping the cold have transformed the banquettes into headquarters for the kind of laid-back socializing once carried out on a park bench or brownstone stoop.

But patrons have also brought the mores of cafe culture, where often a single purchase is permission to camp out with a laptop. Increasingly, they seem to linger over McCafe Lattes, sometimes spending a lot of time but little money in outlets of this chain, which rose to prominence on a very different business model: food that is always fast. And so restaurant managers and franchise owners are often frustrated by these, their most loyal customers. Such regulars hurt business, some say, and leave little room for other customers. Tensions can sometimes erupt.

In the past month, those tensions came to a boil in New York City. When management at a McDonald’s in Flushing, Queens, called the police on a group of older Koreans, prompting outrage at the company’s perceived rudeness, calls for a worldwide boycott and a truce mediated by a local politician, it became a famous case of a struggle that happens daily at McDonald’s outlets in the city and beyond.

Is the customer always right — even the ensconced penny-pincher? The answer seems to be yes among the ones who do the endless sitting. (...)

McDonald’s is not alone in navigating this tricky territory. Last year, a group of deaf patrons sued Starbucks after a store on Astor Place in Lower Manhattan forbade their meet-up group to convene there, complaining they did not buy enough coffee. Spending the day nursing a latte is behavior reinforced by franchises like Starbucks and others that seem to actively cultivate it, offering free Wi-Fi that encourages customers to park themselves and their laptops for hours.

There is a social benefit to such spots, some experts said.

“As long as there have been cities, these are the kind of places people have met in,” said Don Mitchell, a professor of urban geography at Syracuse University and the author of “The Right to the City: Social Justice and the Fight for Public Space.”

“Whether they have been private property, public property or something in between,” he said, “taking up space is a way to claim a right to be, a right to be visible, to say, ‘We’re part of the city too.’ ”

by Sarah Maslin Nir, NY Times |  Read more:
Image: Ruth Fremson/The New York Times

Tuesday, January 28, 2014

Seven Questions for Bob Dylan

Bob Dylan is either the most public private man in the world or the most private public one. He has a reputation for being silent and reclusive; he is neither. He has been giving interviews—albeit contentious ones—for as long as he's been making music, and he's been making music for more than fifty years. He's seventy-two years old. He's written one volume of an autobiography and is under contract to write two more. He's hosted his own radio show. He exhibits his paintings and his sculpture in galleries and museums around the world. Ten years ago, he cowrote and starred in a movie, Masked and Anonymous, that was about his own masked anonymity. He is reportedly working on another studio recording, his thirty-sixth, and year after year and night after night he still gets on stage to sing songs unequaled in both their candor and circumspection. Though famous as a man who won't talk, Dylan is and always has been a man who won't shut up.

And yet he has not given in; he has preserved his mystery as assiduously as he has curated his myth, and even after a lifetime of compulsive disclosure he stands apart not just from his audience but also from those who know and love him. He is his own inner circle, a spotlit Salinger who has remained singular and inviolate while at the same time remaining in plain sight.

It's quite a trick. Dylan's public career began at the dawn of the age of total disclosure and has continued into the dawn of the age of total surveillance; he has ended up protecting his privacy at a time when privacy itself is up for grabs. But his claim to privacy is compelling precisely because it's no less enigmatic and paradoxical than any other claim he's made over the years. Yes, it's important to him—"of the utmost importance, of paramount importance," says his friend Ronee Blakley, the Nashville star who sang with Dylan on his Rolling Thunder tour. And yes, the importance of his privacy is the one lesson he has deigned to teach, to the extent that his friends Robbie Robertson and T Bone Burnett have absorbed it into their own lives. "They both have learned from him," says Jonathan Taplin, who was the Band's road manager and is now a professor at the University of Southern California. "They've learned how to keep private, and they lead very private lives. That's the school of Bob Dylan—the smart guys who work with him learn from him. Robbie's very private. And T Bone is so private, he changes his e-mail address every three or four weeks."

How does Dylan do it? How does he impress upon those around him the need to protect his privacy? He doesn't. They just do. That's what makes his privacy Dylanesque. It's not simply a matter of Dylan being private; it's a matter of Dylan's privacy being private—of his manager saying, when you call, "Oh, you're the guy writing about Bob Dylan's privacy. How can I not help you?" (...)

"I've always been appalled by people who come up to celebrities while they're eating," says Lynn Goldsmith, a photographer who has taken pictures of Dylan, Springsteen, and just about every other god of the rock era. "But with Dylan, it's at an entirely different level. With everybody else, it's 'We love you, we love your work.' With Dylan, it's 'How does it feel to be God?' It's 'I named my firstborn after you.' In some ways, the life he lives is not the life he's chosen. In some ways, the life he leads has been forced upon him because of the way the public looks upon him to be."

That's the narrative, anyway—Dylan as eternal victim, Dylan as the measure of our sins. There is another narrative, however, and it's that Dylan is not just the first and greatest intentional rock 'n' roll poet. He's also the first great rock 'n' roll asshole. The poet expanded the notion of what it was possible for a song to express; the asshole shrunk the notion of what it was possible for the audience to express in response to a song. The poet expanded what it meant to be human; the asshole noted every human failing, keeping a ledger of debts never to be forgotten or forgiven. As surely as he rewrote the songbook, Dylan rewrote the relationship between performer and audience; his signature is what separates him from all his presumed peers in the rock business and all those who have followed his example. "I never was a performer who wanted to be one of them, part of the crowd," he said, and in that sentence surely lies one of his most enduring achievements: the transformation of the crowd into an all-consuming but utterly unknowing them.

"We played with McCartney at Bonnaroo, and the thing about McCartney is that he wants to be loved so much," Jeff Tweedy says. "He has so much energy, he gives and gives and gives, he plays three hours, and he plays every song you want to hear. Dylan has zero fucks to give about that. And it's truly inspiring.

by Tom Junod, Esquire |  Read more:
Image: Fame Pictures; historic Dylan photos: AP

Monday, January 27, 2014


Brenda Cablayan, Houses on the Hill
via:

Obliquity

If you want to go in one direction, the best route may involve going in the other. Paradoxical as it sounds, goals are more likely to be achieved when pursued indirectly. So the most profitable companies are not the most profit-oriented, and the happiest people are not those who make happiness their main aim. The name of this idea? Obliquity.

The American continent separates the Atlantic Ocean in the east from the Pacific Ocean in the west. But the shortest crossing of America follows the route of the Panama Canal, and you arrive at Balboa Port on the Pacific Coast some 30 miles to the east of the Atlantic entrance at Colon.

A map of the isthmus shows how the best route west follows a south-easterly direction. The builders of the Panama Canal had comprehensive maps, and understood the paradoxical character of the best route. But only rarely in life do we have such detailed knowledge. We are lucky even to have a rough outline of the terrain.

Before the canal, anyone looking for the shortest traverse from the Atlantic to the Pacific would naturally have gazed westward. The south-east route was found by Vasco Nunez de Balboa, a Spanish conquistador who was looking for gold, not oceans.

George W. Bush speaks mangled English rather than mangled French because James Wolfe captured Quebec in 1759 and made the British crown the dominant influence in Northern America. Eschewing obvious lines of attack, Wolfe’s men scaled the precipitous Heights of Abraham and took the city from the unprepared defenders. There are many such episodes in military history. The Germans defeated the Maginot Line by going round it, while Japanese invaders bicycled through the Malayan jungle to capture Singapore, whose guns faced out to sea. Oblique approaches are most effective in difficult terrain, or where outcomes depend on interactions with other people. Obliquity is the idea that goals are often best achieved when pursued indirectly.

Obliquity is characteristic of systems that are complex, imperfectly understood, and change their nature as we engage with them. (...)

The distinction between intent and outcome is central to obliquity. Wealth, family relationships and employment all contribute to happiness, but these activities are not best conducted with happiness as their goal. The pursuit of happiness is a strange phrase in the US Declaration of Independence because happiness is not best achieved when pursued. A satisfying life depends above all on building good personal relationships with other people – but we entirely miss the point if we seek to develop these relationships with our personal happiness as a primary goal.

Humans have well-developed capacities to detect purely instrumental behaviour. The actions of the man who buys us a drink in the hope that we will buy his mutual funds are formally the same as those of the friend who buys us a drink because he likes our company, but it is usually not too difficult to spot the difference. And the difference matters to us. “Honesty is the best policy, but he who is governed by that maxim is not an honest man,” wrote Archbishop Whately nearly two centuries ago. If we deal with someone for whom honesty is the best policy, we can never be sure that this is not the occasion on which he will conclude that honesty is no longer the best policy. Such experiences have been frequent in financial markets in the last decade. We do better to rely on people who are honest by character rather than honest by choice.

by John Kay, Financial Times via Naked Capitalism |  Read more:
Image: via:

Happiness and Its Discontents

A quick survey of our culture—particularly our self-help culture—confirms Freud's observation. One could even say that, in our era, the idea that we should lead happy, balanced lives carries the force of an obligation: We are supposed to push aside our anxieties in order to enjoy our lives, attain peace of mind, and maximize our productivity. The cult of "positive thinking" even assures us that we can bring good things into our lives just by thinking about them. (...)

Needless to say, our fixation on the ideal of happiness diverts our attention from collective social ills, such as socioeconomic disparities. As Barbara Ehrenreich has shown, when we believe that our happiness is a matter of thinking the right kinds of (positive) thoughts, we become blind to the ways in which some of our unhappiness might be generated by collective forces, such as racism or sexism. Worst of all, we become callous to the lot of others, assuming that if they aren't doing well, if they aren't perfectly happy, it's not because they're poor, oppressed, or unemployed but because they're not trying hard enough.

If all of that isn't enough to make you suspicious of the cultural injunction to be happy, consider this basic psychoanalytic insight: Human beings may not be designed for happy, balanced lives. The irony of happiness is that it's precisely when we manage to feel happy that we are also most keenly aware that the feeling might not last. Insofar as each passing moment of happiness brings us closer to its imminent collapse, happiness is merely a way of anticipating unhappiness; it's a deviously roundabout means of producing anxiety.

Take the notion that happiness entails a healthy lifestyle. Our society is hugely enthusiastic about the idea that we can keep illness at bay through a meticulous management of our bodies. The avoidance of risk factors such as smoking, drinking, and sexual promiscuity, along with a balanced diet and regular exercise, is supposed to guarantee our longevity. To a degree, that is obviously true. But the insistence on healthy habits is also a way to moralize illness, to cast judgment on those who fail to adhere to the right regimen. Ultimately, as the queer theorist Tim Dean has illustrated, we are dealing with a regulation of pleasure—a process of medicalization that tells us which kinds of pleasures are acceptable and which are not.

I suspect that beneath our society's desperate attempts to minimize risk, and to prescribe happiness as an all-purpose antidote to our woes, there resides a wretched impotence in the face of the intrinsically insecure nature of human existence. As a society, we have arguably lost the capacity to cope with this insecurity; we don't know how to welcome it into the current of our lives. We keep trying to brush it under the rug because we have lost track of the various ways in which our lives are not meant to be completely healthy and well adjusted.

Why, exactly, is a healthy and well-adjusted life superior to one that is filled with ardor and personal vision but that is also, at times, a little unhealthy and maladjusted? Might some of us not prefer lives that are heaving with an intensity of feeling and action but that do not last quite as long as lives that are organized more sensibly? Why should the good life equal a harmonious life? Might not the good life be one that includes just the right amount of anxiety? Indeed, isn't a degree of tension a precondition of our ability to recognize tranquillity when we are lucky enough to encounter it? And why should our lives be cautious rather than a little dangerous? Might not the best lives be ones in which we sometimes allow ourselves to become a little imprudent or even a tad unhinged?

by Mari Ruti, Chronicle of Higher Education |  Read more:
Image: Geoffrey Moss

Sunday, January 26, 2014

Game Change


By the time the contenders for Super Bowl XLVIII were set, two weekends ago, a hero and a villain had been chosen, too. The Denver Broncos’ quarterback, the aging, lovable Peyton Manning, had outplayed the Patriots to win the A.F.C. title. Meanwhile, in the N.F.C. championship game, Richard Sherman, a cornerback for the Seattle Seahawks, became the designated bad guy. With thirty-one seconds left to play, Colin Kaepernick, the San Francisco 49ers quarterback, had the ball on Seattle’s eighteen-yard line—the 49ers were losing by six points and needed a touchdown. He spotted Michael Crabtree, a wide receiver, and sent him the pass. Sherman twisted up in the air until he seemed almost in synch with the ball’s spiralling, then tipped the ball into the hands of another defender for an interception, and won the game.

Sherman was swarmed by his teammates but broke away to chase after Crabtree. He stretched out a hand and said, “Hell of a game, hell of a game,” to which Crabtree responded by shoving him in the face mask. Moments later, Sherman was surrounded by reporters and cameramen; by then, he had acquired an N.F.C. champions’ cap, which left his eyes in shadow, and his long dreadlocks hung loose. When Erin Andrews, of Fox Sports, asked him about the final play, he more or less exploded. “I’m the best corner in the game!” he proclaimed. “When you try me with a sorry receiver like Crabtree, that’s the result you gonna get! Don’t you ever talk about me!”

“Who was talking about you?” Andrews asked.

“Crabtree! Don’t you open your mouth about the best, or I’m gonna shut it for you real quick! L.O.B.!”

L.O.B.: that’s Legion of Boom, the nickname of the Seattle defense. The video of the “epic rant,” as it was called, went viral. Andrews told GQ that the response was so overwhelming that her Twitter account froze. She added, “Then we saw it was taking on a racial turn.” Some people expressed alarm that an angry black man was shouting at a blond-haired woman. (Andrews immediately shut down that line of complaint.) Many people expressed a hope that Manning would put Sherman in his place. The names that he was called were numerous, offensive, and explicitly racial, but one that stood out—it was used more than six hundred times on television, according to Deadspin—was “thug.”

by Amy Davidson, New Yorker |  Read more:
Image: via:

Almost Everything in "Dr. Strangelove" Was True

[ed. One of my all-time, top-ten movie favorites. Maybe top-five.]

This month marks the fiftieth anniversary of Stanley Kubrick’s black comedy about nuclear weapons, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.” Released on January 29, 1964, the film caused a good deal of controversy. Its plot suggested that a mentally deranged American general could order a nuclear attack on the Soviet Union, without consulting the President. One reviewer described the film as “dangerous … an evil thing about an evil thing.” Another compared it to Soviet propaganda. Although “Strangelove” was clearly a farce, with the comedian Peter Sellers playing three roles, it was criticized for being implausible. An expert at the Institute for Strategic Studies called the events in the film “impossible on a dozen counts.” A former Deputy Secretary of Defense dismissed the idea that someone could authorize the use of a nuclear weapon without the President’s approval: “Nothing, in fact, could be further from the truth.” (See a compendium of clips from the film.) When “Fail-Safe”—a Hollywood thriller with a similar plot, directed by Sidney Lumet—opened, later that year, it was criticized in much the same way. “The incidents in ‘Fail-Safe’ are deliberate lies!” General Curtis LeMay, the Air Force chief of staff, said. “Nothing like that could happen.” The first casualty of every war is the truth—and the Cold War was no exception to that dictum. Half a century after Kubrick’s mad general, Jack D. Ripper, launched a nuclear strike on the Soviets to defend the purity of “our precious bodily fluids” from Communist subversion, we now know that American officers did indeed have the ability to start a Third World War on their own. And despite the introduction of rigorous safeguards in the years since then, the risk of an accidental or unauthorized nuclear detonation hasn’t been completely eliminated.

The command and control of nuclear weapons has long been plagued by an “always/never” dilemma. The administrative and technological systems that are necessary to insure that nuclear weapons are always available for use in wartime may be quite different from those necessary to guarantee that such weapons can never be used, without proper authorization, in peacetime. During the nineteen-fifties and sixties, the “always” in American war planning was given far greater precedence than the “never.” Through two terms in office, beginning in 1953, President Dwight D. Eisenhower struggled with this dilemma. He wanted to retain Presidential control of nuclear weapons while defending America and its allies from attack. But, in a crisis, those two goals might prove contradictory, raising all sorts of difficult questions. What if Soviet bombers were en route to the United States but the President somehow couldn’t be reached? What if Soviet tanks were rolling into West Germany but a communications breakdown prevented NATO officers from contacting the White House? What if the President were killed during a surprise attack on Washington, D.C., along with the rest of the nation’s civilian leadership? Who would order a nuclear retaliation then?

With great reluctance, Eisenhower agreed to let American officers use their nuclear weapons, in an emergency, if there were no time or no means to contact the President. Air Force pilots were allowed to fire their nuclear anti-aircraft rockets to shoot down Soviet bombers heading toward the United States. And about half a dozen high-level American commanders were allowed to use far more powerful nuclear weapons, without contacting the White House first, when their forces were under attack and “the urgency of time and circumstances clearly does not permit a specific decision by the President, or other person empowered to act in his stead.” Eisenhower worried that providing that sort of authorization in advance could make it possible for someone to do “something foolish down the chain of command” and start an all-out nuclear war. But the alternative—allowing an attack on the United States to go unanswered or NATO forces to be overrun—seemed a lot worse. Aware that his decision might create public unease about who really controlled America’s nuclear arsenal, Eisenhower insisted that his delegation of Presidential authority be kept secret. At a meeting with the Joint Chiefs of Staff, he confessed to being “very fearful of having written papers on this matter.”

President John F. Kennedy was surprised to learn, just a few weeks after taking office, about this secret delegation of power. “A subordinate commander faced with a substantial military action,” Kennedy was told in a top-secret memo, “could start the thermonuclear holocaust on his own initiative if he could not reach you.” Kennedy and his national-security advisers were shocked not only by the wide latitude given to American officers but also by the loose custody of the roughly three thousand American nuclear weapons stored in Europe. Few of the weapons had locks on them. Anyone who got hold of them could detonate them. And there was little to prevent NATO officers from Turkey, Holland, Italy, Great Britain, and Germany from using them without the approval of the United States.

by Eric Schlosser, New Yorker |  Read more:
Image: Columbia Pictures

Where'd All The Cocaine Go?

Toward the end of last year, the DEA published its 2013 National Drug Threat Assessment Summary, a 28-page report chronicling drug consumption trends across the United States. These include the continued rise in abuse of prescription drugs (second only to marijuana in popularity), the increase in the production of heroin in Mexico and its availability in the U.S., and the emergence of synthetic designer drugs.

Much of the report is unremarkable—until you arrive at the section on cocaine. “According to [National Seizure System] data,” it reads, “approximately 16,908 kilograms of cocaine were seized at the southwest Border in 2011. During 2012, only 7,143 kilograms of cocaine were seized, a decrease of 58 percent.”

That sharp decline echoes an ongoing trend: 40 percent fewer people in the United States used cocaine in 2012 than they did in 2006; only 19 percent of Chicago arrestees had cocaine in their system two years ago compared to 50 percent in 2000; and fewer high school seniors say they’ve used cocaine in the last 12 months than at any time since the mid-70s. In fact, the report indicates cocaine was sporadically unavailable in Chicago, Houston, Baltimore, and St. Louis in the spring of 2012. So where’d the blow go? (...)

To speak at greater length on the subject, I reached out to Mark Kleiman, UCLA professor of public policy and the nation’s leading authority on drug policy. Earlier this year he gained chronic celebrity status when Washington tapped him to be the state’s “pot czar.” On a recent Sunday morning, Professor Kleiman and I discussed the disappearance of cocaine and whether it would ever come back.

Why would a drug like cocaine ever disappear?
Drug use tends to follow epidemic cycles. When a drug appears, or reappears, it tends to do so at the top of the social spectrum—it’s associated with glamour. People are using it for the first time, and they’re having a good time. Very few people are in trouble with it because use is relatively new and it has terrific word of mouth. People say, “Oh my God, this is wonderful, you have to try this!” So you literally get an exponential growth in use. Every new user is a potential source of additional new users. You get a very rapid rise in the number of users.

As David Musto pointed out in The American Disease, over time, two things happen. Once everybody susceptible to the suggestion of “Let’s try this” has tried it, there’s a declining pool of new users. And, some of the original users have been at it long enough to develop a bad habit. So now there are fewer users to tell all their friends that "this is wonderful," and more problem users who either tell their friends, or demonstrate by their behavior, that this is not wonderful.

How did this cycle play out with cocaine?
In the case of cocaine, there was a rapid price decrease as drug dealers crowded into the market to take advantage of the bonanza. The price of cocaine dropped by 80 percent, which brought a new user group into the population. The development of the crack market made crack available to anyone with $5 for a rock; in the powder cocaine market, the price of admission was $100 for a gram. So, the social status of the drug fell along with the user group. Now, using cocaine puts you in a class not with hedge-fund managers, but with $5 crack whores. Surprisingly, people prefer to be in the “hedge-fund-manager” category, which doesn’t necessarily reflect sound moral judgment but is a social fact.

All of those things created a peak in use. The number of people starting cocaine use peaked in about 1985, which was before the Len Bias affair. Then we got a set of panic-driven policies aimed at suppressing the crack epidemic. We got mandatory sentencing, aggressive law enforcement, and a ramping-up of the War on Drugs. (...)

Is that lifecycle built into every drug?
Yes, the question is when a drug moves from being purely epidemic to being endemic. And that’s happened with cocaine. Remember the first cocaine epidemic—the long slow one that starts with Sigmund Freud. That one played itself out by the late 1920s. After that, cocaine use went close to zero. That didn’t happen this time—there is still cocaine initiation going on. I do not see any time soon when cocaine is not part of the American scene. It looks to me as if cocaine, the opiates and cannabis, like alcohol, now have a steady user base—not just an occasional flare-up. But a drug that’s as destructive as cocaine is when heavily used—especially as crack—isn’t going to become endemic at a high level. The drug we have that’s endemic at a high level is alcohol, and unfortunately that’s not going away. And it looks to me like cannabis is going to join alcohol.

by Michael Zelenko, Vice | Read more:
Image: US Coast Guard

Saturday, January 25, 2014


Photo: markk, Rabbit Island

Twitter's Achilles' Heel

At some point, Twitter and the rest of social media became less about wanting to share the news and more about wanting to be the news.

Take Justin Bieber, for example.

As reports of the once-angelic and deeply troubled Canadian pop star’s arrest began to make their way around the web, reactions streamed onto Twitter, ranging from jokes to tongue clucks.

But by far, the most common refrain was something like this: “Why is this news??”

The simplest answer is that it wasn’t — at least not the most important news happening on that particular day. But Twitter isn’t really about the most important thing anymore — it stopped being about relevancy a long time ago. Twitter seems to have reached a turning point, a phase in which its contributors have stopped trying to make the service as useful as possible for the crowd, and are instead trying to distinguish themselves from one another. It’s less about drifting down the stream, absorbing what you can while you float, and more about trying to make the flashiest raft to float on, gathering fans and accolades as you go.

How did this happen?

A theory: The psychology of crowd dynamics may work differently on Twitter than it does on other social networks and systems. As a longtime user of the service with a sizable audience, I think the number of followers you have is often irrelevant. What does matter, however, is how many people notice you, either through retweets, favorites or the holy grail, a retweet by someone extremely well known, like a celebrity. That validation that your contribution is important, interesting or worthy is enough social proof to encourage repetition. Many times, that results in one-upmanship, straining to be the loudest or the most retweeted and referred to as the person who captured the splashiest event of the day in the pithiest way. (...)

It feels as if we’re all trying to be a cheeky guest on a late-night show, a reality show contestant or a toddler with a tiara on Twitter — delivering the performance of a lifetime, via a hot, rapid-fire string of commentary, GIFs or responses that help us stand out from the crowd. We’re sold on the idea that if we’re good enough, it could be our ticket to success, landing us a fleeting spot in a round-up on BuzzFeed or The Huffington Post, or at best, a writing gig. But more often than not, it translates to standing on a collective soapbox, elbowing each other for room, in the hopes of being credited with delivering the cleverest one-liner or reaction. Much of that ends in hilarity. Perhaps an equal amount ends in exhaustion.

by Jenna Wortham, NY Times | Read more:
Image: Joe Raedle/Getty Images

Out in the Great Alone


[ed. Missed this when it first came out, but with the Iditarod less than a couple months away it's a terrific read.]

“You’re not a pilot in Alaska,” Jay said, fixing me with a blue-eyed and somehow vaguely piratical stare, “until you’ve crashed an airplane. You go up in one of these stinkin’ tin cans in the Arctic? Sooner or later you’re gonna lose a motor, meet the wrong gust of wind, you name it. And OH BY THE WAY” (leaning in closer, stare magnifying in significance) “that doesn’t have to be the last word.” (...)

The plan was for me to spend a few nights in the apartment connected to the hangar — live with the planes, get the feel of them. I’d read that some Iditarod mushers slept with their dogs, to make themselves one with the pack. I needed flying lessons because the little Piper Super Cubs that would carry us to Nome were two-seaters, one in front, one behind. Jay wanted me prepared in case he had a fatal brain aneurysm (his words), or a heart attack (his words 10 seconds later), or keeled over of massive unspecified organ failure (“Hey, I’m gettin’ up there — but don’t worry!”) at 2,200 feet.

Choosing an airplane — that was the first step. Jay had four, and as the first ACTS client to arrive, I got first pick.

They were so small. Airplanes aren’t supposed to be so small. How can I tell you what it was like, standing there under the trillion-mile blue of the Alaska sky, ringed in by white mountains, resolving to take to the air in one of these winged lozenges? Each cockpit was exactly the size of a coffin. A desk fan could have blown the things off course. A desk fan on medium. Possibly without being plugged in.

“God love ’em,” Jay said. “Cubs are slower’n heck, they’ll get beat all to hell by the wind, and there’s not much under the hood. But bush pilots adore ’em, because you can mod ’em to death. And OH BY THE WAY … put ’em on skis and come winter, the suckers’ll land you anywhere.”

Two of the Cubs were painted bright yellow. I took an immediate liking to the one with longer windows in the back. Better visibility, I told myself, nodding. Jay said it had the smallest engine of any of the Cubs in our squadron. Less momentum when I go shearing into the treeline, I told myself, nodding.

The name painted in black on her yellow door read: NUGGET. She had a single propeller, which sat inquisitively on the end of her nose, like whiskers. Jay told me — I heard him as if from a great distance — that she’d had to be rebuilt not long ago, after being destroyed on a previous trip north. Was I hearing things, or did he say destroyed by polar bears?

I patted Nugget’s side. Her fuselage was made of stretched fabric. It flexed like a beach ball, disconcertingly.

Into the cockpit. Flight helmet strapped, restraints active. Mic check. Then Jay’s voice in my headset: “Are you ready!” It wasn’t exactly a question. (...)

We’d done some practice turns and picked out a lake; now all I had to do was get the plane on it. Jay explained to me about landing on snow, how the scatter of light tends to mask the true height of the ground. You can go kamikaze into the ice, thinking the earth is still 30 feet below you. To gauge your real altitude re: the whiteout, you have to use “references” — sticks poking through the snow, a line of trees on the bank. These supply you with vital cues, like “might want to ease down a touch” or “gracious, I’m about to fireball.”

I won’t bore you with the details of how to steer a Super Cub — where the stick was (imagine the porniest position possible; now go 6 inches pornier than that), how to bank, what the rudder pedals felt like. Suffice it to say that in theory, it was ridiculously simple. In practice …

“You have the aircraft.” Jay’s voice in my ear. “Just bring us down in a nice straight line.”

I felt the weight in my right hand as Jay released the stick. The lake was straight ahead, maybe three miles off, a white thumbnail in an evergreen-spammed distance. The plane was under my control.

Nugget — I’m not sure how to put this — began to sashay.

“Just a niiice straight line,” Jay reminded me. “And OH BY THE WAY … your pilot’s dead.” He slumped over in his seat.

Little lesson I picked up someplace: Once your pilot gives up the ghost, it is not so easy to see where you are headed from the backseat of a Super Cub. I mean at the “what direction is the plane even pointing right now” level. You will find that your deceased pilot, looming up against the windshield, blocks almost your entire forward view. To mitigate this, the savvy backseater will bank the wings one way while stepping on the opposite rudder pedal, causing the plane to twist 30 degrees or so to one side while continuing to travel in a straight line, like a runner sliding into base. That way, said enterprising backseater can see forward through the plane’s presumably non-corpse-occluded side window.

Yeah. Well. A thing about me as a pilot is that I do not, ever, want to see forward out of the side window. Especially not while plummeting toward a frozen lake. It’s like, bro, why create the hurricane. I figured that, as an alternative technique, I would just basically try to guess where we were going.

“How’s your speed?” my pilot(’s lifeless form) inquired.

The ground seemed to be making an actual screaming noise as it rushed up toward us. Hmm — maybe a little fast. I cut the throttle. Nugget kind of heaved and started falling at a different angle; more “straight down,” as the aeronautics manuals say. We were out over the lake. I had a sense of measureless whiteness lethally spread out below me. Either the landscape was baffled or I was. There were trees on the bank, but we were dropping too fast; I couldn’t relate them to anything. My references had gone sideways. At the last moment I pulled back on the stick.

There was a chiropractic skrrrk of skis entering snow. There was, simultaneously, a feeling of force transmitting itself upward into the plane. Nugget bounced, like a skipped stone, off the ice. We were tossed up and forward, maybe 15 feet into the air …

… and came down again, bounced again, came down again, and, unbelievably, slid to a stop.

“Guess what,” the reanimated form of my pilot said, popping up. “You just landed an airplane.” (...)

Anchorage, Alaska’s one real city. Fairbanks is a town, Juneau is an admin building with ideas. Anchorage is Tulsa, only poured into a little hollow in a celestially beautiful mountain range on the outer rim of the world.

When you’re there, it truly feels like you’re at the end of something. Like a last outpost. You’re in a coffee shop, you ordered cappuccino, you can see white mountains from the window, and on the other side of the mountains is wilderness that hasn’t changed since 1492.

That’s an exaggeration, but not as much of one as you might think.

by Brian Phillips, Grantland |  Read more:
Images: Brian Phillips and Jeff Schultz/AlaskaStock

Friday, January 24, 2014


[ed. Chinese New Year Jan. 31, 2014 - Year of the Horse]
via:

Diamonds Are Forever

Diamonds are supposed to be a girl's best friend. Now, they might also be her mother, father or grandmother.

Swiss company Algordanza takes cremated human remains and — under high heat and pressure that mimic conditions deep within the Earth — compresses them into diamonds.

Rinaldo Willy, the company's founder and CEO, says he came up with the idea a decade ago. Since then, his customer base has expanded to 24 countries.

Each year, the remains of between 800 and 900 people enter the facility. About three months later, they exit as diamonds, to be kept in a box or turned into jewelry.

Most of the stones come out blue, Willy says, because the human body contains trace amounts of boron, an element that may be involved in bone formation. Occasionally, though, a diamond pops out white, yellow or close to black – Willy's not sure why. Regardless, he says, "every diamond from each person is slightly different. It's always a unique diamond."

Most of the orders Algordanza receives come from relatives of the recently deceased, though some people make arrangements for themselves to become diamonds once they've died. Willy says about 25 percent of his customers are from Japan.

At between $5,000 and $22,000, the process costs as much as some funerals. The process and machinery involved are about the same as in a lab that makes synthetic diamonds from other carbon materials.

by Rae Ellen Bichell, NPR | Read more:
Image: Rinaldo Willy/Algordanza

What Jobs Will the Robots Take?

In the 19th century, new manufacturing technology replaced what was then skilled labor. Somebody writing about the future of innovation then might have said skilled labor is doomed. In the second half of the 20th century, however, software technology took the place of median-salaried office work, which economists like David Autor have called the "hollowing out" of the middle-skilled workforce.

The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?

If you go back to the two graphs in this piece to locate the safest industries and jobs, they're dominated by managers, health-care workers, and a super-category that encompasses education, media, and community service. One conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for, other humans. In this light, automation doesn't make the world worse. Far from it: It creates new opportunities for human ingenuity.

But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.

by Derek Thompson, The Atlantic | Read more:
Image: Reuters

The Pleasure and Pain of Speed


How long is now? According to Google, not much less than 250 milliseconds. In 2008, the company presented a research report that examined ideal “latency” times for search results. It concluded “that a response time over 1 second may interrupt a user’s flow of thought.” The ideal latency for a search engine, said Google, was right at the quarter-second mark.

Which seems safe enough, because psychologists have long estimated that it takes us humans at least a quarter of a second to do much of anything. William James, wondering more than a century ago what is “the minimum amount of duration which we can distinctly feel,” had it pegged around 50 milliseconds. James cited the seminal work of Austrian physiologist Sigmund Exner, who observed that people shown sets of flashing sparks stopped being able to recognize them as distinct entities around 0.044 seconds. This “now” time increases as you go up the ladder of complexity.

To do more than barely register an image as a stimulus, to actually see something for what it is, the neuroscientist Christof Koch notes in The Quest for Consciousness, requires an average of a quarter of a second (when we are told what to look for, recognition drops to 150 milliseconds). Google’s target response time is just on Koch’s cusp of perceivable consciousness. From there, went the implication of Google’s report, lay a sloping temporal despond: More slow, less happy.

A quarter of a second, then, is a biological bright line limiting the speed at which we can experience life. And the life that we are creating for ourselves, with the help of technology, is rushing towards that line. The German sociologist Hartmut Rosa catalogues the increases in speed in his recent book, Social Acceleration: A New Theory of Modernity. In absolute terms, the speed of human movement from the pre-modern period to now has increased by a factor of 100. The speed of communications (a revolution, Rosa points out, that came on the heels of transport) rose by a factor of 10 million in the 20th century. Data transmission has soared by a factor of around 10 billion.

As life has sped up, we humans have not, at least in our core functioning: Your reaction to stimuli is no faster than your great-grandfather’s. What is changing is the amount of things the world can bring to us, in our perpetual now. But is our ever-quickening life an uncontrolled juggernaut, driven by a self-reinforcing cycle of commerce and innovation, and forcing us to cope with a new social and psychological condition? Or is it, instead, a reflection of our intrinsic desire for speed, a transformation of the external world into the rapid-fire stream of events that is closest to the way our consciousness perceives reality to begin with? (...)

Referring to the theorist Walter Benjamin, Rosa argues that the greater the number of “lived events per unit of time,” the less likely it is these are to transform into “experiences.” Benjamin argued that we tried to capture these moments with physical souvenirs, including photographs, which could later be accessed in an attempt to reinvoke memories. Of course, this process has accelerated, and the physical souvenir is now as quaint as the physical photograph. In Instagram, we have even developed a kind of souvenir of the present: An endless photography of moments suggests that we do not trust that they will actually become moments, as if we were photographing not to know that the event happened, but that it is happening.

by Tom Vanderbilt, Nautilus |  Read more:
Image: Chad Hagen

Thursday, January 23, 2014


Baloji (DR Congo and Belgium) @ globalFEST_NYC 2014
via:

How a Math Genius Hacked OkCupid to Find True Love


Chris McKinlay was folded into a cramped fifth-floor cubicle in UCLA’s math sciences building, lit by a single bulb and the glow from his monitor. It was 3 in the morning, the optimal time to squeeze cycles out of the supercomputer in Colorado that he was using for his PhD dissertation. (The subject: large-scale data processing and parallel numerical methods.) While the computer chugged, he clicked open a second window to check his OkCupid inbox.

McKinlay, a lanky 35-year-old with tousled hair, was one of about 40 million Americans looking for romance through websites like Match.com, J-Date, and e-Harmony, and he’d been searching in vain since his last breakup nine months earlier. He’d sent dozens of cutesy introductory messages to women touted as potential matches by OkCupid’s algorithms. Most were ignored; he’d gone on a total of six first dates.

On that early morning in June 2012, his compiler crunching out machine code in one window, his forlorn dating profile sitting idle in the other, it dawned on him that he was doing it wrong. He’d been approaching online matchmaking like any other user. Instead, he realized, he should be dating like a mathematician.

OkCupid was founded by Harvard math majors in 2004, and it first caught daters’ attention because of its computational approach to matchmaking. Members answer droves of multiple-choice survey questions on everything from politics, religion, and family to love, sex, and smartphones.

On average, respondents select 350 questions from a pool of thousands—“Which of the following is most likely to draw you to a movie?” or “How important is religion/God in your life?” For each, the user records an answer, specifies which responses they’d find acceptable in a mate, and rates how important the question is to them on a five-point scale from “irrelevant” to “mandatory.” OkCupid’s matching engine uses that data to calculate a couple’s compatibility. The closer to 100 percent—mathematical soul mate—the better.
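
To make that importance-weighted, two-way scoring concrete, here is a minimal Python sketch of an engine in that spirit. The point values assigned to the five importance levels and the geometric-mean combination of the two directions are illustrative assumptions, not OkCupid's disclosed internals.

# A minimal sketch of an importance-weighted, two-way match score in the
# spirit of the engine described above. The point values per importance level
# and the geometric-mean combination are illustrative assumptions.
from math import sqrt

IMPORTANCE_POINTS = {          # hypothetical weights for the five-point scale
    "irrelevant": 0,
    "a little important": 1,
    "somewhat important": 10,
    "very important": 50,
    "mandatory": 250,
}

def satisfaction(asker, answerer, common_questions):
    """Fraction of asker's importance-weighted preferences that answerer meets."""
    earned = possible = 0
    for q in common_questions:
        weight = IMPORTANCE_POINTS[asker[q]["importance"]]
        possible += weight
        if answerer[q]["answer"] in asker[q]["acceptable"]:
            earned += weight
    return earned / possible if possible else 0.0

def match_percent(a, b):
    """Combine both directions symmetrically; 100 is the 'mathematical soul mate'."""
    common = set(a) & set(b)                 # only questions both users answered
    if not common:
        return 0.0
    return 100 * sqrt(satisfaction(a, b, common) * satisfaction(b, a, common))

# Toy example: a single shared question.
alice = {"q1": {"answer": "yes", "acceptable": {"yes"}, "importance": "very important"}}
bob   = {"q1": {"answer": "yes", "acceptable": {"yes", "no"}, "importance": "a little important"}}
print(f"{match_percent(alice, bob):.0f}%")   # 100%

In the toy example each person fully satisfies the other's stated preference on the one shared question, so the pair scores 100 percent; an unacceptable answer on a heavily weighted question would drag the score down fast.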

But mathematically, McKinlay’s compatibility with women in Los Angeles was abysmal. OkCupid’s algorithms use only the questions that both potential matches decide to answer, and the match questions McKinlay had chosen—more or less at random—had proven unpopular. When he scrolled through his matches, fewer than 100 women would appear above the 90 percent compatibility mark. And that was in a city containing some 2 million women (approximately 80,000 of them on OkCupid). On a site where compatibility equals visibility, he was practically a ghost.

He realized he’d have to boost that number. If, through statistical sampling, McKinlay could ascertain which questions mattered to the kind of women he liked, he could construct a new profile that honestly answered those questions and ignored the rest. He could match every woman in LA who might be right for him, and none that weren’t.
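
The sampling idea in that paragraph can be sketched in a few lines of Python: tally how much a sample of target profiles says each question matters, rank the questions by total weight, and answer only the top of the list, honestly. This is a deliberately simplified illustration using the same assumed weights as the sketch above; the real project involved far more data and computation than this.

# Deliberately simplified sketch of the sampling idea above: rank questions by
# their total stated importance across a sample of target profiles, then answer
# only the top-ranked ones. Weights reuse the assumed scale from the previous
# sketch; this is an illustration, not McKinlay's actual pipeline.
from collections import Counter

POINTS = {"irrelevant": 0, "a little important": 1, "somewhat important": 10,
          "very important": 50, "mandatory": 250}

def rank_questions(sampled_profiles, top_n=350):
    """Score each question by its total stated importance across the sample."""
    totals = Counter()
    for profile in sampled_profiles:
        for question, importance in profile.items():
            totals[question] += POINTS[importance]
    return [q for q, _ in totals.most_common(top_n)]

# Toy sample of three profiles (question id -> stated importance).
sample = [
    {"pets": "very important", "smoking": "mandatory", "horror_films": "irrelevant"},
    {"pets": "somewhat important", "smoking": "mandatory"},
    {"horror_films": "a little important", "smoking": "very important"},
]
print(rank_questions(sample, top_n=2))   # ['smoking', 'pets']

Here the toy sample puts "smoking" and "pets" at the top, so those are the questions a new profile would bother answering at all.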

by Kevin Poulsen, Wired |  Read more:
Image: Mauricio Alejo

How Do Physicians Want to Die?

When you ask people how they’d like to die, most will say that they want to die quickly, painlessly, and peacefully… preferably in their sleep.

But, if you ask them whether they would want various types of interventions, were they on the cusp of death and already living a low quality of life, they typically say “yes,” “yes,” and “can I have some more please.” Blood transfusions, feeding tubes, invasive testing, chemotherapy, dialysis, ventilation, and chest-pumping CPR. Most people say “yes.”

But not physicians. Doctors, it turns out, overwhelmingly say “no.” The graph below shows the answers that physicians give when asked if they would want various interventions at the bitter end. The only intervention that doctors overwhelmingly want is pain medication. In no other case do even 20% of the physicians say “yes.”


What explains the difference between physician and non-physician responses to these types of questions? USC professor and family medicine doctor Ken Murray gives us a couple of clues.

by Lisa Wade, Sociological Images |  Read more: