Friday, June 8, 2012

The New Neuroscience of Choking

Last Sunday, at the Memorial golf tournament in Dublin, Ohio, Rickie Fowler looked like the man to beat. He entered the tournament with momentum: Fowler had recently gained his first ever P.G.A. tour victory, and he had finished in the top ten in his last four starts. On the first hole of the final round, Fowler sank a fourteen-foot birdie putt, placing him within two shots of the lead.

And that’s when things fell apart. Fowler pulled a shot on the second hole and never recovered. On the next hole, he hit his approach into a greenside bunker and ended up three-putting for a double bogey. He finished with an eighty-four, his worst round on the tour by five shots. Although he began the day in third place, he finished in a tie for fifty-second, sixteen shots behind the winner, Tiger Woods.

In short, Fowler choked. Like LeBron James—who keeps on missing free throws when the game is on the line—he seems to have been undone by the pressure of the situation. And choking isn’t just a hazard for athletes: the condition also afflicts opera singers and actors, hedge-fund traders and chess grandmasters. All of a sudden, just when these experts most need to perform, their expertise is lost. The grace of talent disappears.

As Malcolm Gladwell pointed out in his 2000 article on the psychology of choking, the phenomenon can seem like an amorphous category of failure. Nevertheless, choking is actually triggered by a specific mental mistake: thinking too much. The sequence of events typically goes like this: When people get anxious about performing, they naturally become particularly self-conscious; they begin scrutinizing actions that are best performed on autopilot. The expert golfer, for instance, begins contemplating the details of his swing, making sure that his elbows are tucked and his weight is properly shifted. This kind of deliberation can be lethal for a performer. (...)

Sian Beilock, a professor of psychology at the University of Chicago, has documented the choking process in her lab. She uses putting on the golf green as her experimental paradigm. Not surprisingly, Beilock has shown that novice putters hit better shots when they consciously reflect on their actions. By concentrating on their golf game, they can avoid beginner’s mistakes.

A little experience, however, changes everything. After golfers have learned how to putt—once they have memorized the necessary movements—analyzing the stroke is a dangerous waste of time. And this is why, when experienced golfers are forced to think about their swing mechanics, they shank the ball. “We bring expert golfers into our lab, and we tell them to pay attention to a particular part of their swing, and they just screw up,” Beilock says. “When you are at a high level, your skills become somewhat automated. You don’t need to pay attention to every step in what you’re doing.”

But this only raises questions: What triggers all of these extra thoughts? And why does it only happen to some athletes, performers, and students? Everyone gets nervous; not everyone chokes.

by Jonah Lehrer, The New Yorker |  Read more:
Photograph of LeBron James by Jim Rogash/Getty Images.

Thursday, June 7, 2012

Why Google Isn’t Making Us Stupid…or Smart

Last year The Economist published a special report not on the global financial crisis or the polarization of the American electorate, but on the era of big data. Article after article cited one big number after another to bolster the claim that we live in an age of information superabundance. The data are impressive: 300 billion emails, 200 million tweets, and 2.5 billion text messages course through our digital networks every day, and, if these numbers were not staggering enough, scientists are reportedly awash in even more information. This past January astronomers surveying the sky with the Sloan telescope in New Mexico released over 49.5 terabytes of information—a mass of images and measurements—in one data drop. The Large Hadron Collider at CERN (the European Organization for Nuclear Research), however, produces almost that much information per second. By last year, the world’s information base was estimated to be doubling every eleven hours. Just a decade ago, computer professionals spoke of kilobytes and megabytes. Today they talk of the terabyte, the petabyte, the exabyte, the zettabyte, and now the yottabyte, each a thousand times bigger than the last.

Some see this as information abundance, others as information overload. The advent of digital information and with it the era of big data allows geneticists to decode the human genome, humanists to search entire bodies of literature, and businesses to spot economic trends. But it is also creating for many the sense that we are being overwhelmed by information. How are we to manage it all? What are we to make, as Ann Blair asks, of a zettabyte of information—a one with 21 zeros after it?[1] From a more embodied, human perspective, these tremendous scales of information are rather meaningless. We do not experience information as pure data, be it a byte or a yottabyte, but as filtered and framed through the keyboards, screens, and touchpads of our digital technologies. However impressive these astronomical scales of information may be, our contemporary awe and increasing worry about all this data obscures the ways in which we actually engage it and the world of which it and we are a part. All of the chatter about information superabundance and overload tends not only to marginalize human persons, but also to render technology just as abstract as a yottabyte. An email is reduced to yet another data point, the Web to an infinite complex of protocols and machinery, Google to a neutral machine for producing information. Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies.

The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world.[2] In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.

Two Narratives

The history of this mutual constitution of humans and technology has been obscured as of late by the crystallization of two competing narratives about how we experience all of this information. On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. The digital world will become a “single liquid fabric of interconnected words and ideas,” a form of knowledge without distinctions or differences.[3] Unlike other technological innovations, like print, which was limited to the educated elite, the internet is a network of “densely interlinked Web pages, blogs, news articles and Tweets [that] are all visible to anyone and everyone.”[4] Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us.[5] These utopian claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological. The dream is of a post-human era.[6]

On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web’s hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care. The Web, according to some, is a deeply flawed medium that facilitates a less intensive, more superficial form of reading. When we read online, we browse, we scan, we skim. The superabundance of information, such critics charge, is changing not only our reading habits, but also the way we think. As Nicholas Carr puts it, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.”[7] The constant distractions of the internet—think of all those hyperlinks and new message warnings that flash up on the screen—are degrading our ability “to pay sustained attention,” to read in depth, to reflect, to remember. For Carr and many others like him, true knowledge is deep, and its depth is proportional to the intensity of our attentiveness. In our digital world that encourages quantity over quality, Google is making us stupid.

Each of these narratives points to real changes in how technology impacts humans. Both the scale and the acceleration of information production and dissemination in our digital age are unique. Google, like every technology before it, may well be part of broader changes in the ways we think and experience the world. Both narratives, however, make two basic mistakes.

by Chad Wellmon, The Hedgehog Review |  Read more:

The Curious Case of Internet Privacy


Here's a story you've heard about the Internet: we trade our privacy for services. The idea is that your private information is less valuable to you than it is to the firms that siphon it out of your browser as you navigate the Web. They know what to do with it to turn it into value—for them and for you. This story has taken on mythic proportions, and no wonder, since it has billions of dollars riding on it.

But if it's a bargain, it's a curious, one-sided arrangement. To understand the kind of deal you make with your privacy a hundred times a day, please read and agree with the following:
By reading this agreement, you give Technology Review and its partners the unlimited right to intercept and examine your reading choices from this day forward, to sell the insights gleaned thereby, and to retain that information in perpetuity and supply it without limitation to any third party.
Actually, the text above is not exactly analogous to the terms on which we bargain with every mouse click. To really polish the analogy, I'd have to ask this magazine to hide that text in the margin of one of the back pages. And I'd have to end it with "This agreement is subject to change at any time." What we agree to participate in on the Internet isn't a negotiated trade; it's a smorgasbord, and intimate facts of your life (your location, your interests, your friends) are the buffet.

Why do we seem to value privacy so little? In part, it's because we are told to. Facebook has more than once overridden its users' privacy preferences, replacing them with new default settings. Facebook then responds to the inevitable public outcry by restoring something that's like the old system, except slightly less private. And it adds a few more lines to an inexplicably complex privacy dashboard.

Even if you read the fine print, human beings are awful at pricing out the net present value of a decision whose consequences are far in the future. No one would take up smoking if the tumors sprouted with the first puff. Most privacy disclosures don't put us in immediate physical or emotional distress either. But given a large population making a large number of disclosures, harm is inevitable. We've all heard the stories about people who've been fired because they set the wrong privacy flag on that post where they blew off on-the-job steam.

The risks increase as we disclose more, something that the design of our social media conditions us to do. When you start out your life in a new social network, you are rewarded with social reinforcement as your old friends pop up and congratulate you on arriving at the party. Subsequent disclosures generate further rewards, but not always. Some disclosures seem like bombshells to you ("I'm getting a divorce") but produce only virtual cricket chirps from your social network. And yet seemingly insignificant communications ("Does my butt look big in these jeans?") can produce a torrent of responses. Behavioral scientists have a name for this dynamic: "intermittent reinforcement." It's one of the most powerful behavioral training techniques we know about. Give a lab rat a lever that produces a food pellet on demand and he'll only press it when he's hungry. Give him a lever that produces food pellets at random intervals, and he'll keep pressing it forever.
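Doctorow doesn't run the numbers, but a toy calculation shows why a random schedule is so hard to walk away from. Here is a minimal sketch, under assumptions of my own (a one-in-ten payout and a one-percent evidence threshold, neither from the article): on a random schedule even a long dry streak stays plausible, so the pressing continues.

```python
# Illustrative only: the payout odds and the 1% threshold are assumptions,
# not figures from the article.

p = 0.10  # assumed chance that any single press pays out

# On a random schedule, how plausible is a long dry streak?
for streak in (5, 10, 20, 40):
    prob = (1 - p) ** streak  # odds of a streak that dry from luck alone
    print(f"{streak:>2} dry presses: still {prob:.1%} consistent with a working lever")

# How many consecutive dry presses before the streak becomes damning
# evidence (under 1% likely) that the rewards have stopped?
dry = 0
while (1 - p) ** dry >= 0.01:
    dry += 1
print(f"It takes {dry} straight dry presses to conclude the pellets are gone.")
```

On a pellet-every-press schedule, by contrast, the very first dry press is conclusive, which is exactly the difference between the two lab rats.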

How does society get better at preserving privacy online? As Lawrence Lessig pointed out in his book Code and Other Laws of Cyberspace, there are four possible mechanisms: norms, law, code, and markets.

by Cory Doctorow, MIT Technology Review |  Read more:
Photo: Jonathan Worth | Creative Commons

Ray Bradbury (August 22, 1920 – June 5, 2012)

Martians, robots, dinosaurs, mummies, ghosts, time machines, rocket ships, carnival magicians, alarming doppelgängers who forecast murder and doom — the sort of sensational subjects that fascinate children are the stuff of Ray Bradbury’s fiction. Over a 70-year career, he used his fecund storytelling talents to fashion tales that have captivated legions of young people and inspired a host of imitators. His work informed the imagination of writers and filmmakers like Stephen King, Steven Spielberg and James Cameron, and helped transport science fiction out of the pulp magazine ghetto and into the mainstream.

Thanks to its lurid subject matter and its often easy-to-decipher morals, Mr. Bradbury’s work is often taught in middle school. He is often the first writer to awaken students to the enthralling possibilities of storytelling and the use of fantastical metaphors to describe everyday human life. His finest tales have become classics not only because of their accessibility but also because of their exuberant “Twilight Zone” inventiveness, their social resonance, their prescient vision of a dystopian future, which he dreamed up with astonishing ingenuity and flair. Not surprisingly, he had a magpie’s love of all sorts of literature — Poe, Shakespeare and Sherwood Anderson (whose “Winesburg, Ohio” reportedly inspired “The Martian Chronicles”) as well as H. G. Wells and L. Frank Baum — and borrowed devices and conventions from the classics and from various genres. “Something Wicked This Way Comes” would win acclaim as a groundbreaking work of horror and fantasy.

“Fahrenheit 451” (1953) — Mr. Bradbury’s famous novel-turned-movie about a futuristic world in which books are verboten — is at once a parable about McCarthyism and Stalinism, and a kind of fable about the perils of political correctness and the dangers of television and other technology. “The Martian Chronicles” (1950), a melancholy series of overlapping stories about the colonization of Mars, can be read as an allegory about the settling of the United States or seen as a mirror of postwar American life.

“A Sound of Thunder” (1952) — a short story about a time-traveler who journeys back to the dinosaur era and accidentally steps on a butterfly, thereby altering the course of world history — spawned many imitations, and in some respects anticipated the chaos theory concept of “the butterfly effect,” which suggests that one small change can lead to enormous changes later on. He also uncannily foresaw inventions like flat-screen TVs, Walkman-like devices and virtual reality.

by Michiko Kakutani, NY Times |  Read more:
Charley Gallay/Getty Images

Wednesday, June 6, 2012

The Lean Startup

[ed. Didn't this used to be called Vaporware?]

Scott Cook is conducting an experiment. “It’s the corporate-counseling version of speed dating,” says the spectacled cofounder of Intuit, the finance software giant. He’s gathered his troops in a brightly lit conference room, where members of four Intuit departments are seated in front of 300 colleagues—plus 1,500 more watching via webcast—to hash out some business predicaments. Each team will take five minutes to present its problem. Then special guest Eric Ries will come up with a solution.

Arun Muthukumaran, a group manager of Intuit Payment Solutions, kicks off the proceedings. He describes a feature that could dramatically increase the number of small businesses that sign up for the company’s payment services. But implementation would burn up 20 employees’ time for a month. What if customers don’t bite?

Ries, dressed casually in a blazer, pastel shirt, and black denim, suggests a test: Rather than building the service and trying it out on customers, create a sign-up page that merely promises to deliver this groundbreaking capability. Then present it to some prospective clients. Compare their enrollment rate with that of a control group shown the usual sign-up page. The results will give the team the confidence either to proceed or toss the idea into the circular file. No one would actually get the new feature yet, of course, because it hasn’t been built.

“I guess we could piss off a few customers instead of thousands,” Muthukumaran says. Laughter ripples through the crowd.

Ries glances at his watch. “It’s 4:18 pm on Monday,” he says with a puckish grin. “On Wednesday at 4:18 pm, I expect an email telling me how it went.” The team members exchange glances that are equal parts bemusement and worry: They make software, not concepts. They build code through painstaking cycles of design, programming, and testing. Customers depend on their products and trust their brand. And this guy expects them to offer a feature that doesn’t even exist? Nevertheless, the rest of Intuit’s employees are exhilarated. The room breaks into fervent applause.

It’s something Ries is getting used to. At age 33, he is Silicon Valley’s latest guru. In the four years since he first posted his theories about running startups on an anonymous blog, his campaign to replace the typical product development approach—build it and they will come—with a system based on experimentation has become a juggernaut. Ries’ book The Lean Startup, published last summer, has sold 90,000 copies in the US. His blog, Startup Lessons Learned, has 75,000 subscribers, and his annual conference attracts 400 entrepreneurs, each paying more than $500. Harvard Business School has incorporated his ideas into its entrepreneurship curriculum, and an army of followers are propagating his principles through their own books, events, and apps. Whiz kids looking for investors pepper their PowerPoint decks with Lean Startup lingo, which has become so pervasive that TechCrunch announced a ban on Ries’ term pivot. Tech darlings like Dropbox, Groupon, and Zappos serve as Lean Startup poster children, and now the philosophy is reaching established companies, including GE and, this afternoon, Intuit.

Back in the presentation hall, Ries walks his audience through the tenets of his philosophy. The core motivation is simple, and a single slide sums it up: “Stop wasting people’s time.” Entrepreneurs and their managers, minions, advisers, and investors routinely pour their lives into products nobody wants. The business landscape is littered with the wreckage of nascent companies built at monumental effort and expense that imploded on contact with the market. (Paging Webvan! 3DO! Iridium!) Unlike an established company, a startup (or a new division within an established company) doesn’t know who its customers are or what products they need. Its prime directive is to discover a sustainable business model before running out of funding.

The key to this discovery, Ries proposes, is the scientific method: the business equivalent of clinical trials. Assumptions must be tested rigorously, Ries says—and here he rolls out one of those increasingly ubiquitous Lean Startup phrases—on a minimum viable product, or MVP. This is a simplified offering that reveals how real customers, not cloistered focus groups, respond. It may be a functional product or, like the Intuit team’s sign-up page, a come-on designed to elicit a reaction. Once tallied, customer responses produce actionable metrics, as opposed to popular vanity metrics, which create the illusion of success but yield little useful information about what customers want. By repeatedly cycling or iterating through a build-measure-learn loop—a method Ries calls validated learning—the Lean Startup develops a verified perspective that enables it to identify and fine-tune the mechanism that will keep the company growing, aka its engine of growth. Or, failing that, it can pivot to a new strategy. This, Ries insists, is the quickest, most efficient route to product/market fit (a phrase adopted from Silicon Valley kingpin Marc Andreessen), defined as the moment when a product achieves resonance with customers.
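The metric behind the Intuit sign-up-page test is easy to make concrete. Below is a minimal sketch with entirely hypothetical numbers (nothing here comes from Intuit's actual experiment): count conversions on the control page and on the fake-feature page, then run a standard two-proportion z-test to judge whether the lift is real before committing those 20 employees for a month.

```python
from math import sqrt, erf

def conversion_lift(signups_a, visitors_a, signups_b, visitors_b):
    """Two-proportion z-test: is variant B's sign-up rate genuinely higher?"""
    pa, pb = signups_a / visitors_a, signups_b / visitors_b
    pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (pb - pa) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided upper tail
    return pa, pb, z, p_value

# Hypothetical traffic: 1,000 visitors see the usual page (A),
# 1,000 see the page promising the not-yet-built feature (B).
pa, pb, z, p = conversion_lift(80, 1000, 110, 1000)
print(f"control {pa:.1%} vs test {pb:.1%}  (z = {z:.2f}, one-sided p = {p:.3f})")
# A small p-value argues for building the feature; a flat result says pivot.
```

This is also the difference between Ries's actionable metrics and vanity metrics: the comparison is wired directly to a decision, build or don't.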

Never mind that this approach is a mashup of ideas culled from programming, marketing, manufacturing, and business strategy, leavened with hard-won insights that have circulated among Silicon Valley veterans for years. Ries makes no effort to hide his sources, and his presentation preempts his critics’ complaints. “Lean,” he explains, does not mean cheap; it means eliminating waste by testing ideas first. And it doesn’t mean small, but rather that companies shouldn’t ramp up personnel and facilities until they’ve validated their business model. His philosophy is not just for Internet and app companies—that’s just where it started. Reacting to customer behavior is not incompatible with creating breakthrough products like the iPhone, Ries says, which in the popular imagination sprang fully formed from the mind of Steve Jobs.

Right or wrong, the Lean Startup has a kind of inexorable logic, and Ries’ recommendations come as a bracing slap in the face to would-be tech moguls: Test your ideas before you bet the bank on them. Don’t listen to what focus groups say; watch what your customers do. Start with a modest offering and build on the aspects of it that prove valuable. Expect to get it wrong, and stay flexible (and solvent) enough to try again and again until you get it right.

by Ted Greenwald, Wired |  Read more:
Photo: Eric Ogden

Will Barnet, Woman and Cats, 1962
via:

Curation and the Questions No One Is Asking

[ed. Curation seems to be a hot topic these days (see previous post: You Are Not a Curator). Here's a more nuanced perspective. I'm not invested in any particular term; I just think of it as aggregating, sharing, or the digital equivalent of a filing cabinet.]

It’s been three months since our last Internet debate about “curation,” so by all means, let’s have another one!

The latest argument began last week after a mysterious tweet seemed to finally produce hard evidence that curators do, in fact, think they are better than everyone else. I’ve never met a “curator” who believes this, and it’s the same straw man argument that is concocted every three months.

So let’s get this out of the way now: Curation only exists because this is an incredible time for creation. It all starts and ends with a writer, a photographer, a filmmaker, or a publisher who creates or funds that work. The rest of us are just looking for something to inspire us, and when we do, we want to share it with others. And in the end, we all want to find ways to support the financing of creators’ work.

Yet every three months we get angry about the word “curation”—Is it “twee”? Who do these people think they are? Why don’t they get real jobs? Why are we so angry at people who are out there doing this for free?—but once again, we fail to ask any of the most pressing questions about curation in the Twitter and Facebook era.

Here are those questions, in order:

1. Is curation actually valuable, and do we have proof that it is, or is not?

A few successful curators, as I would define them, on Twitter include: Paul Kedrosky (@pkedrosky, 213,000+ followers), Anthony De Rosa (@antderosa, 30,000+ followers), Matthew Keys (@producermatthew, 11,000+ followers), Maria Popova (@brainpicker, 180,000+ followers), Heidi Moore (@moorehn, 18,000+ followers), Danyel Smith (@danamo, 20,000+ followers), Kevin Smokler (@weegee, 65,000 followers), and Jodi Ettenberg (a contributing editor for Longreads and Travelreads, whose @legalnomads has 14,000 followers).

You can argue about their respective tastes and whether you’re into what they’re slinging, but based on their follower counts, it’s tough to argue that what they do isn’t valuable to their audiences. When they link to a story, in most cases publishers will see a bump in new visitors. If you’re a publisher, you might just see “Twitter.com” in your Google Analytics referrals, but these are actual people, and their recommendations mean something to their followers.

To break it down further: for many curators, the work is valuable because their followers trust them to make objective, worthwhile recommendations, and because they do so consistently. They offer a reliable service.

Consistency is the defining trait that seems to separate “professional” curation and linkblogging from the occasional “oh hey look at this.” The web is a customer-service medium, and curation is just one of those services.

It doesn’t matter whether you believe the act of curation requires no more talent than managing the Employee Picks shelf at Barnes & Noble, or working the graveyard shift at your college radio station. People appreciate it if you save them a little time and point them to interesting work that might not show up in a “most popular” algorithm.

by Mark Armstrong | Read more:

Open Culture: 500 Free Movies Online

Where to watch free movies online? Let’s get you started. We have listed here 500+ quality films that you can watch online. The collection is divided into the following categories: Comedy & Drama; Film Noir, Horror & Hitchcock; Westerns & John Wayne; Silent Films; Documentaries; and Animation.

500 Free Movies

For example: Sid and Nancy

via:

Moral Taste Buds

Why working-class people vote conservative

Why on Earth would a working-class person ever vote for a conservative candidate? This question has obsessed the American left since Ronald Reagan first captured the votes of so many union members, farmers, urban Catholics and other relatively powerless people – the so-called "Reagan Democrats". Isn't the Republican party the party of big business? Don't the Democrats stand up for the little guy, and try to redistribute the wealth downwards?

Many commentators on the left have embraced some version of the duping hypothesis: the Republican party dupes people into voting against their economic interests by triggering outrage on cultural issues. "Vote for us and we'll protect the American flag!" say the Republicans. "We'll make English the official language of the United States! And most importantly, we'll prevent gay people from threatening your marriage when they … marry! Along the way we'll cut taxes on the rich, cut benefits for the poor, and allow industries to dump their waste into your drinking water, but never mind that. Only we can protect you from gay, Spanish-speaking flag-burners!"

One of the most robust findings in social psychology is that people find ways to believe whatever they want to believe. And the left really want to believe the duping hypothesis. It absolves them from blame and protects them from the need to look in the mirror or figure out what they stand for in the 21st century.

Here's a more painful but ultimately constructive diagnosis, from the point of view of moral psychology: politics at the national level is more like religion than it is like shopping. It's more about a moral vision that unifies a nation and calls it to greatness than it is about self-interest or specific policies. In most countries, the right tends to see that more clearly than the left. In America the Republicans did the hard work of drafting their moral vision in the 1970s, and Ronald Reagan was their eloquent spokesman. Patriotism, social order, strong families, personal responsibility (not government safety nets) and free enterprise. Those are values, not government programmes.

The Democrats, in contrast, have tried to win voters' hearts by promising to protect or expand programmes for elderly people, young people, students, poor people and the middle class. Vote for us and we'll use government to take care of everyone! But most Americans don't want to live in a nation based primarily on caring. That's what families are for.

One reason the left has such difficulty forging a lasting connection with voters is that the right has a built-in advantage – conservatives have a broader moral palate than the liberals (as we call leftists in the US). Think about it this way: our tongues have taste buds that are responsive to five classes of chemicals, which we perceive as sweet, sour, salty, bitter, and savoury. Sweetness is generally the most appealing of the five tastes, but when it comes to a serious meal, most people want more than that.

In the same way, you can think of the moral mind as being like a tongue that is sensitive to a variety of moral flavours. In my research with colleagues at YourMorals.org, we have identified six moral concerns as the best candidates for being the innate "taste buds" of the moral sense: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation. Across many kinds of surveys, in the UK as well as in the USA, we find that people who self-identify as being on the left score higher on questions about care/harm. For example, how much would someone have to pay you to kick a dog in the head? Nobody wants to do this, but liberals say they would require more money than conservatives to cause harm to an innocent creature.

But on matters relating to group loyalty, respect for authority and sanctity (treating things as sacred and untouchable, not only in the context of religion), it sometimes seems that liberals lack the moral taste buds, or at least, their moral "cuisine" makes less use of them. For example, according to our data, if you want to hire someone to criticise your nation on a radio show in another nation (loyalty), give the finger to their boss (authority), or sign a piece of paper stating their willingness to sell their soul (sanctity), you can save a lot of money by posting a sign: "Conservatives need not apply."

by Jonathan Haidt, The Guardian |  Read more:
Photograph: Michael Reynolds/EPA/Corbis

Making A Flamenco Guitar



Usually takes 299 hours. Done here in three minutes
via:

Tuesday, June 5, 2012


Yang Yanping (Chinese, b. 1934)
The deep autumn
Ink and color
via:

"Don't Eat Fortune's Cookie"

My case illustrates how success is always rationalized. People really don’t like to hear success explained away as luck — especially successful people. As they age, and succeed, people feel their success was somehow inevitable. They don't want to acknowledge the role played by accident in their lives. There is a reason for this: the world does not want to acknowledge it either.

I wrote a book about this, called "Moneyball." It was ostensibly about baseball but was in fact about something else. There are poor teams and rich teams in professional baseball, and they spend radically different sums of money on their players. When I wrote my book the richest team in professional baseball, the New York Yankees, was then spending about $120 million on its 25 players. The poorest team, the Oakland A's, was spending about $30 million. And yet the Oakland team was winning as many games as the Yankees — and more than all the other richer teams.

This isn't supposed to happen. In theory, the rich teams should buy the best players and win all the time. But the Oakland team had figured something out: the rich teams didn't really understand who the best baseball players were. The players were misvalued. And the biggest single reason they were misvalued was that the experts did not pay sufficient attention to the role of luck in baseball success. Players were given credit for things they did that depended on the performance of others: pitchers got paid for winning games, hitters got paid for knocking in runners on base. Players were blamed and credited for events beyond their control: where balls that got hit happened to land on the field, for example.

Forget baseball, forget sports. Here you had these corporate employees, paid millions of dollars a year. They were doing exactly the same job that people in their business had been doing forever. In front of millions of people, who evaluate their every move. They had statistics attached to everything they did. And yet they were misvalued — because the wider world was blind to their luck.

This had been going on for a century. Right under all of our noses. And no one noticed — until it paid a poor team so well to notice that they could not afford not to notice. And you have to ask: if a professional athlete paid millions of dollars can be misvalued who can't be? If the supposedly pure meritocracy of professional sports can't distinguish between lucky and good, who can?

The "Moneyball" story has practical implications. If you use better data, you can find better values; there are always market inefficiencies to exploit, and so on. But it has a broader and less practical message: don't be deceived by life's outcomes. Life's outcomes, while not entirely random, have a huge amount of luck baked into them. Above all, recognize that if you have had success, you have also had luck — and with luck comes obligation. You owe a debt, and not just to your Gods. You owe a debt to the unlucky.

I make this point because — along with this speech — it is something that will be easy for you to forget.

I now live in Berkeley, California. A few years ago, just a few blocks from my home, a pair of researchers in the Cal psychology department staged an experiment. They began by grabbing students as lab rats. Then they broke the students into teams, segregated by sex. Three men, or three women, per team. Then they put these teams of three into a room, and arbitrarily assigned one of the three to act as leader. Then they gave them some complicated moral problem to solve: say what should be done about academic cheating, or how to regulate drinking on campus.

Exactly 30 minutes into the problem-solving the researchers interrupted each group. They entered the room bearing a plate of cookies. Four cookies. The team consisted of three people, but there were these four cookies. Every team member obviously got one cookie, but that left a fourth cookie, just sitting there. It should have been awkward. But it wasn't. With incredible consistency the person arbitrarily appointed leader of the group grabbed the fourth cookie, and ate it. Not only ate it, but ate it with gusto: lips smacking, mouth open, drool at the corners of his mouth. In the end all that was left of the extra cookie were crumbs on the leader's shirt.

This leader had performed no special task. He had no special virtue. He'd been chosen at random, 30 minutes earlier. His status was nothing but luck. But it still left him with the sense that the cookie should be his.

This experiment helps to explain Wall Street bonuses and CEO pay, and I'm sure lots of other human behavior. But it also is relevant to new graduates of Princeton University. In a general sort of way you have been appointed the leader of the group. Your appointment may not be entirely arbitrary. But you must sense its arbitrary aspect: you are the lucky few. Lucky in your parents, lucky in your country, lucky that a place like Princeton exists that can take in lucky people, introduce them to other lucky people, and increase their chances of becoming even luckier. Lucky that you live in the richest society the world has ever seen, in a time when no one actually expects you to sacrifice your interests to anything.

All of you have been faced with the extra cookie. All of you will be faced with many more of them. In time you will find it easy to assume that you deserve the extra cookie. For all I know, you may. But you'll be happier, and the world will be better off, if you at least pretend that you don't.

Never forget: In the nation's service. In the service of all nations.

by Michael Lewis, 2012 Baccalaureate Remarks, Princeton University |  Read more:

What Cool Things Can I Do with All This Free Cloud Storage Space?

Dear Lifehacker,
Anytime I see an offer for free cloud storage, I'm all over it. I have over 8GB of Dropbox space, 5GB on Google Drive, 20GB on Amazon Cloud Drive, 50GB on Box, and 7GB on Microsoft's SkyDrive—and I want to take advantage of all of it. Any suggestions?

Thanks,
Drowning in Free Space

Dear Drowning,
We hear you! With all the cloud services handing out free space like it's candy, it's easy to end up with a lot of unused space just waiting to be filled. Unfortunately, there's no way to consolidate all that storage space spread out across your accounts (though you can use services like previously mentioned Otixo and Primadesk to see all your online drives at once). One way to make use of all of these services without too much confusion is to separate the types of files you store across services, and in fact, you can do so in a way that takes advantage of the strengths of each.

For example, you can dedicate Dropbox to your active projects, because it's the syncing service where you have the most storage space. Use other services for backing up your photos, music, and other data.
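To make that concrete, here's a small Python sketch of the division of labor described above: it files whatever lands in your downloads folder into the local sync folder of the service best suited to it. Every path and extension mapping below is an assumption for illustration; point them at wherever your own sync clients actually live.

```python
import shutil
from pathlib import Path

# Assumed sync-folder locations -- adjust for your own setup.
DESTINATIONS = {
    ".mp3": Path.home() / "Amazon Cloud Drive",    # music
    ".jpg": Path.home() / "SkyDrive" / "Photos",   # photos
    ".docx": Path.home() / "Dropbox" / "Projects", # active work
    ".pdf": Path.home() / "Box" / "Archive",       # long-term backup
}

def sort_into_clouds(inbox: Path) -> None:
    """Move each file in `inbox` into the sync folder mapped to its type."""
    for item in inbox.iterdir():
        dest = DESTINATIONS.get(item.suffix.lower())
        if item.is_file() and dest:
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(dest / item.name))

sort_into_clouds(Path.home() / "Downloads")
```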

These services all have unique strengths that can help you decide what to use them for. You don't need to use every single one of these services, but if you want to mix and match, here's an overview of what they're best for:

Best Uses for Different Cloud Services

Sync Your Music with Amazon Cloud Drive or Google Play Music

Neither Amazon Cloud Drive nor Google Play Music syncs your files, so they're not useful for storing stuff that needs to always be up-to-date. They are, however, ideal for your music files.

If you buy your MP3s from Amazon, they're automatically stored to your Amazon Cloud Drive and don't count against your storage space. Even better, if you're on a paid plan (starting at $20/year for 20GB), you get unlimited storage space for all music, regardless of where you bought it. Amazon can stream your music on the web and on Android and iOS devices.

Google Play Music now incorporates the former Google Music service into Google's Play marketplace to store your songs—and books—online and stream them on the web and your Android phone. Play's limit for music is 20,000 songs, rather than a set amount of space in gigabytes. (You get unlimited space for ebooks and can use Play to rent movies but not store them in the cloud). Plus, Adam Pash's Music Plus Chrome extension makes Play Music even more awesome.

Learn more about the differences between Google Play Music and Amazon Cloud Drive in our cloud music comparison, which also includes iCloud. It's also worth noting that SugarSync can stream a folder of music to iOS and Android, and gives you 5GB of free space.

by Melanie Pinola, Lifehacker | Read more:

Bird of Paradise
via:

One Too Many, Gary Bunt (English, b. 1957).
via: