Friday, June 8, 2012
The Lonely Polygamist
Meet Bill. He has four wives and thirty-one kids. And something's missing.
Polygamy is not something you try on a whim. You don't come home from work one day, pop open a beer, settle down for your nightly dose of Seinfeld reruns, and think, "Boy, my marriage is a bore. Maybe I should give polygamy a whirl." It's true that polygamy, as a concept, sounds downright inviting. Yes, there are lots of women involved, women of all shapes and sizes and personalities, a wonderful variety of women, and yes, they'll fulfill your every need, cook your dinner, do your laundry, sew the buttons on your shirts. And yes, you're allowed to sleep with these women, each of them, one for every night of the week if you want, and what's more, when you wake up in the morning, you won't have to deal with even the tiniest twinge of guilt, because these women, all of them, are your sweethearts, your soul mates, your wives.
Then what, you're asking yourself, could possibly be the problem?
The problem is this: Polygamy is not what you think it is. It has nothing to do with the little fantasy just spelled out for you. A life of polygamy is not a joyride, a guiltless sexual free-for-all. Being a polygamist is not for the easygoing or the weak of heart. It's like marine boot camp or working for the mob; if you're not cut out for it, if you don't have that essential thing inside, it will eat you alive. And polygamy doesn't just require simple cojones, either. It requires the devotion of a monk, the diplomatic prowess of Winston Churchill, the doggedness of a field general, the patience of a pine tree.
Put simply: You'd have to be crazy to want to be a polygamist.
That's what's so strange about Bill. Bill has four wives and thirty-one children. Bill is an ex-Mormon, and he doesn't seem crazy at all. If anything, he seems exceptionally sane, painfully regular, as normal as soup. He's certainly not the wild-eyed, woolly-bearded zealot you might expect. Approaching middle age, Bill has the unassuming air of an accountant. He wears white shirts, blue ties, and black wing tips. He is Joe Blow incarnate. The only thing exceptional about Bill is his height: He is six foot eight and prone to hitting his head on hanging lamps and potted plants.
Bill's wives are not who you'd expect, either. They're not ruddy-faced women with high collars buttoned up to their chins. These are the women you see every day of your life. They wear jeans and T-shirts; they drive minivans; they have jobs. Julia is a legal secretary; Emily manages part of Bill's business; Susan owns a couple of health-food stores; and Stacy stays at home with the younger children. They are also tall, all of them around six feet; if you didn't know better, you'd think Bill and his wives had a secret plan to create a race of giants.
Each of Bill's wives lives in a different house in the suburbs around Salt Lake City. They've lived in different configurations over the years--all in one place, two in one and two in another--but this is the way that seems best nowadays, since there are teenagers in the mix, and one thing everybody seems to agree on is how much teenagers need their space. Bill himself is homeless. He wanders from house to house like a nomad or a beggar, sometimes surprising a certain wife with the suddenness of his presence. In the past he used a rigid rotation schedule, but now he opts for a looser approach. He believes that intuition and nothing else should guide where he stays for the night.
Okay, now: Put yourself in Bill's size-14 wing tips for a minute. You've just finished an exhausting day at work. It's that time of the evening when you think to yourself, "Hmmm. Which house am I going to tonight?" You get in your car and head off toward Emily's house; you haven't seen Emily for several days, and besides, she's having trouble with one of your teenage daughters--she's not sticking to her curfew. But you remember that your son Walt has a soccer game on the other side of town at 5:30. You start to turn around, but then you think of Susan, wife number two, who has come down with the flu and is in need of some comfort and company. Then it hits you that not only did you promise to look at the bad alternator in Stacy's Volvo tonight, not only did you tell Emily that you'd be home in time to meet with the insurance man to go over all your policies, but that Annie, your six-year-old daughter, is having a birthday tomorrow and you've yet to get her a present.
Sitting there at the intersection--cars honking, people flipping you the bird--do you feel paralyzed? Do you feel like merging with the rest of the traffic onto I-15 and heading for Las Vegas, leaving it all behind?
This is Bill's life.
by Brady Udall, Standard-Examiner (1998) | Read more:
Team of Mascots
Just four years ago, when it was clear that he would be the Democratic presidential nominee, Barack Obama famously declared that, if elected, he would want “a team of rivals” in his Cabinet, telling Joe Klein, of Time magazine, “I don’t want to have people who just agree with me. I want people who are continually pushing me out of my comfort zone.” His inspiration was Doris Kearns Goodwin’s best-selling book about Abraham Lincoln, who appointed three men who had been his chief competitors for the presidency in 1860—and who held him, at that point, in varying degrees of contempt—to help him keep the Union together during the Civil War.
To say that things haven’t worked out that way for Obama is the mildest understatement. “No! God, no!” one former senior Obama adviser told me when I asked if the president had lived up to this goal. There’s nothing sacred about the team-of-rivals idea—for one thing, it depends on who the rivals were. Obama does have one former rival, Hillary Clinton, in his Cabinet, and another, Joe Biden, is vice president.
Mitt Romney would have fewer options. Can anyone really imagine Romney making Rick Santorum his secretary of health and human services, or Herman Cain his commerce secretary, or Newt Gingrich the administrator of NASA? Well, maybe the last, if only so Romney could have the satisfaction of sending the former Speaker—bang! zoom!—to the moon! For the record, Gingrich has said he’d be unlikely to accept any position in a Romney administration, and Romney himself has given almost no real hints about whom he might appoint. In light of his propensity to bow to prevailing political pressures, his Cabinet might well be, as he described himself, “severely conservative.” But the way presidents use their Cabinets says a lot about their style of governing.
Richard Nixon created a deliberately weak Cabinet (he ignored his secretary of state William Rogers to the point of humiliation, in favor of his national-security adviser, Henry Kissinger), and he rewarded their loyalty by demanding all their resignations on the morning after his landslide re-election, in 1972. John F. Kennedy, having won a whisker-close election against Nixon, in 1960, wanted Republicans such as Douglas Dillon at Treasury and Robert McNamara at Defense to lend an air of bipartisan authority and competence. George W. Bush had a very powerful Cabinet, especially in the persons of Donald Rumsfeld, Robert Gates, and Condoleezza Rice, if only to compensate for his pronounced lack of experience in foreign policy and military affairs. (...)
The days when presidential Cabinets contained the likes of Thomas Jefferson as secretary of state, or Alexander Hamilton as secretary of the Treasury, are long since gone (and those early Cabinets displayed a fractiousness that no modern president would be likely to tolerate), though Cabinet officers retain symbols of office—from flags to drivers to, in some cases, chefs—befitting grander figures. The lingering public image of Cabinet meetings as the scene of important action is largely a myth. “They are not meetings where policy is determined or decisions are made,” the late Nicholas Katzenbach, who served Lyndon Johnson as attorney general, recalled in his memoirs. Nevertheless, Katzenbach attended them faithfully, “not because they were particularly interesting or important, but simply because”—remembering L.B.J.’s awful relationship with the previous attorney general, Bobby Kennedy—“I did not want the president to feel I was not on his team.” Even as recently as the 1930s, Cabinet figures such as Labor Secretary Frances Perkins, Interior Secretary Harold Ickes, and Postmaster General James A. Farley were important advisers to Franklin D. Roosevelt (and, in the cases of Perkins and Ickes, priceless diarists and chroniclers) in areas beyond their lanes of departmental responsibility, just as Robert F. Kennedy was his brother’s all-purpose sounding board and McNamara provided J.F.K. with advice on business and economics well outside his purview at the Pentagon. “Cabinet posts are great posts,” says Dan Glickman, who was Bill Clinton’s agriculture secretary. “But you realize that the days of Harry Hopkins and others who were in the Cabinet and were key advisers to the president—that really isn’t true anymore.” “In the case of Clinton,” Glickman went on, “it was a joy to work for him, because, in large part, he gave each of us lots of discretion. He said, ‘If it’s bad news, don’t call me. If it’s good news, call me. If it’s exceptionally good news, call me quicker.’ ”
The way Cabinet officers relate personally to the president is—no surprise—often the crucial factor in their success or failure. Colin Powell had a worldwide profile and a higher approval rating than George W. Bush, and partly for those very reasons had trouble building a close rapport with a president who had lots to be modest about. Obama’s energy secretary, Steven Chu, may have a Nobel Prize in physics, but that counted for little when he once tried to make a too elaborate visual presentation to the president. Obama said to him after the third slide, as one witness recalls, “O.K., I got it. I’m done, Steve. Turn it off.” Attorney General Eric Holder has been particularly long-suffering, although he and his wife, Dr. Sharon Malone, are socially close to the Obamas. Set aside the controversy that surrounded his failure, as deputy attorney general at the end of the Clinton administration, to oppose a pardon for Marc Rich, the fugitive financier whose ex-wife was a Clinton donor. Holder, the first black attorney general, has taken a political beating more recently for musing that the country is a “nation of cowards” when it comes to talking about race, and for following through on what seemed to be the president’s own wishes on such matters as proposing to try the 9/11 mastermind Khalid Sheikh Mohammed in an American courtroom (in the middle of Manhattan, no less). The sharp growth in the White House staff in the years since World War II has also meant that policy functions once reserved for Cabinet officers are now performed by top aides inside the White House itself. Obama meets regularly and privately with Tim Geithner and Hillary Clinton, but almost certainly sees his national-security adviser, Tom Donilon, and his economic adviser, Gene Sperling, even more often.
The relentless media cycle now moves so swiftly that any president, even one less inclined toward centralized discipline than Obama, might naturally rely on the White House’s quick-on-the-draw internal-messaging machine instead of bucking things through the bureaucratic channels of the executive departments. In dealing with a Cabinet, as with life itself, there is no substitute for experience. Clinton-administration veterans told me that their boss made better, fuller use of the Cabinet in his second term than he did in his first, when officials such as Les Aspin at the Pentagon and Warren Christopher at the State Department sometimes struggled to build a cohesive team. Lincoln’s choices of William H. Seward at State, Salmon P. Chase at Treasury, and Edward Bates as attorney general were far from universally applauded. “The construction of a Cabinet,” one editorial admonished at the time, “like the courting of a shrewd girl, belongs to a branch of the fine arts with which the new Executive is not acquainted.” Lincoln’s Cabinet did solve one political problem, but it created others—Lincoln had to fight not one but two civil wars.
by Todd S. Purdum, Vanity Fair | Read more:
Darrow
The Library of Utopia
In his 1938 book World Brain, H.G. Wells imagined a time—not very distant, he believed—when every person on the planet would have easy access to "all that is thought or known."
The 1930s were a decade of rapid advances in microphotography, and Wells assumed that microfilm would be the technology to make the corpus of human knowledge universally available. "The time is close at hand," he wrote, "when any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica."
Wells's optimism was misplaced. The Second World War put idealistic ventures on hold, and after peace was restored, technical constraints made his plan unworkable. Though microfilm would remain an important medium for storing and preserving documents, it proved too unwieldy, too fragile, and too expensive to serve as the basis for a broad system of knowledge transmission. But Wells's idea is still alive. Today, 75 years later, the prospect of creating a public repository of every book ever published—what the Princeton philosopher Peter Singer calls "the library of utopia"—seems well within our grasp. With the Internet, we have an information system that can store and transmit documents efficiently and cheaply, delivering them on demand to anyone with a computer or a smart phone. All that remains to be done is to digitize the more than 100 million books that have appeared since Gutenberg invented movable type, index their contents, add some descriptive metadata, and put them online with tools for viewing and searching.
It sounds straightforward. And if it were just a matter of moving bits and bytes around, a universal online library might already exist. Google, after all, has been working on the challenge for 10 years, and it had the smarts and the money to scan millions of books into its database. But the search giant's book program has foundered; it is mired in a legal swamp. Now another momentous project to build a universal library is taking shape. It springs not from Silicon Valley but from Harvard University. The Digital Public Library of America—the DPLA—has big goals, big names, and big contributors. And yet for all the project's strengths, its success is far from assured. Like Google before it, the DPLA is learning that the major problem with constructing a universal library nowadays has little to do with technology. It's the thorny tangle of legal, commercial, and political issues that surrounds the publishing business. Internet or not, the world may still not be ready for the library of utopia.
by Nicholas Carr, MIT Technology Review | Read more:
Illustration: Stuart Bradford
The New Neuroscience of Choking
Last Sunday, at the Memorial golf tournament in Dublin, Ohio, Rickie Fowler looked like the man to beat. He entered the tournament with momentum: Fowler had recently gained his first ever P.G.A. Tour victory, and he had finished in the top ten in his last four starts. On the first hole of the final round, Fowler sank a fourteen-foot birdie putt, placing him within two shots of the lead.
And that’s when things fell apart. Fowler pulled a shot on the second hole and never recovered. On the next hole, he hit his approach into a greenside bunker and ended up three-putting for a double bogey. He finished with an eighty-four, his worst round on the tour by five shots. Although he began the day in third place, he finished in a tie for fifty-second, sixteen shots behind the winner, Tiger Woods.
In short, Fowler choked. Like LeBron James—who keeps on missing free throws when the game is on the line—he seems to have been undone by the pressure of the situation. And choking isn’t just a hazard for athletes: the condition also afflicts opera singers and actors, hedge-fund traders and chess grandmasters. All of a sudden, just when these experts most need to perform, their expertise is lost. The grace of talent disappears.
As Malcolm Gladwell pointed out in his 2000 article on the psychology of choking, the phenomenon can seem like an amorphous category of failure. Nevertheless, choking is actually triggered by a specific mental mistake: thinking too much. The sequence of events typically goes like this: When people get anxious about performing, they naturally become particularly self-conscious; they begin scrutinizing actions that are best performed on autopilot. The expert golfer, for instance, begins contemplating the details of his swing, making sure that his elbows are tucked and his weight is properly shifted. This kind of deliberation can be lethal for a performer. (...)
Sian Beilock, a professor of psychology at the University of Chicago, has documented the choking process in her lab. She uses putting on the golf green as her experimental paradigm. Not surprisingly, Beilock has shown that novice putters hit better shots when they consciously reflect on their actions. By concentrating on their golf game, they can avoid beginner’s mistakes.
A little experience, however, changes everything. After golfers have learned how to putt—once they have memorized the necessary movements—analyzing the stroke is a dangerous waste of time. And this is why, when experienced golfers are forced to think about their swing mechanics, they shank the ball. “We bring expert golfers into our lab, and we tell them to pay attention to a particular part of their swing, and they just screw up,” Beilock says. “When you are at a high level, your skills become somewhat automated. You don’t need to pay attention to every step in what you’re doing.”
But this only raises questions: What triggers all of these extra thoughts? And why does it only happen to some athletes, performers, and students? Everyone gets nervous; not everyone chokes.
by Jonah Lehrer, The New Yorker | Read more:
Photograph of LeBron James by Jim Rogash/Getty Images.

Thursday, June 7, 2012
Why Google Isn’t Making Us Stupid…or Smart

Some see this as information abundance, others as information overload. The advent of digital information and with it the era of big data allows geneticists to decode the human genome, humanists to search entire bodies of literature, and businesses to spot economic trends. But it is also creating for many the sense that we are being overwhelmed by information. How are we to manage it all? What are we to make, as Ann Blair asks, of a zettabyte of information—a one with 21 zeros after it?1 From a more embodied, human perspective, these tremendous scales of information are rather meaningless. We do not experience information as pure data, be it a byte or a yottabyte, but as filtered and framed through the keyboards, screens, and touchpads of our digital technologies. However impressive these astronomical scales of information may be, our contemporary awe and increasing worry about all this data obscures the ways in which we actually engage it and the world of which it and we are a part. All of the chatter about information superabundance and overload tends not only to marginalize human persons, but also to render technology just as abstract as a yottabyte. An email is reduced to yet another data point, the Web to an infinite complex of protocols and machinery, Google to a neutral machine for producing information. Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies.
The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world.2 In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.
Two Narratives
The history of this mutual constitution of humans and technology has been obscured as of late by the crystallization of two competing narratives about how we experience all of this information. On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. The digital world will become a “single liquid fabric of interconnected words and ideas,” a form of knowledge without distinctions or differences.3 Unlike other technological innovations, like print, which was limited to the educated elite, the internet is a network of “densely interlinked Web pages, blogs, news articles and Tweets [that] are all visible to anyone and everyone.”4 Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us.5 These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological. The dream is of a post-human era.6
On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web’s hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care. The Web, according to some, is a deeply flawed medium that facilitates a less intensive, more superficial form of reading. When we read online, we browse, we scan, we skim. The superabundance of information, such critics charge, is changing not only our reading habits, but also the way we think. As Nicholas Carr puts it, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.”7 The constant distractions of the internet—think of all those hyperlinks and new message warnings that flash up on the screen—are degrading our ability “to pay sustained attention,” to read in depth, to reflect, to remember. For Carr and many others like him, true knowledge is deep, and its depth is proportional to the intensity of our attentiveness. In our digital world that encourages quantity over quality, Google is making us stupid.
Each of these narratives points to real changes in how technology impacts humans. Both the scale and the acceleration of information production and dissemination in our digital age are unique. Google, like every technology before it, may well be part of broader changes in the ways we think and experience the world. Both narratives, however, make two basic mistakes.
by Chad Wellmon, The Hedgehog Review | Read more:
The Curious Case of Internet Privacy
Here's a story you've heard about the Internet: we trade our privacy for services. The idea is that your private information is less valuable to you than it is to the firms that siphon it out of your browser as you navigate the Web. They know what to do with it to turn it into value—for them and for you. This story has taken on mythic proportions, and no wonder, since it has billions of dollars riding on it.
But if it's a bargain, it's a curious, one-sided arrangement. To understand the kind of deal you make with your privacy a hundred times a day, please read and agree with the following:
By reading this agreement, you give Technology Review and its partners the unlimited right to intercept and examine your reading choices from this day forward, to sell the insights gleaned thereby, and to retain that information in perpetuity and supply it without limitation to any third party.
Actually, the text above is not exactly analogous to the terms on which we bargain with every mouse click. To really polish the analogy, I'd have to ask this magazine to hide that text in the margin of one of the back pages. And I'd have to end it with "This agreement is subject to change at any time." What we agree to participate in on the Internet isn't a negotiated trade; it's a smorgasbord, and intimate facts of your life (your location, your interests, your friends) are the buffet.
Why do we seem to value privacy so little? In part, it's because we are told to. Facebook has more than once overridden its users' privacy preferences, replacing them with new default settings. Facebook then responds to the inevitable public outcry by restoring something that's like the old system, except slightly less private. And it adds a few more lines to an inexplicably complex privacy dashboard.
Even if you read the fine print, human beings are awful at pricing out the net present value of a decision whose consequences are far in the future. No one would take up smoking if the tumors sprouted with the first puff. Most privacy disclosures don't put us in immediate physical or emotional distress either. But given a large population making a large number of disclosures, harm is inevitable. We've all heard the stories about people who've been fired because they set the wrong privacy flag on that post where they blew off on-the-job steam.
The risks increase as we disclose more, something that the design of our social media conditions us to do. When you start out your life in a new social network, you are rewarded with social reinforcement as your old friends pop up and congratulate you on arriving at the party. Subsequent disclosures generate further rewards, but not always. Some disclosures seem like bombshells to you ("I'm getting a divorce") but produce only virtual cricket chirps from your social network. And yet seemingly insignificant communications ("Does my butt look big in these jeans?") can produce a torrent of responses. Behavioral scientists have a name for this dynamic: "intermittent reinforcement." It's one of the most powerful behavioral training techniques we know about. Give a lab rat a lever that produces a food pellet on demand and he'll only press it when he's hungry. Give him a lever that produces food pellets at random intervals, and he'll keep pressing it forever.
How does society get better at preserving privacy online? As Lawrence Lessig pointed out in his book Code and Other Laws of Cyberspace, there are four possible mechanisms: norms, law, code, and markets.
by Cory Doctorow, MIT Technology Review | Read more:
Photo: Jonathan Worth | Creative Commons
Ray Bradbury (August 22, 1920 – June 5, 2012)
Martians, robots, dinosaurs, mummies, ghosts, time machines, rocket ships, carnival magicians, alarming doppelgängers who forecast murder and doom — the sort of sensational subjects that fascinate children are the stuff of Ray Bradbury’s fiction. Over a 70-year career, he used his fecund storytelling talents to fashion tales that have captivated legions of young people and inspired a host of imitators. His work informed the imagination of writers and filmmakers like Stephen King, Steven Spielberg and James Cameron, and helped transport science fiction out of the pulp magazine ghetto and into the mainstream.
Thanks to its lurid subject matter and its often easy-to-decipher morals, Mr. Bradbury’s work is often taught in middle school. He’s often one of the first writers who awaken students to the enthralling possibilities of storytelling and the use of fantastical metaphors to describe everyday human life. His finest tales have become classics not only because of their accessibility but also because of their exuberant “Twilight Zone” inventiveness, their social resonance, their prescient vision of a dystopian future, which he dreamed up with astonishing ingenuity and flair. Not surprisingly he had a magpie’s love of all sorts of literature — Poe, Shakespeare and Sherwood Anderson (whose “Winesburg, Ohio” reportedly inspired “The Martian Chronicles”) as well as H. G. Wells and L. Frank Baum — and borrowed devices and conventions from the classics and from various genres. “Something Wicked This Way Comes” would win acclaim as a groundbreaking work of horror and fantasy.
“Fahrenheit 451” (1953) — Mr. Bradbury’s famous novel-turned-movie about a futuristic world in which books are verboten — is at once a parable about McCarthyism and Stalinism, and a kind of fable about the perils of political correctness and the dangers of television and other technology. “The Martian Chronicles” (1950), a melancholy series of overlapping stories about the colonization of Mars, can be read as an allegory about the settling of the United States or seen as a mirror of postwar American life.
“A Sound of Thunder” (1952) — a short story about a time-traveler, who journeys back to the dinosaur era and accidentally steps on a butterfly, thereby altering the course of world history — spawned many imitations, and in some respects anticipated the chaos theory concept of “the butterfly effect,” which suggests that one small change can lead to enormous changes later on. He also uncannily foresaw inventions like flat-screen TVs, Walkman-like devices and virtual reality.
by Michiko Kakutani, NY Times | Read more:
Charley Gallay/Getty Image

Wednesday, June 6, 2012
The Lean Startup
[ed. Didn't this used to be called Vaporware?]
Scott Cook is conducting an experiment. “It’s the corporate-counseling version of speed dating,” says the spectacled cofounder of Intuit, the finance software giant. He’s gathered his troops in a brightly lit conference room, where members of four Intuit departments are seated in front of 300 colleagues—plus 1,500 more watching via webcast—to hash out some business predicaments. Each team will take five minutes to present its problem. Then special guest Eric Ries will come up with a solution.
Arun Muthukumaran, a group manager of Intuit Payment Solutions, kicks off the proceedings. He describes a feature that could dramatically increase the number of small businesses that sign up for the company’s payment services. But implementation would burn up 20 employees’ time for a month. What if customers don’t bite?
Ries, dressed casually in a blazer, pastel shirt, and black denim, suggests a test: Rather than building the service and trying it out on customers, create a sign-up page that merely promises to deliver this groundbreaking capability. Then present it to some prospective clients. Compare their enrollment rate with that of a control group shown the usual sign-up page. The results will give the team the confidence either to proceed or toss the idea into the circular file. No one would actually get the new feature yet, of course, because it hasn’t been built.
“I guess we could piss off a few customers instead of thousands,” Muthukumaran says. Laughter ripples through the crowd.
Ries glances at his watch. “It’s 4:18 pm on Monday,” he says with a puckish grin. “On Wednesday at 4:18 pm, I expect an email telling me how it went.” The team members exchange glances that are equal parts bemusement and worry: They make software, not concepts. They build code through painstaking cycles of design, programming, and testing. Customers depend on their products and trust their brand. And this guy expects them to offer a feature that doesn’t even exist? Nevertheless, the rest of Intuit’s employees are exhilarated. The room breaks into fervent applause.
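The sign-up-page experiment Ries proposes is, at bottom, an A/B test: show the promise-only page to some prospective clients, show the usual page to a control group, and compare enrollment rates. A minimal sketch of how such a comparison might be scored follows; the enrollment numbers are hypothetical, and the two-proportion z-test is a standard statistical choice, not something Ries prescribes:

```python
import math

def conversion_rate(signups, visitors):
    """Fraction of visitors who enrolled."""
    return signups / visitors

def two_proportion_z(signups_a, visitors_a, signups_b, visitors_b):
    """Z-statistic for the difference between two enrollment rates."""
    p_a = signups_a / visitors_a
    p_b = signups_b / visitors_b
    # Pooled rate under the null hypothesis that both pages convert equally
    p = (signups_a + signups_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    return (p_a - p_b) / se

# Hypothetical result: 120 of 1,000 visitors enroll via the new sign-up
# page versus 80 of 1,000 shown the usual page.
z = two_proportion_z(120, 1000, 80, 1000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

A large positive z-score would give the team "the confidence to proceed"; a score near zero would argue for tossing the idea, all without writing the feature itself.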
It’s something Ries is getting used to. At age 33, he is Silicon Valley’s latest guru. In the four years since he first posted his theories about running startups on an anonymous blog, his campaign to replace the typical product development approach—build it and they will come—with a system based on experimentation has become a juggernaut. Ries’ book The Lean Startup, published last summer, has sold 90,000 copies in the US. His blog, Startup Lessons Learned, has 75,000 subscribers, and his annual conference attracts 400 entrepreneurs, each paying more than $500. Harvard Business School has incorporated his ideas into its entrepreneurship curriculum, and an army of followers are propagating his principles through their own books, events, and apps. Whiz kids looking for investors pepper their PowerPoint decks with Lean Startup lingo, which has become so pervasive that TechCrunch announced a ban on Ries’ term pivot. Tech darlings like Dropbox, Groupon, and Zappos serve as Lean Startup poster children, and now the philosophy is reaching established companies, including GE and, this afternoon, Intuit.
Back in the presentation hall, Ries walks his audience through the tenets of his philosophy. The core motivation is simple, and a single slide sums it up: “Stop wasting people’s time.” Entrepreneurs and their managers, minions, advisers, and investors routinely pour their lives into products nobody wants. The business landscape is littered with the wreckage of nascent companies built at monumental effort and expense that imploded on contact with the market. (Paging Webvan! 3DO! Iridium!) Unlike an established company, a startup (or a new division within an established company) doesn’t know who its customers are or what products they need. Its prime directive is to discover a sustainable business model before running out of funding.
The key to this discovery, Ries proposes, is the scientific method: the business equivalent of clinical trials. Assumptions must be tested rigorously, Ries says—and here he rolls out one of those increasingly ubiquitous Lean Startup phrases—on a minimum viable product, or MVP. This is a simplified offering that reveals how real customers, not cloistered focus groups, respond. It may be a functional product or, like the Intuit team’s sign-up page, a come-on designed to elicit a reaction. Once tallied, customer responses produce actionable metrics, as opposed to popular vanity metrics, which create the illusion of success but yield little useful information about what customers want. By repeatedly cycling or iterating through a build-measure-learn loop—a method Ries calls validated learning—the Lean Startup develops a verified perspective that enables it to identify and fine-tune the mechanism that will keep the company growing, aka its engine of growth. Or, failing that, it can pivot to a new strategy. This, Ries insists, is the quickest, most efficient route to product/market fit (a phrase adopted from Silicon Valley kingpin Marc Andreessen), defined as the moment when a product achieves resonance with customers.
Never mind that this approach is a mashup of ideas culled from programming, marketing, manufacturing, and business strategy, leavened with hard-won insights that have circulated among Silicon Valley veterans for years. Ries makes no effort to hide his sources, and his presentation preempts his critics’ complaints. “Lean,” he explains, does not mean cheap; it means eliminating waste by testing ideas first. And it doesn’t mean small, but rather that companies shouldn’t ramp up personnel and facilities until they’ve validated their business model. His philosophy is not just for Internet and app companies—that’s just where it started. Reacting to customer behavior is not incompatible with creating breakthrough products like the iPhone, Ries says, which in the popular imagination sprang fully formed from the mind of Steve Jobs.
Right or wrong, the Lean Startup has a kind of inexorable logic, and Ries’ recommendations come as a bracing slap in the face to would-be tech moguls: Test your ideas before you bet the bank on them. Don’t listen to what focus groups say; watch what your customers do. Start with a modest offering and build on the aspects of it that prove valuable. Expect to get it wrong, and stay flexible (and solvent) enough to try again and again until you get it right.
by Ted Greenwald, Wired | Read more:
Photo: Eric Ogden
Curation and the Questions No One Is Asking
[ed. Curation seems to be a hot topic these days (see previous post: You Are Not a Curator). Here's a more nuanced perspective. I'm not invested in any particular term; I just think of it as aggregating, sharing, or the digital equivalent of a filing cabinet.]
It’s been three months since our last Internet debate about “curation,” so by all means, let’s have another one!
The latest argument began last week after a mysterious tweet seemed to finally produce hard evidence that curators do, in fact, think they are better than everyone else. I’ve never met a “curator” who believes this, and it’s the same straw man argument that is concocted every three months.
So let’s get this out of the way now: Curation only exists because this is an incredible time for creation. It all starts and ends with a writer, a photographer, a filmmaker, or a publisher who creates or funds that work. The rest of us are just looking for something to inspire us, and when we do, we want to share it with others. And in the end, we all want to find ways to support the financing of creators’ work.
Yet every three months we get angry about the word “curation”—Is it “twee”? Who do these people think they are? Why don’t they get real jobs? Why are we so angry at people who are out there doing this for free?—but once again, we fail to ask any of the most pressing questions about curation in the Twitter and Facebook era.
Here are those questions, in order:
1. Is curation actually valuable, and do we have proof that it is, or is not?
A few successful curators, as I would define them, on Twitter include: Paul Kedrosky (@pkedrosky, 213,000+ followers), Anthony De Rosa (@antderosa, 30,000+ followers), Matthew Keys (@producermatthew, 11,000+ followers), Maria Popova (@brainpicker, 180,000+ followers), Heidi Moore (@moorehn, 18,000+ followers), Danyel Smith (@danamo, 20,000+ followers), Kevin Smokler (@weegee, 65,000 followers), and Jodi Ettenberg (a contributing editor for Longreads and Travelreads, whose @legalnomads has 14,000 followers).
You can argue about their respective tastes and whether you’re into what they’re slinging, but based on their follower counts, it’s tough to argue that what they do isn’t valuable to their audiences. When they link to a story, in most cases publishers will see a bump in new visitors. If you’re a publisher, you might just see “Twitter.com” in your Google Analytics referrals, but these are actual people, and their recommendations mean something to their followers.
To break it down further: For many curators, their work is valuable because their followers trust them to make objective, worthwhile recommendations, and they do so consistently. They’re valuable because they offer a consistent, reliable service.
Consistency is the defining trait that seems to separate “professional” curation and linkblogging from the occasional “oh hey look at this.” The web is a customer-service medium, and curation is just one of those services.
It doesn’t matter whether you believe the act of curation requires no more talent than managing the Employee Picks shelf at Barnes & Noble, or working the graveyard shift at your college radio station. People appreciate it if you save them a little time and point them to interesting work that might not show up in a “most popular” algorithm.
by Mark Armstrong, Read more:
Open Culture: 500 Free Movies Online
Where to watch free movies online? Let’s get you started. We have listed here 500+ quality films that you can watch online. The collection is divided into the following categories: Comedy & Drama; Film Noir, Horror & Hitchcock; Westerns & John Wayne; Silent Films; Documentaries; and Animation.
500 Free Movies
For example: Sid and Nancy
Moral Taste Buds
Why working-class people vote conservative
Why on Earth would a working-class person ever vote for a conservative candidate? This question has obsessed the American left since Ronald Reagan first captured the votes of so many union members, farmers, urban Catholics and other relatively powerless people – the so-called "Reagan Democrats". Isn't the Republican party the party of big business? Don't the Democrats stand up for the little guy, and try to redistribute the wealth downwards?
Many commentators on the left have embraced some version of the duping hypothesis: the Republican party dupes people into voting against their economic interests by triggering outrage on cultural issues. "Vote for us and we'll protect the American flag!" say the Republicans. "We'll make English the official language of the United States! And most importantly, we'll prevent gay people from threatening your marriage when they … marry! Along the way we'll cut taxes on the rich, cut benefits for the poor, and allow industries to dump their waste into your drinking water, but never mind that. Only we can protect you from gay, Spanish-speaking flag-burners!"
One of the most robust findings in social psychology is that people find ways to believe whatever they want to believe. And the left really want to believe the duping hypothesis. It absolves them from blame and protects them from the need to look in the mirror or figure out what they stand for in the 21st century.
Here's a more painful but ultimately constructive diagnosis, from the point of view of moral psychology: politics at the national level is more like religion than it is like shopping. It's more about a moral vision that unifies a nation and calls it to greatness than it is about self-interest or specific policies. In most countries, the right tends to see that more clearly than the left. In America the Republicans did the hard work of drafting their moral vision in the 1970s, and Ronald Reagan was their eloquent spokesman. Patriotism, social order, strong families, personal responsibility (not government safety nets) and free enterprise. Those are values, not government programmes.
The Democrats, in contrast, have tried to win voters' hearts by promising to protect or expand programmes for elderly people, young people, students, poor people and the middle class. Vote for us and we'll use government to take care of everyone! But most Americans don't want to live in a nation based primarily on caring. That's what families are for.
One reason the left has such difficulty forging a lasting connection with voters is that the right has a built-in advantage – conservatives have a broader moral palate than the liberals (as we call leftists in the US). Think about it this way: our tongues have taste buds that are responsive to five classes of chemicals, which we perceive as sweet, sour, salty, bitter, and savoury. Sweetness is generally the most appealing of the five tastes, but when it comes to a serious meal, most people want more than that.
In the same way, you can think of the moral mind as being like a tongue that is sensitive to a variety of moral flavours. In my research with colleagues at YourMorals.org, we have identified six moral concerns as the best candidates for being the innate "taste buds" of the moral sense: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation. Across many kinds of surveys, in the UK as well as in the USA, we find that people who self-identify as being on the left score higher on questions about care/harm. For example, how much would someone have to pay you to kick a dog in the head? Nobody wants to do this, but liberals say they would require more money than conservatives to cause harm to an innocent creature.
But on matters relating to group loyalty, respect for authority and sanctity (treating things as sacred and untouchable, not only in the context of religion), it sometimes seems that liberals lack the moral taste buds, or at least, their moral "cuisine" makes less use of them. For example, according to our data, if you want to hire someone to criticise your nation on a radio show in another nation (loyalty), give the finger to his boss (authority), or sign a piece of paper stating one's willingness to sell his soul (sanctity), you can save a lot of money by posting a sign: "Conservatives need not apply."
by Jonathan Haidt, The Guardian | Read more:
Photograph: Michael Reynolds/EPA/Corbis