Sunday, June 10, 2012


Gustav Klimt, Portrait of Adele Bloch-Bauer
via:

Looking for the umpteenth time at Gustav Klimt’s “Portrait of Adele Bloch-Bauer I” (1907) at the estimable Neue Galerie, on the occasion of a show celebrating Klimt’s hundred and fiftieth birthday, I’ve changed my mind. The gold- and silver-encrusted picture, bought by the museum’s co-founder Ronald Lauder for a headline-grabbing hundred and thirty-five million dollars, in 2006, isn’t a peculiarly incoherent painting, as I had once thought. It’s not a painting at all, but a largish, flattish bauble: a thing. It is classic less of its time than of ours, by sole dint of the money sunk in it.

“Adele” belongs to a special class of iffy art works whose price is their object. One example is a dispirited version in pastels of Edvard Munch’s “The Scream,” which fetched a hundred and nineteen million last month. Another example is the sadly discolored van Gogh “Sunflowers,” which set a market record—forty million—in 1987, when sold to a Japanese insurance company. (The purchase amounted to a cherry on top of Japan’s then ballooning, doomed real-estate bubble.) And I remember asking the director of Australia’s National Gallery why, in 1973, he had plunked down an unheard-of two million for Jackson Pollock’s amazing but, to my eye, overworked “Blue Poles.” He mused, “Well, I’ve always liked blue.”

by Peter Schjeldahl, The New Yorker | Changing My Mind About Gustav Klimt's "Adele" |  Read More:

Landscape in Cagnes, 1923 Felix Vallotton (by BoFransson)
via:

Warehouse Area, San Francisco by Minor White
via:

In 1839, a year after the first photo containing a human being was made, photography pioneer Robert Cornelius made the first ever portrait of a human being.

On a sunny day in October, Robert Cornelius set up his camera in the back of his father’s gas lamp-importing business on Chestnut Street in Center City, Philadelphia. After removing the lens cap, he sprinted into the frame, where he sat for more than a minute before covering up the lens. The picture he produced that day was the first photographic self-portrait. It is also widely considered the first successful photographic portrait of a human being.

[…] the words written on the back of the self-portrait, in Cornelius’ own hand, said it all: “The first light Picture ever taken. 1839.”

via:
See also: History of Photography  (Wikipedia)

The Ultralightlife



Lightweight trail shoes that zip completely into themselves to minimize the space they take up in your pack. Ultralightlife.
via: YMFY

A Roe, by Any Other Name



The swampy Atchafalaya Basin is a far cry from the cold waters of the Caspian Sea. And its lowly native bowfin, often derided as a throwaway fish, is no prized sturgeon. Yet it is laying golden eggs.

Bowfin caviar, from the single-employee Louisiana Caviar Company (motto: “Laissez-les manger beaucoup Cajun caviar!”), is earning a place on the menus at such top-notch establishments here as Commander’s Palace and Restaurant Stella. The executive chef of Galatoire’s Restaurant, Michael Sichel, served it up last month at the New Orleans Wine and Food Experience, an annual bacchanal.

And now, even the Russians are coming.

“There’s pretty good demand from lots of clients,” said Igor Taksir, a Russian-born exporter who ships the glistening roe, which is actually black but turns yellow-gold when cooked, to Moscow and Ukraine. Mr. Taksir said he was “skeptical in the beginning,” when he discovered bowfin caviar at a seafood show in Boston three years ago. “But when we started tasting,” he said, “we realized the quality was surprisingly good.”

Still, this is not the caviar of gilded dreams. If beluga sturgeon from the Caspian Sea, the king of them all, is paired best with Champagne, then bowfin from the bayou, some of it infused with hot pepper and served deep-fried, might go better with a beer. It represents a populist twist, and an accommodation by chefs to the environmental and ethical realities that come with serving Russian and Iranian caviar.

Global efforts to all but ban the international trade of caviar from the Caspian Sea, where overfishing and pollution have depleted sturgeon populations, have opened enormous opportunities for affordable substitutes from unlikely places in America. Even landlocked Montana, North Dakota and Oklahoma have thriving markets based on wild river fish.

“I think any chef or any food person with a conscience is only eating domestic or farmed caviar,” said Mitchell Davis, executive vice president of the James Beard Foundation.

The world has come to have a taste for the growing American market of caviar and fish roe. Between 2001 and 2010, annual exports of white sturgeon, shovelnose sturgeon (also called hackleback) and paddlefish roe increased to about 37,712 pounds from roughly 5,214 pounds, with a majority of wild origin, according to the American branch of the Convention on International Trade in Endangered Species of Wild Fauna and Flora and the federal Fish and Wildlife Service.

Seventy percent of the total caviar and roe exported from the United States in 2010 went to countries in the European Union, Ukraine and Japan. (...)

Some caviar enthusiasts will never agree.

“I haven’t sampled bowfin myself, and quite frankly wouldn’t want to,” said Ryan Sutton, the food critic at Bloomberg News, who has lived and studied in Russia. Mr. Sutton was also critical of American paddlefish caviar, which he described as lacking both texture and flavor.

In caviar, a taster wants firmness and pop, “with a clean flavor of the sea,” Mr. Sutton said. (...)

Even the fact that the Food and Drug Administration allows the roe from fish other than the sturgeon to be called caviar — as long as it is qualified by the fish’s name, as in “bowfin caviar” — rubs some people the wrong way.

“The F.D.A. looks at the word ‘caviar’ as synonymous with roe, but that is not true,” said Douglas Peterson, associate professor of fisheries and aquaculture research at the Warnell School of Forestry and Natural Resources at the University of Georgia. “Caviar only comes from sturgeon,” he said. “Everything else is fish eggs.”

Susan Saulny, NY Times |  Read more:
Photo: William Widmer for The New York Times

Saturday, June 9, 2012

Herbie Hancock & Leonard Cohen


[ed. Sorry about the ad at the beginning. You can skip it after a few seconds...]

A Universe of Self-Replicating Code


...What's, in a way, missing in today's world is more biology of the Internet. More people like Nils Barricelli to go out and look at what's going on, not from a business or what's legal point of view, but just to observe what's going on.

Many of these things we read about in the front page of the newspaper every day, about what's proper or improper, or ethical or unethical, really concern this issue of autonomous self-replicating codes. What happens if you subscribe to a service and then as part of that service, unbeknownst to you, a piece of self-replicating code inhabits your machine, and it goes out and does something else? Who is responsible for that? And we're in an increasingly gray zone as to where that's going.

The most virulent codes, of course, are parasitic, just as viruses are. They're codes that go out and do things, particularly codes that go out and gather money. Which is essentially what these things like cookies do. They are small strings of code that go out and gather valuable bits of information, and they come back and sell it to somebody. It's a very interesting situation. You would have thought this was inconceivable 20 or 30 years ago. Yet, you probably wouldn't have to go … well, we're in New York, not San Francisco, but in San Francisco, you wouldn't have to go five blocks to find five or 10 companies whose income is based on exactly that premise. And doing very well at it.

Walking over here today, just three blocks from my hotel, the street right out front is blocked off. There are 20 police cars out there and seven satellite news vans, because Apple is releasing a new code. They're couching it as releasing a new piece of hardware, but it's really a new gateway into the closed world of Apple's code. And that's enough to block human traffic.

Why is Apple one of the world's most valuable companies? It's not only because their machines are so beautifully designed, which is great and wonderful, but because those machines represent a closed numerical system. And they're making great strides in expanding that system. It's no longer at all odd to have a Mac laptop. It's almost the normal thing.

But I'd like to take this to a different level, if I can change the subject... Ten or 20 years ago I was preaching that we should look at digital code as biologists: the Darwin Among the Machines stuff. People thought that was crazy, and now it's firmly the accepted metaphor for what's going on. And Kevin Kelly quoted me in Wired, he asked me for my last word on what companies should do about this. And I said, "Well, they should hire more biologists."

But what we're missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products. The universe of Apple, the universe of Google, the universe of Facebook, that these collectively constitute the digital universe, and we can only see it in human terms and what does this do for us?

We're missing a tremendous opportunity. We're asleep at the switch because it's not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that's not just a metaphor for something else. It actually is. It's a physical reality.

by George Dyson, Edge |  Read more:

What is Cool?

[ed. Is there really a Journal of Individual Differences?]

Do rebelliousness, emotional control, toughness and thrill-seeking still make up the essence of coolness?

Can performers James Dean and Miles Davis still be considered the models of cool?

Research led by a University of Rochester Medical Center psychologist and published by the Journal of Individual Differences has found the characteristics associated with coolness today are markedly different than those that generated the concept of cool.

“When I set out to find what people mean by coolness, I wanted to find corroboration of what I thought coolness was,” said Ilan Dar-Nimrod, Ph.D., lead author of “Coolness: An Empirical Investigation.” “I was not prepared to find that coolness has lost so much of its historical origins and meaning—the very heavy countercultural, somewhat individualistic pose I associated with cool.

“James Dean is no longer the epitome of cool,” Dar-Nimrod said. “The much darker version of what coolness is still there, but it is not the main focus. The main thing is: Do I like this person? Is this person nice to people, attractive, confident and successful? That’s cool today, at least among young mainstream individuals.”  (...)

“We have a kind of a schizophrenic coolness concept in our mind,” Dar-Nimrod said. “Almost any one of us will be cool in some people’s eyes, which suggests the idiosyncratic way coolness is evaluated. But some will be judged as cool in many people’s eyes, which suggests there is a core valuation to coolness, and today that does not seem to be the historical nature of cool. We suggest there is some transition from the countercultural cool to a generic version of it’s good and I like it. But this transition is by no way completed.”

by Michael Wentzel, University of Rochester |  Read more:

Charles Sheeler, The Upstairs (1938)
via:

Bell X1


Friday, June 8, 2012

Late Fragment


[ed. Today was the last day in my little house. So much life lived there, so many good memories.]

Late Fragment

And did you get what
you wanted from this life, even so?
I did.
And what did you want?
To call myself beloved, to feel myself
beloved on the earth.

Raymond Carver

The Lonely Polygamist

Meet Bill. He has four wives and thirty-one kids. And something's missing.

Polygamy is not something you try on a whim. You don't come home from work one day, pop open a beer, settle down for your nightly dose of Seinfeld reruns, and think, "Boy, my marriage is a bore. Maybe I should give polygamy a whirl." It's true that polygamy, as a concept, sounds downright inviting. Yes, there are lots of women involved, women of all shapes and sizes and personalities, a wonderful variety of women, and yes, they'll fulfill your every need, cook your dinner, do your laundry, sew the buttons on your shirts. And yes, you're allowed to sleep with these women, each of them, one for every night of the week if you want, and what's more, when you wake up in the morning, you won't have to deal with even the tiniest twinge of guilt, because these women, all of them, are your sweethearts, your soul mates, your wives.

Then what, you're asking yourself, could possibly be the problem?

The problem is this: Polygamy is not what you think it is. It has nothing to do with the little fantasy just spelled out for you. A life of polygamy is not a joyride, a guiltless sexual free-for-all. Being a polygamist is not for the easygoing or the weak of heart. It's like marine boot camp or working for the mob; if you're not cut out for it, if you don't have that essential thing inside, it will eat you alive. And polygamy doesn't just require simple cojones, either. It requires the devotion of a monk, the diplomatic prowess of Winston Churchill, the doggedness of a field general, the patience of a pine tree.

Put simply: You'd have to be crazy to want to be a polygamist.

That's what's so strange about Bill. Bill has four wives and thirty-one children. Bill is an ex-Mormon, and he doesn't seem crazy at all. If anything, he seems exceptionally sane, painfully regular, as normal as soup. He's certainly not the wild-eyed, woolly-bearded zealot you might expect. Approaching middle age, Bill has the unassuming air of an accountant. He wears white shirts, blue ties, and black wing tips. He is Joe Blow incarnate. The only thing exceptional about Bill is his height: He is six foot eight and prone to hitting his head on hanging lamps and potted plants.

Bill's wives are not who you'd expect, either. They're not ruddy-faced women with high collars buttoned up to their chins. These are the women you see every day of your life. They wear jeans and T-shirts; they drive minivans; they have jobs. Julia is a legal secretary; Emily manages part of Bill's business; Susan owns a couple of health-food stores; and Stacy stays at home with the younger children. They are also tall, all of them around six feet; if you didn't know better, you'd think Bill and his wives had a secret plan to create a race of giants.

Each of Bill's wives lives in a different house in the suburbs around Salt Lake City. They've lived in different configurations over the years--all in one place, two in one and two in another--but this is the way that seems best nowadays, since there are teenagers in the mix, and one thing everybody seems to agree on is how much teenagers need their space. Bill himself is homeless. He wanders from house to house like a nomad or a beggar, sometimes surprising a certain wife with the suddenness of his presence. In the past, he has used a rigid rotation schedule but now opts for a looser approach. He believes that intuition and nothing else should guide where he stays for the night.

Okay, now: Put yourself in Bill's size-14 wing tips for a minute. You've just finished an exhausting day at work. It's that time of the evening when you think to yourself, "Hmmm. Which house am I going to tonight?" You get in your car and head off toward Emily's house; you haven't seen Emily for several days, and besides, she's having trouble with one of your teenage daughters--she's not sticking to her curfew. But you remember that your son Walt has a soccer game on the other side of town at 5:30. You start to turn around, but then you think of Susan, wife number two, who has come down with the flu and is in need of some comfort and company. Then it hits you that not only did you promise to look at the bad alternator in Stacy's Volvo tonight, not only did you tell Emily that you'd be home in time to meet with the insurance man to go over all your policies, but that Annie, your six-year-old daughter, is having a birthday tomorrow and you've yet to get her a present.

Sitting there at the intersection--cars honking, people flipping you the bird--do you feel paralyzed? Do you feel like merging with the rest of the traffic onto I-15 and heading for Las Vegas, leaving it all behind?

This is Bill's life.

by Brady Udall, Standard-Examiner (1998) |  Read more:

Team of Mascots

Just four years ago, when it was clear that he would be the Democratic presidential nominee, Barack Obama famously declared that, if elected, he would want “a team of rivals” in his Cabinet, telling Joe Klein, of Time magazine, “I don’t want to have people who just agree with me. I want people who are continually pushing me out of my comfort zone.” His inspiration was Doris Kearns Goodwin’s best-selling book about Abraham Lincoln, who appointed three men who had been his chief competitors for the presidency in 1860—and who held him, at that point, in varying degrees of contempt—to help him keep the Union together during the Civil War.

To say that things haven’t worked out that way for Obama is the mildest understatement. “No! God, no!” one former senior Obama adviser told me when I asked if the president had lived up to this goal.

There’s nothing sacred about the team-of-rivals idea—for one thing, it depends on who the rivals were. Obama does have one former rival, Hillary Clinton, in his Cabinet, and another, Joe Biden, is vice president. Mitt Romney would have fewer options. Can anyone really imagine Romney making Rick Santorum his secretary of health and human services, or Herman Cain his commerce secretary, or Newt Gingrich the administrator of NASA? Well, maybe the last, if only so Romney could have the satisfaction of sending the former Speaker—bang! zoom!—to the moon! For the record, Gingrich has said he’d be unlikely to accept any position in a Romney administration, and Romney himself has given almost no real hints about whom he might appoint. In light of his propensity to bow to prevailing political pressures, his Cabinet might well be, as he described himself, “severely conservative.”

But the way presidents use their Cabinets says a lot about their style of governing. Richard Nixon created a deliberately weak Cabinet (he ignored his secretary of state William Rogers to the point of humiliation, in favor of his national-security adviser, Henry Kissinger), and he rewarded their loyalty by demanding all their resignations on the morning after his landslide re-election, in 1972. John F. Kennedy, having won a whisker-close election against Nixon, in 1960, wanted Republicans such as Douglas Dillon at Treasury and Robert McNamara at Defense to lend an air of bipartisan authority and competence. George W. Bush had a very powerful Cabinet, especially in the persons of Donald Rumsfeld, Robert Gates, and Condoleezza Rice, if only to compensate for his pronounced lack of experience in foreign policy and military affairs. (...)

The days when presidential Cabinets contained the likes of Thomas Jefferson as secretary of state, or Alexander Hamilton as secretary of the Treasury, are long since gone (and those early Cabinets displayed a fractiousness that no modern president would be likely to tolerate), though Cabinet officers retain symbols of office—from flags to drivers to, in some cases, chefs—befitting grander figures.

The lingering public image of Cabinet meetings as the scene of important action is largely a myth. “They are not meetings where policy is determined or decisions are made,” the late Nicholas Katzenbach, who served Lyndon Johnson as attorney general, recalled in his memoirs. Nevertheless, Katzenbach attended them faithfully, “not because they were particularly interesting or important, but simply because”—remembering L.B.J.’s awful relationship with the previous attorney general, Bobby Kennedy—“I did not want the president to feel I was not on his team.”

Even as recently as the 1930s, Cabinet figures such as Labor Secretary Frances Perkins, Interior Secretary Harold Ickes, and Postmaster General James A. Farley were important advisers to Franklin D. Roosevelt (and, in the cases of Perkins and Ickes, priceless diarists and chroniclers) in areas beyond their lanes of departmental responsibility, just as Robert F. Kennedy was his brother’s all-purpose sounding board and McNamara provided J.F.K. with advice on business and economics well outside his purview at the Pentagon. “Cabinet posts are great posts,” says Dan Glickman, who was Bill Clinton’s agriculture secretary. “But you realize that the days of Harry Hopkins and others who were in the Cabinet and were key advisers to the president—that really isn’t true anymore.”

“In the case of Clinton,” Glickman went on, “it was a joy to work for him, because, in large part, he gave each of us lots of discretion. He said, ‘If it’s bad news, don’t call me. If it’s good news, call me. If it’s exceptionally good news, call me quicker.’ ”

The way Cabinet officers relate personally to the president is—no surprise—often the crucial factor in their success or failure. Colin Powell had a worldwide profile and a higher approval rating than George W. Bush, and partly for those very reasons had trouble building a close rapport with a president who had lots to be modest about. Obama’s energy secretary, Steven Chu, may have a Nobel Prize in physics, but that counted for little when he once tried to make a too elaborate visual presentation to the president. Obama said to him after the third slide, as one witness recalls, “O.K., I got it. I’m done, Steve. Turn it off.”

Attorney General Eric Holder has been particularly long-suffering, although he and his wife, Dr. Sharon Malone, are socially close to the Obamas. Set aside the controversy that surrounded his failure, as deputy attorney general at the end of the Clinton administration, to oppose a pardon for Marc Rich, the fugitive financier whose ex-wife was a Clinton donor. Holder, the first black attorney general, has taken a political beating more recently for musing that the country is a “nation of cowards” when it comes to talking about race, and for following through on what seemed to be the president’s own wishes on such matters as proposing to try the 9/11 mastermind Khalid Sheikh Mohammed in an American courtroom (in the middle of Manhattan, no less).

The sharp growth in the White House staff in the years since World War II has also meant that policy functions once reserved for Cabinet officers are now performed by top aides inside the White House itself. Obama meets regularly and privately with Tim Geithner and Hillary Clinton, but almost certainly sees his national-security adviser, Tom Donilon, and his economic adviser, Gene Sperling, even more often. The relentless media cycle now moves so swiftly that any president, even one less inclined toward centralized discipline than Obama, might naturally rely on the White House’s quick-on-the-draw internal-messaging machine instead of bucking things through the bureaucratic channels of the executive departments.

In dealing with a Cabinet, as with life itself, there is no substitute for experience. Clinton-administration veterans told me that their boss made better, fuller use of the Cabinet in his second term than he did in his first, when officials such as Les Aspin at the Pentagon and Warren Christopher at the State Department sometimes struggled to build a cohesive team. Lincoln’s choices of William H. Seward at State, Salmon P. Chase at Treasury, and Edward Bates as attorney general were far from universally applauded. “The construction of a Cabinet,” one editorial admonished at the time, “like the courting of a shrewd girl, belongs to a branch of the fine arts with which the new Executive is not acquainted.” Lincoln’s Cabinet did solve one political problem but it created others—Lincoln had to fight not one but two civil wars.

by Todd S. Purdum, Vanity Fair |  Read more:
Darrow

The Library of Utopia


In his 1938 book World Brain, H.G. Wells imagined a time—not very distant, he believed—when every person on the planet would have easy access to "all that is thought or known."

The 1930s were a decade of rapid advances in microphotography, and Wells assumed that microfilm would be the technology to make the corpus of human knowledge universally available. "The time is close at hand," he wrote, "when any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica."

Wells's optimism was misplaced. The Second World War put idealistic ventures on hold, and after peace was restored, technical constraints made his plan unworkable. Though microfilm would remain an important medium for storing and preserving documents, it proved too unwieldy, too fragile, and too expensive to serve as the basis for a broad system of knowledge transmission. But Wells's idea is still alive. Today, 75 years later, the prospect of creating a public repository of every book ever published—what the Princeton philosopher Peter Singer calls "the library of utopia"—seems well within our grasp. With the Internet, we have an information system that can store and transmit documents efficiently and cheaply, delivering them on demand to anyone with a computer or a smart phone. All that remains to be done is to digitize the more than 100 million books that have appeared since Gutenberg invented movable type, index their contents, add some descriptive metadata, and put them online with tools for viewing and searching.
It sounds straightforward. And if it were just a matter of moving bits and bytes around, a universal online library might already exist. Google, after all, has been working on the challenge for 10 years. But the search giant's book program has foundered; it is mired in a legal swamp. Now another momentous project to build a universal library is taking shape. It springs not from Silicon Valley but from Harvard University. The Digital Public Library of America—the DPLA—has big goals, big names, and big contributors. And yet for all the project's strengths, its success is far from assured. Like Google before it, the DPLA is learning that the major problem with constructing a universal library nowadays has little to do with technology. It's the thorny tangle of legal, commercial, and political issues that surrounds the publishing business. Internet or not, the world may still not be ready for the library of utopia.

by Nicholas Carr, MIT Technology Review |  Read more:
Illustration: Stuart Bradford

The New Neuroscience of Choking

Last Sunday, at the Memorial golf tournament in Dublin, Ohio, Rickie Fowler looked like the man to beat. He entered the tournament with momentum: Fowler had recently gained his first-ever P.G.A. Tour victory, and he had finished in the top ten in his last four starts. On the first hole of the final round, Fowler sank a fourteen-foot birdie putt, placing him within two shots of the lead.

And that’s when things fell apart. Fowler pulled a shot on the second hole and never recovered. On the next hole, he hit his approach into a greenside bunker and ended up three-putting for a double bogey. He finished with an eighty-four, his worst round on the tour by five shots. Although he began the day in third place, he finished in a tie for fifty-second, sixteen shots behind the winner, Tiger Woods.

In short, Fowler choked. Like LeBron James—who keeps on missing free throws when the game is on the line—he seems to have been undone by the pressure of the situation. And choking isn’t just a hazard for athletes: the condition also afflicts opera singers and actors, hedge-fund traders and chess grandmasters. All of a sudden, just when these experts most need to perform, their expertise is lost. The grace of talent disappears.

As Malcolm Gladwell pointed out in his 2000 article on the psychology of choking, the phenomenon can seem like an amorphous category of failure. Nevertheless, choking is actually triggered by a specific mental mistake: thinking too much. The sequence of events typically goes like this: When people get anxious about performing, they naturally become particularly self-conscious; they begin scrutinizing actions that are best performed on autopilot. The expert golfer, for instance, begins contemplating the details of his swing, making sure that the elbows are tucked and his weight is properly shifted. This kind of deliberation can be lethal for a performer. (...)

Sian Beilock, a professor of psychology at the University of Chicago, has documented the choking process in her lab. She uses putting on the golf green as her experimental paradigm. Not surprisingly, Beilock has shown that novice putters hit better shots when they consciously reflect on their actions. By concentrating on their golf game, they can avoid beginner’s mistakes.

A little experience, however, changes everything. After golfers have learned how to putt—once they have memorized the necessary movements—analyzing the stroke is a dangerous waste of time. And this is why, when experienced golfers are forced to think about their swing mechanics, they shank the ball. “We bring expert golfers into our lab, and we tell them to pay attention to a particular part of their swing, and they just screw up,” Beilock says. “When you are at a high level, your skills become somewhat automated. You don’t need to pay attention to every step in what you’re doing.”

But this only raises questions: What triggers all of these extra thoughts? And why does it only happen to some athletes, performers, and students? Everyone gets nervous; not everyone chokes.

by Jonah Lehrer, The New Yorker |  Read more:
Photograph of LeBron James by Jim Rogash/Getty Images.

Thursday, June 7, 2012

Why Google Isn’t Making Us Stupid…or Smart

Last year The Economist published a special report not on the global financial crisis or the polarization of the American electorate, but on the era of big data. Article after article cited one big number after another to bolster the claim that we live in an age of information superabundance. The data are impressive: 300 billion emails, 200 million tweets, and 2.5 billion text messages course through our digital networks every day, and, if these numbers were not staggering enough, scientists are reportedly awash in even more information. This past January astronomers surveying the sky with the Sloan telescope in New Mexico released over 49.5 terabytes of information—a mass of images and measurements—in one data drop. The Large Hadron Collider at CERN (the European Organization for Nuclear Research), however, produces almost that much information per second. Last year alone, the world’s information base is estimated to have doubled every eleven hours. Just a decade ago, computer professionals spoke of kilobytes and megabytes. Today they talk of the terabyte, the petabyte, the exabyte, the zettabyte, and now the yottabyte, each a thousand times bigger than the last.

Some see this as information abundance, others as information overload. The advent of digital information and with it the era of big data allows geneticists to decode the human genome, humanists to search entire bodies of literature, and businesses to spot economic trends. But it is also creating for many the sense that we are being overwhelmed by information. How are we to manage it all? What are we to make, as Ann Blair asks, of a zettabyte of information—a one with 21 zeros after it?1 From a more embodied, human perspective, these tremendous scales of information are rather meaningless. We do not experience information as pure data, be it a byte or a yottabyte, but as filtered and framed through the keyboards, screens, and touchpads of our digital technologies. However impressive these astronomical scales of information may be, our contemporary awe and increasing worry about all this data obscures the ways in which we actually engage it and the world of which it and we are a part. All of the chatter about information superabundance and overload tends not only to marginalize human persons, but also to render technology just as abstract as a yottabyte. An email is reduced to yet another data point, the Web to an infinite complex of protocols and machinery, Google to a neutral machine for producing information. Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies.

The more pressing, if more complex, task of our digital age, then, lies not in figuring out what comes after the yottabyte, but in cultivating contact with an increasingly technologically formed world.2 In order to understand how our lives are already deeply formed by technology, we need to consider information not only in the abstract terms of terabytes and zettabytes, but also in more cultural terms. How do the technologies that humans form to engage the world come in turn to form us? What do these technologies that are of our own making and irreducible elements of our own being do to us? The analytical task lies in identifying and embracing forms of human agency particular to our digital age, without reducing technology to a mere mechanical extension of the human, to a mere tool. In short, asking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable.

Two Narratives

The history of this mutual constitution of humans and technology has been obscured as of late by the crystallization of two competing narratives about how we experience all of this information. On the one hand, there are those who claim that the digitization efforts of Google, the social-networking power of Facebook, and the era of big data in general are finally realizing that ancient dream of unifying all knowledge. The digital world will become a “single liquid fabric of interconnected words and ideas,” a form of knowledge without distinctions or differences.3 Unlike other technological innovations, like print, which was limited to the educated elite, the internet is a network of “densely interlinked Web pages, blogs, news articles and Tweets [that] are all visible to anyone and everyone.”4 Our information age is unique not only in its scale, but in its inherently open and democratic arrangement of information. Information has finally been set free. Digital technologies, claim the most optimistic among us, will deliver a universal knowledge that will make us smarter and ultimately liberate us.5 These utopic claims are related to similar visions about a trans-humanist future in which technology will overcome what were once the historical limits of humanity: physical, intellectual, and psychological. The dream is of a post-human era.6

On the other hand, less sanguine observers interpret the advent of digitization and big data as portending an age of information overload. We are suffering under a deluge of data. Many worry that the Web’s hyperlinks that propel us from page to page, the blogs that reduce long articles to a more consumable line or two, and the tweets that condense thoughts to 140 characters have all created a culture of distraction. The very technologies that help us manage all of this information are undermining our ability to read with any depth or care. The Web, according to some, is a deeply flawed medium that facilitates a less intensive, more superficial form of reading. When we read online, we browse, we scan, we skim. The superabundance of information, such critics charge, however, is changing not only our reading habits, but also the way we think. As Nicholas Carr puts it, “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles.”7 The constant distractions of the internet—think of all those hyperlinks and new message warnings that flash up on the screen—are degrading our ability “to pay sustained attention,” to read in depth, to reflect, to remember. For Carr and many others like him, true knowledge is deep, and its depth is proportional to the intensity of our attentiveness. In our digital world that encourages quantity over quality, Google is making us stupid.

Each of these narratives points to real changes in how technology impacts humans. Both the scale and the acceleration of information production and dissemination in our digital age are unique. Google, like every technology before it, may well be part of broader changes in the ways we think and experience the world. Both narratives, however, make two basic mistakes.

by Chad Wellmon, The Hedgehog Review |  Read more: