Friday, May 24, 2013

Little Brother is Watching You

It’s clear that the “expectation of privacy” would vary a great deal based on circumstances, but the matter of “changing and varied social norms” bears further scrutiny. Is the proliferation of recording devices altering our concept of privacy itself? I asked Abbi, who is a P.P.E. major (Philosophy, Politics, and Economics), whether he thought the “expectation of privacy” had changed in his lifetime. His response was striking:
People my age know that there are probably twice as many photos on the Internet of us, that we’ve never seen, or even know were taken, as there are that we’ve seen. It’s a reality we live with; it’s something people are worried about, and try to have some control over, say by controlling the privacy on their social media accounts. 
But at the same time, people my age tend to know that nowhere is really safe, I guess. You’re at risk of being recorded all the time, and at least for me, and I think for a lot of people who are more reasonable, that’s only motivation to be the best person you can be; to exhibit as good character as you can, because if all eyes are on you, you don’t really have the option to be publicly immoral, or to do wrong without being accountable.
Kennerly had a different response to the same question:
In many ways, the ubiquity of recording devices (we all have one in our pockets) doesn’t really change the analysis: you’ve never had the guarantee, by law or by custom, that a roomful of strangers will keep your secrets, even if they say they will. Did Abbi violate some part of the social compact by deceiving Luntz? In my opinion, yes. But falsity has a place in our society, and, as the Supreme Court confirmed last summer in United States v. Alvarez, certain false statements (outside of defamation, fraud, and perjury) can indeed receive First Amendment protection. As Judge Kozinski said in that case (when it was in front of the 9th Circuit), “white lies, exaggerations and deceptions [ ] are an integral part of human intercourse.”
Let me quote Kozinski at length:
Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”)….
An important aspect of personal autonomy is the right to shape one’s public and private persona by choosing when to tell the truth about oneself, when to conceal, and when to deceive. Of course, lies are often disbelieved or discovered, and that, too, is part of the push and pull of social intercourse. But it’s critical to leave such interactions in private hands, so that we can make choices about who we are. How can you develop a reputation as a straight shooter if lying is not an option?

by Maria Bustillos, New Yorker |  Read more:
Illustration by Tom Bachtell

From Here You Can See Everything

In Infinite Jest, David Foster Wallace imagines a film (also called Infinite Jest) so entertaining that anyone who starts watching it will die watching it, smiling vacantly at the screen in a pool of their own soiling. It’s the ultimate gripper of eyeballs. Media, in this absurdist rendering, evolves past parasite to parasitoid, the kind of overly aggressive parasite that kills its host.

Wallace himself had a strained relationship with television. He said in his 1993 essay “E Unibus Pluram” that television “can become malignantly addictive,” which, he explained, means, “(1) it causes real problems for the addict, and (2) it offers itself as relief from the very problems it causes.” Though I don’t think he would have labeled himself a television addict, Wallace was known to indulge in multi-day television binges. One can imagine those binges raised to the power of Netflix Post-Play and all seven seasons of The West Wing.

That sort of binge-television viewing has become a normal, accepted part of American culture. Saturdays with a DVD box set, a couple bottles of wine, and a big carton of goldfish crackers are a pretty common new feature of American weekends. Netflix bet big on this trend with their release of House of Cards. They released all 13 episodes of the first season at once: roughly one full Saturday’s worth. It’s a show designed for the binge. The New York Times quoted the show’s producer as saying, with a laugh, “Our goal is to shut down a portion of America for a whole day.” They don’t say what kind of laugh it was.

The scariest part of this new binge culture is that hours spent bingeing don’t seem to displace other media consumption hours; we’re just adding them to our weekly totals. Lump in hours on Facebook, Pinterest, YouTube, and maybe even the occasional non-torrented big-screen feature film and you’re looking at a huge number of hours per person. (...)

In Wallace’s book, a Canadian terrorist informant of foggy allegiance asks an American undercover agent a form of the question: “If Americans would choose to press play on the film Infinite Jest, knowing it will kill them, doesn’t that mean they are already dead inside, that they have chosen entertainment over life?” Of course vanishingly few Americans would press play on a film that was sure to end their lives. But there’s a truth in this absurdity. Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them.

by James A. Pearson, The Morning News |  Read more:
Image: Alistair Frost, Metaphors don't count, 2011. Courtesy the artist and Zach Feuer Gallery, New York.

Why You Like What You Like

Food presents the most interesting gateway to thinking about liking. Unlike with music or art, our relationship with what we eat is very direct: survival. Also, every time you sit down to a meal you have myriad “affective responses,” as psychologists call them.

One day, I join Debra Zellner, a professor of psychology at Montclair State University who studies food liking, for lunch at the Manhattan restaurant Del Posto. “What determines what you’re selecting?” Zellner asks, as I waver between the Heritage Pork Trio with Ribollita alla Casella & Black Cabbage Stew and the Wild Striped Bass with Soft Sunchokes, Wilted Romaine & Warm Occelli Butter.

“What I’m choosing, is that liking? It’s not liking the taste,” Zellner says, “because I don’t have it in my mouth.”

My choice is the memory of all my previous choices—“every eating experience is a learning experience,” as the psychologist Elizabeth Capaldi has written. But there is novelty here too, an anticipatory leap forward, driven in part by the language on the menu. Words such as “warm” and “soft” and “heritage” are not free riders: They are doing work. In his book The Omnivorous Mind, John S. Allen, a neuroanthropologist, notes that simply hearing an onomatopoetic word like “crispy” (which the chef Mario Batali calls “innately appealing”) is “likely to evoke the sense of eating that type of food.” When Zellner and I mull over the choices, calling out what “sounds good,” there is undoubtedly something similar going on.

As I take a sip of wine—a 2004 Antico Broilo, a Friulian red—another element comes into play: How you classify something influences how much you like it. Is it a good wine? Is it a good red wine? Is it a good wine from the refosco grape? Is it a good red wine from Friuli?

Categorization, says Zellner, works in several ways. Once you have had a really good wine, she says, “you can’t go back. You wind up comparing all these lesser things to it.” And yet, when she interviewed people about their drinking of, and liking for, “gourmet coffee” and “specialty beer” compared with “regular” versions such as Folgers and Budweiser, the “ones who categorized actually like the everyday beer much more than the people who put all beer in the same category,” she says. Their “hedonic contrast” was reduced. In other words, the more they could discriminate what was good about the very good, the more they could enjoy the less good. We do this instinctively—you have undoubtedly said something like “it’s not bad, for airport food.”

There is a kind of tragic irony when it comes to enjoying food: As we eat something, we begin to like it less. From a dizzy peak of anticipatory wanting, we slide into a slow despond of dimming affection, slouching into revulsion (“get this away from me,” you may have said, pushing away a once-loved plate of Atomic Wings).

In the phenomenon known as “sensory specific satiety,” the body in essence sends signals when it has had enough of a certain food. In one study, subjects who’d rated the appeal of several foods were asked about them again after eating one for lunch; this time they rated the food’s pleasantness lower. They were not simply “full,” but their bodies were striving for balance, for novelty. If you have ever had carb-heavy, syrup-drenched pancakes for breakfast, you are not likely to want them again at lunch. It’s why we break meals up into courses: Once you’ve had the mixed greens, you are not going to like or want more mixed greens. But dessert is a different story.

Sated as we are at the end of a meal, we are suddenly faced with a whole new range of sensations. The capacity is so strong it has been dubbed the “dessert effect.” Suddenly there’s a novel, nutritive gustatory sensation—and how could our calorie-seeking brains resist that? As the neuroscientist Gary Wenk notes, “your neurons can only tolerate a total deprivation of sugar for a few minutes before they begin to die.” (Quick, apply chocolate!) As we finish dessert, we may be beginning to get the “post-ingestive” nutritional benefits of our main course. Sure, that chocolate tastes good, but the vegetables may be what’s making you feel so satisfied. In the end, memory blurs it all. A study co-authored by the psychologist Paul Rozin suggests that the pleasure we remember from a meal has little to do with how much we consumed, or how long we spent doing it (under a phenomenon called “duration neglect”). “A few bites of a favorite dish in a meal,” the researchers write, “may do the full job for memory.”

by Tom Vanderbilt, Smithsonian |  Read more:
Image: Bartholomew Cooke

Ron Hincks, Impulsive (1966)
via:

Mses. Streep and Clinton.
via:

Thursday, May 23, 2013

We Need an International Minimum Wage


The deadly collapse of a garment factory in Bangladesh has sparked calls for better worker treatment. The revelation that Apple manages to avoid almost all taxes has drawn vague calls for tax reform. A more direct path to fairness: let's just have a reasonable international minimum wage.

We live in a global economy, as pundits are so fond of proclaiming. The global economy is the delightful playground of multinational corporations. They're able to drastically lower their labor costs by outsourcing work to the world's poorest and most desperate people. And they're able to escape paying taxes, like normal businesses do, by deploying armies of lawyers to play various countries' tax codes off against one another. The result is that money that should, in fairness, go to workers and governments ends up in the pockets of the corporation. The global economy is extremely advantageous to corporations, who owe no loyalty to anyone or anything except their stock price; it is disadvantageous to normal human beings, who exist in the world and not as a notional accounting trick.

In America, we accept the minimum wage as a given. It enjoys broad support. It is the realization of an ideal: that there is a point at which low pay becomes a moral outrage. (Where that point is, of course, is up for continuous debate.) Do not mistake the minimum wage for some sort of consensus of nonpartisan economists; it is a moral statement by our society. A statement of our belief that the economically powerful should not have a free hand to exploit the powerless.

Yet we are all hypocrites. We protect ourselves with a minimum wage, while at the same time enjoying the low consumer prices that come with ultra-low wages being paid to workers abroad. Our own purchasing habits reward companies for paying wages that are sure to keep their workers in poverty for life. We soothe ourselves by saying that these desperately poor workers are still better off than they would be without a job; yet we would reject that argument if an employer here tried to use it to pay us less than the minimum wage. We simply do not care if people halfway around the world who we do not see are exploited, if it saves us money.

Many business interests say that raising the wretched wages in one country will simply send the factories to another, even poorer country. That's a great reason to institute an international standard that would render that strategy moot. Bangladesh, where more than a thousand garment workers died in the Rana Plaza collapse thanks to the cutthroat quest to drive down prices, represents the bottom of the international manufacturing economy. The minimum wage of garment workers there is less than $50 per month. For all of our lofty rhetoric about a connected world and freedom and opportunity, we happily acquiesce to a system which keeps these workers—desperate, poor, and with little bargaining power—trapped in poverty. Can you live on $50 per month in Bangladesh? Yes, clearly. You can live in poverty.

Opponents of all sorts of "living wage" laws say that those who would advocate such a thing misunderstand the inherent economic forces of capitalism. Not true. We understand them all too well. We understand that, as history has amply demonstrated and continues to demonstrate, absent regulation, economic power imbalances will drive worker wages and working conditions down to outrageous and intolerable levels. People will, indeed, work all day for two dollars if that is their only option. That does not make it morally acceptable to pay people two dollars a day. Capitalism must be forcefully tempered by morality if we are to claim to be a moral people.

The system that we have—in which the vast bulk of profit flows to corporate shareholders, rather than workers and governments—is not a state of nature. It is a choice.

by Hamilton Nolan, Gawker |  Read more:
Image by Jim Cooke. Photo via AP

A Distinctive Tenderness

Some years ago I read that Sherwood Anderson’s Winesburg, Ohio (1919) was – with the exception of Scott Fitzgerald’s The Great Gatsby (1925) – the book most often taught in classes surveying twentieth-century American fiction. Whether this is true or not, Anderson (1876–1941) has certainly become, for most readers, the author of a single, groundbreaking work. Yet at least a half dozen of the stories he wrote in the 1920s and 30s are equal, or superior, to any of those in Winesburg, Ohio. “I’m a Fool”, “I Want To Know Why”, “The Egg”, “The Man Who Became a Woman” and “Death in the Woods”, to mention only the best known, underscore that Anderson should be honoured as more than a one-book author. He is, in fact, the creator of the modern American short story; the John the Baptist who prepared the way for (and influenced) writers as different as Ernest Hemingway, Eudora Welty and Ray Bradbury.

Even William Faulkner acknowledged his importance, calling him “the father of my generation of American writers and the tradition of American writing which our successors will carry on. He has never received his proper evaluation”. While Anderson’s prose can sometimes take on sonorous, biblical rhythms or echo the grandstanding rhetoric of county-fair oratory, his best short fiction manages to combine the folksiness of Mark Twain, the naturalist daring of Theodore Dreiser (to whom he dedicated his collection Horses and Men), and, more surprisingly, a linguistic freshness and simplicity he discovered in Gertrude Stein’s Tender Buttons and Three Lives. Above all, though, Anderson exhibits that distinctive tenderness for his characters, despite all their flaws and foibles, that we associate with Russian writers like Chekhov and Turgenev. He once called the latter’s Memoirs of a Sportsman “the sweetest thing in all literature”.

If that’s true, Winesburg, Ohio must be one of the most quietly bittersweet. In a cycle of linked vignettes, what we might now describe as a mosaic novel, the book portrays the loneliness, isolation and desperate yearning of the citizens of an 1890s town in the middle of farm country. At the end, its main recurring character, young George Willard, leaves Winesburg for a new life in the big city. Thematically, the stories might be summed up with the once-famous phrase from the film Cool Hand Luke: “what we have here is failure to communicate”.

In “Paper Pills”, for instance, a doctor scribbles his most intimate thoughts on small scraps that he screws up into little round balls that no one ever sees. In “Godliness”, a rich old man, who identifies with the Old Testament patriarchs, prepares to sacrifice a lamb and anoint his grandson with its blood – and is struck down by a stone from the frightened boy’s sling shot. Lonely Alice Hindman, in “Adventure”, runs naked into the street to offer herself to the first man she encounters. He turns out to be decrepit and half-witted, so she retreats to her room, “and turning her face to the wall, began trying to force herself to face bravely the fact that many people must live and die alone, even in Winesburg”.

Pathos, not cynicism or satire, is Winesburg, Ohio’s dominant mood throughout. Consider its most famous story, “The Strength of God”. One Sunday morning the Reverend Curtis Hartman, at work in his study high up in the bell tower of the Presbyterian church, discovers that through a pane in a stained-glass window, one depicting Christ with a little child, he can peer down into the bedroom of the schoolteacher Kate Swift. He is shocked to see her lying on her bed, smoking a cigarette and reading a book. That day he preaches a sermon which he hopes will “touch and awaken” this woman “far gone in secret sin”.

But the memory of Kate Swift’s white skin soon begins to haunt him. On another Sunday morning he takes a stone and chips a corner of the window, so that he can more easily see directly into her bed. Afterwards ashamed, Hartman resists going to the bell tower for weeks, but breaks down once, twice, three times. Finally on a cold January day, when he is feeling feverish, he climbs its steps and grimly waits:

“He thought of his wife and for the moment almost hated her. ‘She has always been ashamed of passion and has cheated me’, he thought. ‘Man has a right to expect living passion and beauty in a woman. He has no right to forget that he is an animal and in me there is something that is Greek. I will throw off the woman of my bosom and seek other women. I will besiege this schoolteacher. I will fly in the face of all men and if I am a creature of carnal lusts I will live then for my lusts.’”

At the story’s climax, Hartman rushes into the office of the Winesburg Eagle newspaper and lifts up a bleeding fist, which he has just driven through the stained-glass window. With “his eyes glowing and his voice ringing with fervor”, he announces that “God has appeared to me in the person of Kate Swift, kneeling naked on a bed”.

by Michael Dirda, TLS |  Read more:
Photograph: Eric Schaal

need coffee...
via:

Charley Harper Skimmerscape
via:

Riding the Wave

The sleek look is still prevalent in the moneyed precincts of Manhattan, but for a certain segment of the population, what has come to be known as “beach hair” (tousled, tawny, done to look undone) reigns supreme after Memorial Day. Even if one is nowhere near an actual beach.

Among its enthusiasts is Brett Heyman, 33, founder of the luxury accessories line Edie Parker, named for her daughter, Edie. Both Edies came into the world about three years ago, and that is when Ms. Heyman, with “definitely a lot going on in my life,” met Chris Lospalluto, a hair stylist at Sharon Dorram Color in Sally Hershberger’s Upper East Side location, who has since given her the unfussy, low-maintenance wave she seeks.

“I have messy hair to begin with, and it’s just a better version with Chris,” Ms. Heyman said, noting that Mr. Lospalluto works fast. “It’s not ‘Real Housewife’-y — that’s always the fear — and he always gets my references: the whole ‘I was surfing in Costa Rica for a month’ look.” (...)

The beach wave, once a shrugging result of summer weather and activities, has taken on a certain artfulness: stretching year-round, coast to coast, and with surprising staying power. Indeed, Oribe, the hair guru based in Miami Beach, traces the laid-back style back more than a decade. “It came from Gisele,” he said, referring to the Brazilian supermodel Gisele Bündchen. “But she has that hair naturally. It just dries like that.”

If you lack Gisele’s genetics, there is a cornucopia of sprays, mousses and creams aiming to tousle, tumble and clump. Perhaps the best known, Bumble and bumble Surf Spray, is a salty solution first introduced in 2001 that the company said has become its No. 1 seller. It is expanding, with the addition of a Surf shampoo and conditioner arriving on shelves this month.

For effective beach waves, said Jordan M, an editorial stylist at Bumble and bumble, women must “own their texture” and resist the urge to overpreen. “I see a lot of girls trying to do beach hair, but it ends up ‘Barbie doll,’ ” he said, perhaps because the long-running trend has taken on polish over the years. This season’s waves, he said, are a balance between Alexa Chung’s (“It’s a little chicer than your typical beach hair”) and that hardy-perennial summer reference: 1960s Brigitte Bardot frolicking in the South of France. (“It has that dry texture but still has a full wave to it.”) (...)

Mr. Lospalluto uses sprays with and without salt, depending on hair density and texture. Come summer, he’ll give ends some weight by rubbing in Serge Normant’s dry oil spray or Shu Uemura’s new Touch of Gloss wax. He also cautioned against going too tousled.

“Then it becomes bedhead and it doesn’t translate to everyday life,” Mr. Lospalluto said. “Everybody likes the idea of a look that came off the runway, but for going to a meeting? Looking like you’ve just had a romp in the restroom is not appropriate.”

by Bee-Shyuan Chang, NY Times |  Read more:
Photo: Casey Kelbaugh for The New York Times

What’s in Your Green Tea?

For many, no drink is more synonymous with good health than green tea, the ancient Chinese beverage known for its soothing aroma and abundance of antioxidants. By some estimates, Americans drink nearly 10 billion servings of green tea each year.

But a new report by an independent laboratory shows that green tea can vary widely from one cup to the next. Some bottled varieties appear to be little more than sugar water, containing little of the antioxidants that have given the beverage its good name. And some green tea leaves, particularly those from China, are contaminated with lead, though the metal does not appear to leach out during the brewing process.

The report was published this week by ConsumerLab.com, an independent site that tests health products of all kinds. The company, which had previously tested a variety of green tea supplements typically found in health food stores, took a close look at brewed and bottled green tea products, a segment that has grown rapidly since the 1990s.

It found that green tea brewed from loose tea leaves was perhaps the best and most potent source of antioxidants like epigallocatechin gallate, or EGCG, though plain and simple tea bags made by Lipton and Bigelow were the most cost-efficient source. Green tea’s popularity has been fueled in part by a barrage of research linking EGCG to benefits ranging from weight loss to cancer prevention, but the evidence comes largely from test tube studies, research on animals and large population studies, none of it very rigorous, and researchers could not rule out the contribution of other healthy behaviors that tend to cluster together.

Green tea is one of the most popular varieties of tea in the United States, second only to black tea, which is made from the leaves of the same plant. EGCG belongs to a group of antioxidant compounds called catechins that are also found in fruits, vegetables, wine and cocoa.

The new research was carried out in several phases. In one, researchers tested four brands of green tea beverages sold in stores. One variety, Diet Snapple Green Tea, contained almost no EGCG. Another bottled brand, Honest Tea’s Green Tea With Honey, claimed to carry 190 milligrams of catechins, but the report found that it contained only about 60 percent of that figure. The drink also contained 70 milligrams of caffeine, about two-thirds the amount in a regular cup of coffee, as well as 18 grams of sugar, about half the amount found in a can of Sprite. (...)

But the most surprising phase of the study was an analysis of the lead content in the green tea leaves. The leaves in the Lipton and Bigelow tea bags contained 1.25 to 2.5 micrograms of lead per serving. The leaves from Teavana, however, did not contain measurable amounts.

by Anahad O'Connor, NY Times |  Read more:
Photo: Everett Kennedy Brown/European Pressphoto Agency

The New Science of Giving


Like any popular food writer, Gary Taubes gets more than his share of e-mails about his work. So he didn't give it much thought one day two years ago when he got a five-line comment about a podcast he'd given the week before. It was plainly signed "John."

The man was intrigued by Taubes's theories on why people get fat—more specifically, the food writer's argument that most of the science on obesity is either badly flawed or inconclusive. What was needed, Taubes had said, was a comprehensive experiment that could answer some of the key questions about how our bodies process food. The problem is that such a study is hugely expensive. "From the little I know about the science of nutrition, your study makes a lot of sense," the listener wrote, adding that he ran a foundation focused on public policy.

Taubes noticed that the full name in the email was John Arnold, and a quick search turned up a curious figure under that name: a wunderkind natural-gas trader at Enron who later founded his own hedge fund. The fund was secretive—little-known in its hometown, Houston, much less the rest of the country—but legendary in hedge-fund circles for its mega-returns. It was starting to get interesting.

Taubes passed the name on to Peter Attia, a medical doctor with whom he had recently founded a nonprofit focused on nutrition science. Attia recalls that when he called to see if he could set up a meeting with Arnold, the response was, "First give us the names of 20 top experts in the field, half of whom think you are crazy." A few weeks later, he found himself in a conference room just off the trading floor at Arnold's Houston office, where it quickly became apparent that Arnold and his staff had already spoken with most, if not all, of the experts Attia had provided. Something else was apparent, too: Though boyish and just 37, Arnold was dead serious about launching the obesity study. Indeed, his ambitions couldn't have been higher. He wanted to know: if all the best and brightest food scientists got together, with unlimited resources, what could they accomplish?

Arnold, it turns out, had accumulated a fortune estimated at $4 billion in the past decade—only a handful of people on Wall Street made more during that time. Although he had not yet announced it, Arnold had decided to give almost all of it away. In October 2012, he closed his hedge fund, Centaurus Energy, and retired. In U.S. history, there may never have been a self-made individual with so much money who devoted himself to philanthropy at such a young age.


But as Taubes and Attia discovered, Arnold and his wife, Laura, have an unusual approach to giving. Most billionaires tend to write checks to good causes they're part of, hospitals where they were treated or universities they attended. These are the so-called "grateful-recipient" donors. Or there are donors who make sizable gifts to meet an obvious need in a community, such as hunger or education. But at a time when charitable giving in the U.S. is still down from its peak in 2007, the Arnolds want to try something new and somewhat grander. John says the goal is to make "transformational" changes to society.

The Arnolds want to see if they can use their money to solve some of the country's biggest problems through data analysis and science, with an unsentimental focus on results and an aversion to feel-good projects whose success can't be quantified. No topic is too ambitious: Along with obesity, the Arnolds plan to dig into criminal justice and pension reform, among others. Anne Milgram, the former New Jersey attorney general hired to tackle the criminal-justice issue, has a name for all this: She calls it the "Moneyball" approach to giving, a reference to the book and movie about how the Oakland A's used smart statistical analysis to upend some of baseball's conventional wisdom. And the Arnolds are in no hurry for answers. Indeed, they believe patience is a key resource behind their giving.

Today, the Laura and John Arnold Foundation is bankrolling a $26 million nutrition study by Attia's nonprofit, an effort that involves the use of metabolic chambers and that Attia likes to call "the Manhattan Project of obesity." And that's just part of the splash the foundation is making: Out of virtually nowhere, the couple gave away or pledged $423 million last year, vaulting them into third place among the country's biggest givers, according to the latest ranking from The Chronicle of Philanthropy. The Arnolds aren't stopping at research; they're also funding reform efforts that they say align with the findings of their studies—and the political candidates who agree.

But as the Arnolds' profile grows, of course, not everyone is a fan of this science of giving, especially since it comes at a cost to the many individuals and local organizations who need direct help now and could benefit from their billions. The answer to the most-asked question may not be known for years: Will their plan work?

by Brad Reagan, WSJ |  Read more:
Photograph by Henrik Olund; Peter Attia Photographed at Translational Research Institute for Metabolism and Diabete

The Need for Critical Science Journalism

The bulk of contemporary science journalism falls under the category of "infotainment". This expression describes science writing that informs a non-specialist target audience about new scientific discoveries in an entertaining fashion. The "informing" typically consists of giving the reader some historical background on the scientific study, summarising its key findings and then describing the significance and implications of the research. Analogies are used to convey complex scientific concepts so that a reader without a professional scientific background can grasp the ideas driving the research.

Direct quotes from the researchers also help illustrate the motivations, relevance, and emotional impact of the findings. The entertainment component varies widely, ranging from an enticing or witty style of writing to the choice of the subject matter. Freaky copulation techniques in the animal kingdom, discoveries that change our views about the beginnings of the universe or of life, heart-warming stories about ailing children that might be cured through new scientific breakthroughs, sci-fi robots, quirky anecdotes or heroic struggles of the scientists involved in the research – these are examples of topics that will capture the imagination of the intended audience.

However, infotainment science journalism rarely challenges the validity of the scientific research study or criticises its conclusions. Perfunctory comments, either by the journalist or in the form of quotes – such as "It is not clear whether these findings will also apply to humans" or "This is just a first step and more research is needed" – are usually found at the end of such pieces, but it is rare to find an independent or detailed critical analysis. (...)

When a scientist is asked to write an editorial about a new scientific paper, she is expected to not only mention the novelty and significance of the paper – there is also an expectation to point out major flaws and limitations, including those that might have been inadequately addressed during the peer review process.

Such an analytical and critical approach can be somewhat antithetical to infotainment science journalism. It is difficult to write an infotainment-style gripping narrative about the discovery of a new protein that acts as a master regulator of ageing if one has to remind the reader that upon critical analysis of the data, the alleged "master regulator" is just one of 20 other proteins that could also be seen as "master regulators" and that there were potential flaws in how cellular ageing was assessed.

Infotainment science journalism will continue to be the dominant form of science writing, because the portrayal of science as an exciting adventure with great promise and few uncertainties is bound to garner a large readership. Hopefully, we will also see a growth in critical and investigative science journalism that analyses and challenges scientific studies, so that readers can choose from a broad array of science journalism offerings.

To help distinguish between infotainment science journalism and critical science journalism, the reader can evaluate a science news article or blogpost using the following criteria:

by Jalees Rehman, The Guardian |  Read more:
Photo: Radu Razvan/Alamy

Wednesday, May 22, 2013

Feist


[ed. Full concert, Paris, 2005. Awesome.]

Kinski | Mastroianni (Stay as You Are)
via:

Teens Are Turning Away from Facebook?


[ed. It doesn’t take a lot of insight to realize teenagers will always need a place to hang out away from the prying eyes of adults. The question is where they’ll do it. Privacy controls on FB have become too complicated (and FB too complicit in exploiting those complications). Yet the future of social networking hardly seems limited (nor should it be) to 140 characters (i.e., Twitter) or to Tumblr blogs. Kids are smart. They’ll figure out what’s real and what isn’t. In the meantime, whenever you read an article like this, it’s worth asking what the motives are for writing it (and why such a heavy emphasis on Tumblr?).]

Teenagers really are over Facebook. In February the social network warned investors that "our younger users ... are aware of and actively engaging with other products and services similar to, or as a substitute for, Facebook." And in April the investment bank Piper Jaffray reported that products and services like Tumblr and Twitter were further eroding Facebook's dominance among the Justin Bieber set. But why? In a deep report published on Tuesday, Pew Research explains that teenagers departing the social network's blue confines are looking for something more... real. More authentic. Which, ironically, was the initial draw of Facebook, one of the first social networks to require real names.

Pew shows how Facebook has been slowly colonized by the very forces teens signed up to escape: watchful parents, too-old adults, and "drama" — nasty conversations that would never arise in real life. To contend with these annoying developments, teens aren't deleting their Facebook accounts; they're just using them less and less, spending more time on Twitter and Instagram, where conversations are limited to short-form text, links, and simple photos; or Tumblr, which emphasizes content over consolidated user profiles. Here's how one (anonymous) interviewee put it to Pew during a focus group:
Female (age 15): “I have a Facebook, a Tumblr, and Twitter. I don’t use Facebook or Twitter much. I rather use Tumblr to look for interesting stories. I like Tumblr because I don’t have to present a specific or false image of myself and I don’t have to interact with people I don’t necessarily want to talk to.”  (...)
"Where people post unnecessary pictures and say unnecessary things" is probably not the slogan Facebook was hoping for, especially among such an impressionable demographic. But it's probably music to Marissa Mayer's ears — Yahoo's $1.1 billion deal for Tumblr is being seen as something of a sea change in social media from what The New York Times described in today's paper as a "passive" kind of "social directory," as opposed to, say, Tumblr, one of many sites that have "come up with ways to let people control and generate content and project identity." Because that's, you know, a little bit more real these days.

by J.K. Trotter, Atlantic Wire |  Read more:
Image: Shutterstock

elcafedeloslibres, “Old memories, Collage, 2010”
via: