Friday, May 24, 2013

The Rise and Fall of Charm in American Men


If one were to recast The Rockford Files, as Universal Pictures is intending to do, would the Frat Pack actor Vince Vaughn seem the wisest choice to play Jim Rockford, the character James Garner inhabited with such sly intelligence and bruised suavity? Universal apparently thinks so.

One can say many things about the talents of Vaughn, and were Universal embarking on a bit of polyester parody—remaking, say, Tony Rome, among the least of the neo-noirs—Vaughn’s gift for sending up low pop would be just so. But to aim low in this case is to miss the deceptive grace that Garner brought to the original, and prompts a bigger question: Whatever happened to male charm—not just our appreciation of it, or our idea of it, but the thing itself?

Yes, yes, George Clooney—let’s get him out of the way. For nearly 20 years, any effort to link men and charm has inevitably led to Clooney. Ask women or men to name a living, publicly recognized charming man, and 10 out of 10 will say Clooney. That there exists only one choice—and an aging one—proves that we live in a culture all but devoid of male charm.

Mention Clooney, and the subject turns next to whether (or to what extent) he’s the modern version of that touchstone of male charm, Cary Grant. Significantly, Grant came to his charm only when he came, rather late, to his adulthood. An abandoned child and a teenage acrobat, he spent his first six years in Hollywood playing pomaded pretty boys. In nearly 30 stilted movies—close to half of all the pictures he would ever make—his acting was tentative, his personality unformed, his smile weak, his manner ingratiating, and his delivery creaky. See how woodenly he responds to Mae West’s most famous (and most misquoted) line, in She Done Him Wrong: “Why don’t you come up sometime and see me?” But in 1937 he made the screwball comedy The Awful Truth, and all at once the persona of Cary Grant gloriously burgeoned. Out of nowhere he had assimilated his offhand wit, his playful knowingness, and, in a neat trick that allowed him to be simultaneously cool and warm, his arch mindfulness of the audience he was letting in on the joke.

Grant had developed a new way to interact with a woman onscreen: he treated his leading lady as both a sexually attractive female and an idiosyncratic personality, an approach that often required little more than just listening to her—a tactic that had previously been as ignored in the pictures as it remains, among men, in real life. His knowing but inconspicuously generous style let the actress’s performance flourish, making his co-star simultaneously regal and hilarious.

In short, Grant suddenly and fully developed charm, a quality that is tantalizing because it simultaneously demands detachment and engagement. Only the self-aware can have charm: It’s bound up with a sensibility that at best approaches wisdom, or at least worldliness, and at worst goes well beyond cynicism. It can’t exist in the undeveloped personality. It’s an attribute foreign to many men because most are, for better and for worse, childlike. These days, it’s far more common among men over 70—probably owing to the era in which they reached maturity rather than to the mere fact of their advanced years. What used to be called good breeding is necessary (but not sufficient) for charm: no one can be charming who doesn’t draw out the overlooked, who doesn’t shift the spotlight onto others—who doesn’t, that is, possess those long-forgotten qualities of politesse and civilité. A great hostess perforce has charm (while legendary hostesses are legion—Elizabeth Montagu, Madame Geoffrin, Viscountess Melbourne, Countess Greffulhe—I can’t think of a single legendary host), but today this social virtue goes increasingly unrecognized. Still, charm is hardly selfless. All of these acts can be performed only by one at ease with himself yet also intensely conscious of himself and of his effect on others. And although it’s bound up with considerateness, it really has nothing to do with, and is in fact in some essential ways opposed to, goodness. Another word for the lightness of touch that charm requires in humor, conversation, and all other aspects of social relations is subtlety, which carries both admirable and dangerous connotations. Charm’s requisite sense of irony is also the requisite for social cruelty (...)

by Benjamin Schwarz, The Atlantic |  Read more:
Illustration: Thomas Allen

Helen Frankenthaler, Broome Street at Night (1987)
via:

Glaeser on Cities

Edward Glaeser of Harvard University and author of Triumph of the City talks with EconTalk host Russ Roberts about American cities. The conversation begins with a discussion of the history of Detroit over the last century and its current plight. What might be done to improve Detroit's situation? Why are other cities experiencing similar challenges to those facing Detroit? Why are some cities thriving and growing? What policies might help ailing cities and what policies have helped those cities that succeed? The conversation concludes with a discussion of why cities have such potential for growth. (ed. Podcast)

Intro. [Recording date: April 15, 2013.] Russ: Topic is cities; start with recent post you had at the New York Times's blog, Economix, on Detroit. Give us a brief history of that city. It's not doing well right now, but it wasn't always that way, was it?

Guest: No. If you look back 120 years ago or so, Detroit looked like one of the most entrepreneurial places on the planet. It seemed as if there was an automotive genius on every street corner. If you look back 60 years ago, Detroit was among the most productive places on the planet, with the companies that were formed by those automotive geniuses coming to fruition and producing cars that were the technological wonder of the world. So, Detroit's decline is of more recent heritage, of the past 50 years. And it's an incredible story, an incredible tragedy. And it tells us a great deal about the way that cities work and the way that local economies function.

Russ: So, what went wrong? 

Guest: If we go back to those small-scale entrepreneurs of 120 years ago--it's not just Henry Ford; it's the Dodge brothers, the Fisher brothers, David Dunbar Buick, Billy Durant in nearby Flint--all of these men were trying to figure out how to solve this technological problem, making the automobile cost-effective, producing cheap, solid cars for ordinary people to run in the world. They managed to do that, Ford above all, by taking advantage of each other's ideas, each other's supplies, financing that was collaboratively arranged. And together they were able to achieve this remarkable technological feat. The problem was the big idea was a vast, vertically integrated factory. And that's a great recipe for short-run productivity, but a really bad recipe for long-run reinvention. And a bad recipe for urban areas more generally, because once you've got a River Rouge plant, once you've got this massive vertically integrated factory, it doesn't need the city; it doesn't give to the city. It's very, very productive but you could move it outside the city, as indeed Ford did when he moved his plant from the central city of Detroit to River Rouge. And then of course once you are at this stage of the technology of an industry, you can move those plants to wherever it is that cost minimization dictates you should go. And that's of course exactly what happened. Jobs first suburbanized, then moved to lower-cost areas. The work of Tom Holmes at the U. of Minnesota shows how remarkable the difference is in state policies toward unions and labor, how powerful those policies were in explaining industrial growth after 1947. And of course it globalizes. It leaves cities altogether. And that's exactly what happened in automobiles. In some sense, what was left was relatively little, because it's a sort of inversion[?] of the natural resource curse: it was precisely because Detroit had these incredibly productive machines that they squeezed out all other sources of invention--rather than having lots of small entrepreneurs you had middle managers for General Motors (GM) and Ford. And those guys were not going to be particularly adept at figuring out some new industry and new activity when automobile production moved elsewhere or declined. And that's at least how I think about this--that successful cities today are marked by small firms, smart people, and connections to the outside world. And that was what Detroit was about in 1890 but it's not what Detroit was about in 1970. And I think that sowed the seeds of decline.

4:25 Russ: So, one way to describe what you are saying is in the early part of the 20th century, Detroit was something like Silicon Valley, a hub of creative talent, a lot of complementarity between the ideas and the supply chain and interactions between those people that all came together. Lots of competition, which encouraged people to try harder and innovate, or do the best they could. Are you suggesting then that Silicon Valley is prone to this kind of change at some point? If the computer were to become less important somewhere down the road or produced in a different way? 

Guest: The question is to what extent do the Silicon Valley firms become dominated by very strong returns to scale, with a few dominant firms capitalizing on it. I think it's built into the genes of every industry that it will eventually decline. The question is whether or not the region then reinvents itself. And there are two things that enable particular regions to reinvent themselves. One is skills, measured education, human capital. The share--the fraction of the metropolitan area with a college degree as of 1940 or 1960 or 1970--has been a very good predictor of whether metropolitan areas, particularly northeastern or northwestern ones, have been able to turn themselves around. And a particular form of human capital, entrepreneurial human capital, also seems to be critical, despite the fact that our proxies for entrepreneurial talent are relatively weak. We typically use things like the number of establishments per worker in a given area, or the share of employment in startups from some initial time period. Those weak proxies are still very, very strong predictors of urban regeneration: places that have lots of little firms have managed to do much better than places that were dominated by a few large firms, particularly if they are in a single industry. So, let's think for a second about Silicon Valley. Silicon Valley has lots of skilled workers. That's good. But what I don't know is whether Silicon Valley is going to look like it's dominated by a few large firms, Google playing the role of General Motors, or whether it will continue to have lots of little startups. There's nothing wrong with big firms in terms of productivity. But they tend to train middle managers, not entrepreneurs. So that's, I think, the other thing to look for. And one of the things that we have seen historically is that those little entrepreneurs are pretty good at switching industries when they need to. Think about New York, where the dominant industry was garment manufacturing. It was a larger industrial cluster in the 1950s than automobile production was. But those small-scale people who led those garment firms were pretty adept at doing something else when the industry jettisoned hundreds of thousands of jobs in the 1960s, in a way that the middle managers for U.S. Steel or General Motors were not.

by Edward Glaeser, hosted by Russ Roberts, Library of Economics and Liberty |  Read more:
Photo: Julian Dufort, Money Magazine

Sandra Meisner. When the night falls.


My Dog Sighs. Disposable city art (free for the taking)
via:

Little Brother is Watching You

It’s clear that the “expectation of privacy” would vary a great deal based on circumstances, but the matter of “changing and varied social norms” bears further scrutiny. Is the proliferation of recording devices altering our concept of privacy itself? I asked Abbi, who is a P.P.E. major (Philosophy, Politics, and Economics), whether he thought the “expectation of privacy” had changed in his lifetime. His response was striking:
People my age know that there are probably twice as many photos on the Internet of us, that we’ve never seen, or even know were taken, as there are that we’ve seen. It’s a reality we live with; it’s something people are worried about, and try to have some control over, say by controlling the privacy on their social media accounts. 
But at the same time, people my age tend to know that nowhere is really safe, I guess. You’re at risk of being recorded all the time, and at least for me, and I think for a lot of people who are more reasonable, that’s only motivation to be the best person you can be; to exhibit as good character as you can, because if all eyes are on you, you don’t really have the option to be publicly immoral, or to do wrong without being accountable.
Kennerly had a different response to the same question:
In many ways, the ubiquity of recording devices (we all have one in our pockets) doesn’t really change the analysis: you’ve never had the guarantee, by law or by custom, that a roomful of strangers will keep your secrets, even if they say they will. Did Abbi violate some part of the social compact by deceiving Luntz? In my opinion, yes. But falsity has a place in our society, and, as the Supreme Court confirmed last summer in United States v. Alvarez, certain false statements (outside of defamation, fraud, and perjury) can indeed receive First Amendment protection. As Judge Kozinski said in that case (when it was in front of the 9th Circuit), “white lies, exaggerations and deceptions [ ] are an integral part of human intercourse.”
Let me quote Kozinski at length:
Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”)….
An important aspect of personal autonomy is the right to shape one’s public and private persona by choosing when to tell the truth about oneself, when to conceal, and when to deceive. Of course, lies are often disbelieved or discovered, and that, too, is part of the push and pull of social intercourse. But it’s critical to leave such interactions in private hands, so that we can make choices about who we are. How can you develop a reputation as a straight shooter if lying is not an option?

by Maria Bustillos, New Yorker |  Read more:
Illustration by Tom Bachtell

From Here You Can See Everything

In Infinite Jest, David Foster Wallace imagines a film (also called Infinite Jest) so entertaining that anyone who starts watching it will die watching it, smiling vacantly at the screen in a pool of their own soiling. It’s the ultimate gripper of eyeballs. Media, in this absurdist rendering, evolves past parasite to parasitoid, the kind of overly aggressive parasite that kills its host.

Wallace himself had a strained relationship with television. He said in his 1993 essay “E Unibus Pluram” that television “can become malignantly addictive,” which, he explained, means, “(1) it causes real problems for the addict, and (2) it offers itself as relief from the very problems it causes.” Though I don’t think he would have labeled himself a television addict, Wallace was known to indulge in multi-day television binges. One can imagine those binges raised to the power of Netflix Post-Play and all seven seasons of The West Wing.

That sort of binge-television viewing has become a normal, accepted part of American culture. Saturdays with a DVD box set, a couple bottles of wine, and a big carton of goldfish crackers are a pretty common new feature of American weekends. Netflix bet big on this trend with their release of House of Cards. They released all 13 episodes of the first season at once: roughly one full Saturday’s worth. It’s a show designed for the binge. The New York Times quoted the show’s producer as saying, with a laugh, “Our goal is to shut down a portion of America for a whole day.” They don’t say what kind of laugh it was.

The scariest part of this new binge culture is that hours spent bingeing don’t seem to displace other media consumption hours; we’re just adding them to our weekly totals. Lump in hours on Facebook, Pinterest, YouTube, and maybe even the occasional non-torrented big-screen feature film and you’re looking at a huge number of hours per person. (...)

In Wallace’s book, a Canadian terrorist informant of foggy allegiance asks an American undercover agent a form of the question: “If Americans would choose to press play on the film Infinite Jest, knowing it will kill them, doesn’t that mean they are already dead inside, that they have chosen entertainment over life?” Of course vanishingly few Americans would press play on a film that was sure to end their lives. But there’s a truth in this absurdity. Almost every American I know does trade large portions of his life for entertainment, hour by weeknight hour, binge by Saturday binge, Facebook check by Facebook check. I’m one of them.

by James A. Pearson, The Morning News |  Read more:
Image: Alistair Frost, Metaphors don't count, 2011. Courtesy the artist and Zach Feuer Gallery, New York.

Why You Like What You Like

Food presents the most interesting gateway to thinking about liking. Unlike with music or art, we have a very direct relationship with what we eat: survival. Also, every time you sit down to a meal you have myriad “affective responses,” as psychologists call them.

One day, I join Debra Zellner, a professor of psychology at Montclair State University who studies food liking, for lunch at the Manhattan restaurant Del Posto. “What determines what you’re selecting?” Zellner asks, as I waver between the Heritage Pork Trio with Ribollita alla Casella & Black Cabbage Stew and the Wild Striped Bass with Soft Sunchokes, Wilted Romaine & Warm Occelli Butter.

“What I’m choosing, is that liking? It’s not liking the taste,” Zellner says, “because I don’t have it in my mouth.”

My choice is the memory of all my previous choices—“every eating experience is a learning experience,” as the psychologist Elizabeth Capaldi has written. But there is novelty here too, an anticipatory leap forward, driven in part by the language on the menu. Words such as “warm” and “soft” and “heritage” are not free riders: They are doing work. In his book The Omnivorous Mind, John S. Allen, a neuroanthropologist, notes that simply hearing an onomatopoetic word like “crispy” (which the chef Mario Batali calls “innately appealing”) is “likely to evoke the sense of eating that type of food.” When Zellner and I mull over the choices, calling out what “sounds good,” there is undoubtedly something similar going on.

As I take a sip of wine—a 2004 Antico Broilo, a Friulian red—another element comes into play: How you classify something influences how much you like it. Is it a good wine? Is it a good red wine? Is it a good wine from the refosco grape? Is it a good red wine from Friuli?

Categorization, says Zellner, works in several ways. Once you have had a really good wine, she says, “you can’t go back. You wind up comparing all these lesser things to it.” And yet, when she interviewed people about their drinking of, and liking for, “gourmet coffee” and “specialty beer” compared with “regular” versions such as Folgers and Budweiser, the “ones who categorized actually like the everyday beer much more than the people who put all beer in the same category,” she says. Their “hedonic contrast” was reduced. In other words, the more they could discriminate what was good about the very good, the more they could enjoy the less good. We do this instinctively—you have undoubtedly said something like “it’s not bad, for airport food.”

There is a kind of tragic irony when it comes to enjoying food: As we eat something, we begin to like it less. From a dizzy peak of anticipatory wanting, we slide into a slow despond of dimming affection, slouching into revulsion (“get this away from me,” you may have said, pushing away a once-loved plate of Atomic Wings).

In the phenomenon known as “sensory specific satiety,” the body in essence sends signals when it has had enough of a certain food. In one study, subjects who’d rated the appeal of several foods were asked about them again after eating one for lunch; this time they rated the food’s pleasantness lower. They were not simply “full,” but their bodies were striving for balance, for novelty. If you have ever had carb-heavy, syrup-drenched pancakes for breakfast, you are not likely to want them again at lunch. It’s why we break meals up into courses: Once you’ve had the mixed greens, you are not going to like or want more mixed greens. But dessert is a different story.

Sated as we are at the end of a meal, we are suddenly faced with a whole new range of sensations. The capacity is so strong it has been dubbed the “dessert effect.” Suddenly there’s a novel, nutritive gustatory sensation—and how could our calorie-seeking brains resist that? As the neuroscientist Gary Wenk notes, “your neurons can only tolerate a total deprivation of sugar for a few minutes before they begin to die.” (Quick, apply chocolate!) As we finish dessert, we may be beginning to get the “post-ingestive” nutritional benefits of our main course. Sure, that chocolate tastes good, but the vegetables may be making you feel so satisfied. In the end, memory blurs it all. A study co-authored by the psychologist Paul Rozin suggests that the pleasure we remember from a meal has little to do with how much we consumed, or how long we spent doing it (a phenomenon called “duration neglect”). “A few bites of a favorite dish in a meal,” the researchers write, “may do the full job for memory.”

by Tom Vanderbilt, Smithsonian |  Read more:
Image: Bartholomew Cooke

Ron Hincks, Impulsive (1966)
via:

Mses. Streep and Clinton.
via:

Thursday, May 23, 2013

We Need an International Minimum Wage


The deadly collapse of a garment factory in Bangladesh has sparked calls for better worker treatment. The revelation that Apple manages to avoid almost all taxes has drawn vague calls for tax reform. A more direct path to fairness: let's just have a reasonable international minimum wage.

We live in a global economy, as pundits are so fond of proclaiming. The global economy is the delightful playground of multinational corporations. They're able to drastically lower their labor costs by outsourcing work to the world's poorest and most desperate people. And they're able to escape paying taxes, like normal businesses do, by deploying armies of lawyers to play various countries' tax codes off against one another. The result is that money that should, in fairness, go to workers and governments ends up in the pockets of the corporation. The global economy is extremely advantageous to corporations, who owe no loyalty to anyone or anything except their stock price; it is disadvantageous to normal human beings, who exist in the world and not as a notional accounting trick.

In America, we accept the minimum wage as a given. It enjoys broad support. It is the realization of an ideal: that there is a point at which low pay becomes a moral outrage. (Where that point is, of course, is up for continuous debate.) Do not mistake the minimum wage for some sort of consensus of nonpartisan economists; it is a moral statement by our society. A statement of our belief that the economically powerful should not have a free hand to exploit the powerless.

Yet we are all hypocrites. We protect ourselves with a minimum wage, while at the same time enjoying the low consumer prices that come with ultra-low wages being paid to workers abroad. Our own purchasing habits reward companies for paying wages that are sure to keep their workers in poverty for life. We soothe ourselves by saying that these desperately poor workers are still better off than they would be without a job; yet we would reject that argument if an employer here tried to use it to pay us less than the minimum wage. We simply do not care if people halfway around the world who we do not see are exploited, if it saves us money.

Many business interests say that raising the wretched wages in one country will simply send the factories to another, even poorer country. That's a great reason to institute an international standard that would render that strategy moot. Bangladesh, where more than a thousand garment workers died in the Rana Plaza collapse thanks to the cutthroat quest to drive down prices, represents the bottom of the international manufacturing economy. The minimum wage of garment workers there is less than $50 per month. For all of our lofty rhetoric about a connected world and freedom and opportunity, we happily acquiesce to a system which keeps these workers—desperate, poor, and with little bargaining power—trapped in poverty. Can you live on $50 per month in Bangladesh? Yes, clearly. You can live in poverty.

Opponents of all sorts of "living wage" laws say that those who would advocate such a thing misunderstand the inherent economic forces of capitalism. Not true. We understand them all too well. We understand that, as history has amply demonstrated and continues to demonstrate, absent regulation, economic power imbalances will drive worker wages and working conditions down to outrageous and intolerable levels. People will, indeed, work all day for two dollars if that is their only option. That does not make it morally acceptable to pay people two dollars a day. Capitalism must be forcefully tempered by morality if we are to claim to be a moral people.

The system that we have—in which the vast bulk of profit flows to corporate shareholders, rather than workers and governments—is not a state of nature. It is a choice.

by Hamilton Nolan, Gawker |  Read more:
Image by Jim Cooke. Photo via AP

A Distinctive Tenderness

Some years ago I read that Sherwood Anderson’s Winesburg, Ohio (1919) was – with the exception of Scott Fitzgerald’s The Great Gatsby (1925) – the book most often taught in classes surveying twentieth-century American fiction. Whether this is true or not, Anderson (1876–1941) has certainly become, for most readers, the author of a single, groundbreaking work. Yet at least a half dozen of the stories he wrote in the 1920s and 30s are equal, or superior, to any of those in Winesburg, Ohio. “I’m a Fool”, “I Want To Know Why”, “The Egg”, “The Man Who Became a Woman” and “Death in the Woods”, to mention only the best known, underscore that Anderson should be honoured as more than a one-book author. He is, in fact, the creator of the modern American short story; the John the Baptist who prepared the way for (and influenced) writers as different as Ernest Hemingway, Eudora Welty and Ray Bradbury.

Even William Faulkner acknowledged his importance, calling him “the father of my generation of American writers and the tradition of American writing which our successors will carry on. He has never received his proper evaluation”. While Anderson’s prose can sometimes take on sonorous, biblical rhythms or echo the grandstanding rhetoric of county-fair oratory, his best short fiction manages to combine the folksiness of Mark Twain, the naturalist daring of Theodore Dreiser (to whom he dedicated his collection Horses and Men), and, more surprisingly, a linguistic freshness and simplicity he discovered in Gertrude Stein’s Tender Buttons and Three Lives. Above all, though, Anderson exhibits that distinctive tenderness for his characters, despite all their flaws and foibles, that we associate with Russian writers like Chekhov and Turgenev. He once called the latter’s Memoirs of a Sportsman “the sweetest thing in all literature”.

If that’s true, Winesburg, Ohio must be one of the most quietly bittersweet. In a cycle of linked vignettes, what we might now describe as a mosaic novel, the book portrays the loneliness, isolation and desperate yearning of the citizens of an 1890s town in the middle of farm country. At the end, its main recurring character, young George Willard, leaves Winesburg for a new life in the big city. Thematically, the stories might be summed up with the once-famous phrase from the film Cool Hand Luke: “what we have here is failure to communicate”.

In “Paper Pills”, for instance, a doctor scribbles his most intimate thoughts on small scraps that he screws up into little round balls that no one ever sees. In “Godliness”, a rich old man, who identifies with the Old Testament patriarchs, prepares to sacrifice a lamb and anoint his grandson with its blood – and is struck down by a stone from the frightened boy’s slingshot. Lonely Alice Hindman, in “Adventure”, runs naked into the street to offer herself to the first man she encounters. He turns out to be decrepit and half-witted, so she retreats to her room, “and turning her face to the wall, began trying to force herself to face bravely the fact that many people must live and die alone, even in Winesburg”.

Pathos, not cynicism or satire, is Winesburg, Ohio’s dominant mood throughout. Consider its most famous story, “The Strength of God”. One Sunday morning the Reverend Curtis Hartman, at work in his study high up in the bell tower of the Presbyterian church, discovers that through a pane in a stained-glass window, one depicting Christ with a little child, he can peer down into the bedroom of the schoolteacher Kate Swift. He is shocked to see her lying on her bed, smoking a cigarette and reading a book. That day he preaches a sermon which he hopes will “touch and awaken” this woman “far gone in secret sin”.

But the memory of Kate Swift’s white skin soon begins to haunt him. On another Sunday morning he takes a stone and chips a corner of the window, so that he can more easily see directly into her bed. Afterwards ashamed, Hartman resists going to the bell tower for weeks, but breaks down once, twice, three times. Finally on a cold January day, when he is feeling feverish, he climbs its steps and grimly waits:

“He thought of his wife and for the moment almost hated her. ‘She has always been ashamed of passion and has cheated me’, he thought. ‘Man has a right to expect living passion and beauty in a woman. He has no right to forget that he is an animal and in me there is something that is Greek. I will throw off the woman of my bosom and seek other women. I will besiege this schoolteacher. I will fly in the face of all men and if I am a creature of carnal lusts I will live then for my lusts.’”

At the story’s climax, Hartman rushes into the office of the Winesburg Eagle newspaper and lifts up a bleeding fist, which he has just driven through the stained-glass window. With “his eyes glowing and his voice ringing with fervor”, he announces that “God has appeared to me in the person of Kate Swift, kneeling naked on a bed”.

by Michael Dirda, TLS |  Read more:
Photograph: Eric Schaal

need coffee...
via:

Charley Harper, Skimmerscape
via:

Riding the Wave

The sleek look is still prevalent in the moneyed precincts of Manhattan, but for a certain segment of the population, what has come to be known as “beach hair” (tousled, tawny, done to look undone) reigns supreme after Memorial Day. Even if one is nowhere near an actual beach.

Among its enthusiasts is Brett Heyman, 33, founder of the luxury accessories line Edie Parker, named for her daughter, Edie. Both Edies came into the world about three years ago, and that is when Ms. Heyman, with “definitely a lot going on in my life,” met Chris Lospalluto, a hair stylist at Sharon Dorram Color in Sally Hershberger’s Upper East Side location, who has since given her the unfussy, low-maintenance wave she seeks.

“I have messy hair to begin with, and it’s just a better version with Chris,” Ms. Heyman said, noting that Mr. Lospalluto works fast. “It’s not ‘Real Housewife’-y — that’s always the fear — and he always gets my references: the whole ‘I was surfing in Costa Rica for a month’ look.” (...)

The beach wave, once a shrugging result of summer weather and activities, has taken on a certain artfulness: stretching year-round, coast to coast, and with surprising staying power. Indeed Oribe, the hair guru based in Miami Beach, traces the laid-back style back more than a decade. “It came from Gisele,” he said, referring to the Brazilian supermodel Gisele Bündchen. “But she has that hair naturally. It just dries like that.”

If you lack Gisele’s genetics, there is a cornucopia of sprays, mousses and creams aiming to tousle, tumble and clump. Perhaps the best known, Bumble and bumble Surf Spray, is a salty solution first introduced in 2001 that the company said has become its No. 1 seller. The line is expanding, with a Surf shampoo and conditioner arriving on shelves this month.

For effective beach waves, said Jordan M, an editorial stylist at Bumble and bumble, women must “own their texture” and resist the urge to overpreen. “I see a lot of girls trying to do beach hair, but it ends up ‘Barbie doll,’ ” he said, perhaps because the long-running trend has taken on polish over the years. This season’s waves, he said, are a balance between Alexa Chung’s (“It’s a little chicer than your typical beach hair”) and that hardy-perennial summer reference: 1960s Brigitte Bardot frolicking in the South of France. (“It has that dry texture but still has a full wave to it.”). (...)

Mr. Lospalluto uses sprays with and without salt, depending on hair density and texture. Come summer, he’ll give ends some weight by rubbing in Serge Normant’s dry oil spray or Shu Uemura’s new Touch of Gloss wax. He also cautioned against going too tousled.

“Then it becomes bedhead and it doesn’t translate to everyday life,” Mr. Lospalluto said. “Everybody likes the idea of a look that came off the runway, but for going to a meeting? Looking like you’ve just had a romp in the restroom is not appropriate.”

by Bee-Shyuan Chang, NY Times |  Read more:
Photo: Casey Kelbaugh for The New York Times

What’s in Your Green Tea?

For many, no drink is more synonymous with good health than green tea, the ancient Chinese beverage known for its soothing aroma and abundance of antioxidants. By some estimates, Americans drink nearly 10 billion servings of green tea each year.

But a new report by an independent laboratory shows that green tea can vary widely from one cup to the next. Some bottled varieties appear to be little more than sugar water, containing little of the antioxidants that have given the beverage its good name. And some green tea leaves, particularly those from China, are contaminated with lead, though the metal does not appear to leach out during the brewing process.

The report was published this week by ConsumerLab.com, an independent site that tests health products of all kinds. The company, which had previously tested a variety of green tea supplements typically found in health food stores, took a close look at brewed and bottled green tea products, a segment that has grown rapidly since the 1990s.

It found that green tea brewed from loose tea leaves was perhaps the best and most potent source of antioxidants like epigallocatechin gallate, or EGCG, though plain and simple tea bags made by Lipton and Bigelow were the most cost-efficient source. Green tea’s popularity has been fueled in part by a barrage of research linking EGCG to benefits ranging from weight loss to cancer prevention, but the evidence comes largely from test-tube studies, research on animals and large population studies, none of it very rigorous, and researchers could not rule out the contribution of other healthy behaviors that tend to cluster together.

Green tea is one of the most popular varieties of tea in the United States, second only to black tea, which is made from the leaves of the same plant. EGCG belongs to a group of antioxidant compounds called catechins that are also found in fruits, vegetables, wine and cocoa.

The new research was carried out in several phases. In one, researchers tested four brands of green tea beverages sold in stores. One variety, Diet Snapple Green Tea, contained almost no EGCG. Another bottled brand, Honest Tea’s Green Tea With Honey, claimed to carry 190 milligrams of catechins, but the report found that it contained only about 60 percent of that figure. The drink also contained 70 milligrams of caffeine, about two-thirds the amount in a regular cup of coffee, as well as 18 grams of sugar, about half the amount found in a can of Sprite. (...)

But the most surprising phase of the study was an analysis of the lead content in the green tea leaves. The leaves in the Lipton and Bigelow tea bags contained 1.25 to 2.5 micrograms of lead per serving. The leaves from Teavana, however, did not contain measurable amounts.

by Anahad O'Connor, NY Times |  Read more:
Photo: Everett Kennedy Brown/European Pressphoto Agency