Wednesday, January 23, 2013

Restaurants Turn Camera Shy


When it comes to people taking photographs of their meals, the chef David Bouley has seen it all. There are the foreign tourists who, despite their big cameras, tend to be very discreet. There are those who use a flash and annoy everyone around them. There are those who come equipped with Gorillapods, the small, flexible tripods made to perch on their tables.

There are even those who stand on their chairs to shoot their plates from above.

“We get on top of those folks right away or else it’s like a circus,” Mr. Bouley said.

But rather than tell people they can’t shoot their food — the food they are so proud to eat that they need to share it immediately with everyone they know — he simply takes them back into his kitchen to shoot as the plates come out. “We’ll say, ‘That shot will look so much better on the marble table in our kitchen,’ ” Mr. Bouley said. “It’s like, here’s the sauce, here’s the plate. Snap it. We make it like an adventure for them instead of telling them no.”

Not every chef or restaurant owner is as accommodating, especially these days, as cameras have become as common as utensils. People are posting a shot of their quinoa salad online, or their ramen noodles on their blog. A growing backlash has prompted not only dirty looks from nearby diners, but also creative measures like Mr. Bouley’s and even some outright photo bans.

On a visit to Momofuku Ko, one diner thought nothing of subtly raising her iPhone and snapping a picture of her shaved foie. Like tens of thousands of others, she takes photos of her plates constantly, sometimes to the annoyance of her spouse, a chef.

“It just seemed very casual at Ko,” she recalled. The host was wearing jeans, hip-hop was on the playlist and a 12-year-old was sitting next to them. And this — this dish was the famous, fabulous shaved foie from the star chef David Chang. It only seemed natural to record it for posterity.

Then came the slapdown. A man in the open kitchen asked her to please put her phone away. No photos allowed.

“I was definitely embarrassed,” said the woman, who was so mortified that she spoke on condition of anonymity. Because the Michelin-starred restaurant is small — it seats only 12 — everyone at Ko witnessed the exchange. “I don’t want to be that person,” she added, stressing that she never, ever takes flash photography, never stands up for a shot and is always respectful of those around her. Since she is a part-owner of several restaurants, she knew why she was being chastised. “But I was caught off guard,” she acknowledged.

Mr. Chang is one of several chefs who either prohibit food photography (at Ko in New York) or have a policy against flashes (at Seiobo in Sydney, Australia, and Shoto in Toronto). High-end places like Per Se, Le Bernardin and the Fat Duck discourage flash photography as well, though on a recent trip to the Thomas Keller restaurant Per Se, flashes were going off left and right, bouncing off the expansive windows overlooking Columbus Circle.

“It’s reached epic proportions,” says Steven Hall, the spokesman for Bouley and many other restaurants, who has worked in the business for 16 years. “Everybody wants to get their shot. They don’t care how it affects people around them.”

by Helene Stapinski, NY Times |  Read more:
Illustration: Mark Matcho

“Django Unchained”: Put-On, Revenge, and the Aesthetics of Trash


I have to face it: Quentin Tarantino’s “Django Unchained” is his most entertaining piece of moviemaking since “Pulp Fiction.” Some of it, particularly in the first half, is excruciatingly funny, and all of it has been brought off in a spirit of burlesque merriment—violent absurdity pushed to the level of flagrancy and beyond. That’s the place where Tarantino is happiest: out at the edge, playing with genre conventions, turning expectations inside out, ginning up the violence to exploitation-movie levels. The film is in two parts: the first half is a mock Western; the second is a mock-revenge melodrama about slavery, set in the deep South and ending in fountains of redemptive spurting blood. “Django” is a crap masterpiece, garrulous and repetitive, rich with jokes and cruelties, including some Old South cruelties that Tarantino invented for himself. It’s a very strange movie, luridly sadistic and morally ambitious at the same time, and the audience is definitely alive to it, revelling in its incongruities, enjoying what’s lusciously and profanely over the top.

What’s even stranger than the movie, however, is how seriously some of our high-minded critics have taken it as a portrait of slavery. Didn’t they notice that Tarantino throws in an “S.N.L.”-type skit about the Ku Klux Klan, who gather on their horses for a raid only to complain petulantly that they can’t see well out of their slitted white hoods? Or that Samuel L. Jackson does a roaring, bug-eyed parody of an Uncle Tom house slave in the second half? Or that the heroine of the movie, a female slave, is called Broomhilda von Shaft? Could Mel Brooks have done any better? (“Lili von Shtupp,” I suppose, is slightly better.) Yes, we are told that Broomhilda’s German mistress gave her the name and taught her German, but Tarantino is never more improbable than when he supplies explanations for his most bizarre fancies. Some of his characters spring from old genre movies, some spring full-blown from the master’s head. None have much basis in life, or in any social reality to speak of. (Remember the Jews who killed Nazis with baseball bats?) Yes, of course, there were killers in the Old West and cruel slave masters in the South—central characters in the movie—but Tarantino juices everything into gaudy pop fantasy. I enjoyed parts of “Django Unchained” very much, but I’m surprised that anyone can take it as anything more than an enormous put-on.

Much has already been written about the movie, but I would like to add a few notes of appreciation and complaint (don’t read past the middle of this post if you haven’t seen the movie).

1. Tarantino the Rhetorician

Tarantino loves elaborate rhetoric—the extremes of politeness, the exquisitely beautiful word, the lengthy, ridiculous argument that becomes funny precisely because it’s so entirely beside the point. Remember the stiff formalities among the criminals in “Reservoir Dogs”? Or the early conversation between John Travolta and Samuel L. Jackson in “Pulp Fiction”? The two men are about to kill some punks who owe drug money to their boss. They stop to chat. The topic at hand: a man massaged the feet of the boss’s wife and, as punishment, was tossed out of a window. Is massaging a woman’s feet an offense worthy of death, like adultery? The thugs have quite a dispute about the matter; they could be bishops at the Council of Trent arguing the fine points of Church liturgy. Then they go ahead and blow the punks away. That’s the essential Tarantino joke—discourse and mayhem, punctilio and murder, linked together.

“Django” is set in 1858 and thereafter. A German bounty hunter, King Schultz (Christoph Waltz), poses as a dentist and spins around Texas, speaking perfect English. King Schultz is a mannerly scoundrel. When he encounters some white men transporting slaves through the dark woods, he says, “Among your company, I’m led to believe, there is a specimen I hope to acquire.” After shooting one of the white men, who howls in pain, he says, “If you could keep your caterwauling down to a minimum, I would like to speak to young Django.” Just as he did in “Inglourious Basterds,” in which Waltz was a polite S.S. killer, Tarantino writes fancy talk for this self-amused, highly elocutionary Austrian actor. The added comedy here is that the foreigner is so much more articulate than the tobacco-stained, scraggly-assed, lunkhead Americans he meets everywhere. He’s the Old World instructing the New in the fine points of etiquette and speech while enjoying the savage opportunities of the Wild West.

King Schultz teams up with Django, a slave he liberates, played by the growling Jamie Foxx (who doesn’t always seem to be in on the joke). The two travel around the West, killing wanted men for money. Schultz flimflams everybody, and in some cases shoots the person he’s teasing, popping him in the chest with a tiny pistol. Up until the middle of the movie, Tarantino comes close to moral realism: the cold-hearted Schultz is a complete cynic; he does what he does for money. We can accept that as some sort of truth. But then Schultz risks his life to help Django find his slave wife, who has been sold to a plantation owner in Mississippi, and the movie becomes nonsensical. The vicious comic cynicism of the first half gives way to vicious unbelievable sentiment in the second half. The murderous bounty hunter has a heart of gold.

by David Denby, New Yorker |  Read more:
Photo: Django Unchained

The Force: How Much Military is Enough?

The long history of military spending in the United States begins with the establishment of the War Department, in 1789. At first, the Secretary of War, a Cabinet member who, from the start, was a civilian, was called the Secretary at War, a holdover from the Revolution but also a prepositional manifestation of an ideological commitment: the department was chiefly to be called upon only if the nation was at war. Early Americans considered a standing army—a permanent army kept even in times of peace—to be a form of tyranny. “What a deformed monster is a standing army in a free nation,” Josiah Quincy, of Boston, wrote in 1774. Instead, they favored militias. About the first thing Henry Knox did when he became George Washington’s War Secretary was to draft a plan for establishing a uniform militia.

Beginning in 1822, congressional oversight was handled by two standing committees: one for the Army, the other for the Navy. A committee on the militia, established in 1815, was abolished in 1911—the militia itself having been essentially abandoned. Six years later, the United States entered the First World War, and the staggering devastation of that war raised both new and old fears about the business of arming men. In 1934, the publication of “Merchants of Death,” a best-seller and a Book-of-the-Month Club selection, contributed to the formation, that year, of the Senate Munitions Committee, headed by Gerald P. Nye, a North Dakota Republican. Not coincidentally, that was also the year Congress passed the National Firearms Act, which, among other things, strictly regulated the private ownership of machine guns. (Keeping military weapons out of the hands of civilians seemed to the Supreme Court, when it upheld the Firearms Act, in 1939, entirely consistent with the Second Amendment, which provides for the arming of militias.) For two years, Nye led the most rigorous inquiry into the arms industry that any branch of the federal government has ever conducted. He convened ninety-three hearings. He thought the ability to manufacture weapons should be restricted to the government. “The removal of the element of profit from war would materially remove the danger of more war,” he said. That never came to pass, partly because Nye was unable to distinguish his opposition to arms profiteering from his advocacy of isolationism, a position that had become indefensible.

Not until the Second World War did the United States establish what would become a standing army. And even that didn’t happen without dissent. In May of 1941, Robert Taft, a Republican senator from Ohio, warned that America’s entry into the Second World War would mean, ultimately, that the United States “will have to maintain a police force perpetually in Germany and throughout Europe.” Taft, like Nye, was an ardent isolationist. “Frankly, the American people don’t want to rule the world, and we are not equipped to do it. Such imperialism is wholly foreign to our ideals of democracy and freedom,” he said. “It is not our manifest destiny or our national destiny.” In 1944, when Nye ran for reëlection, he was defeated. Taft three times failed to win the Republican Presidential nomination. The Second World War demonstrated the folly of their vantage on foreign policy. It also made it more difficult to speak out against arms manufacturers and proponents of boundless military spending.

A peace dividend expected after the Allied victory in 1945 never came. Instead, the fight against Communism arrived, as well as a new bureaucratic regime. In 1946, the standing committees on military and naval affairs combined to become the Armed Services Committee. Under amendments to the National Security Act of 1947, which created the position of the chairman of the Joint Chiefs of Staff, the War Department, now housed for the first time in a building of its own, became the Department of Defense.

Meanwhile, during Senate hearings concerning the future of the national defense, military contractors such as Lockheed—which was an object of Nye’s investigation in the nineteen-thirties, and built more than ten thousand aircraft during the Second World War—argued not only for military expansion but also for federal subsidies. In 1947, Lockheed’s chief executive told a Senate committee that the nation needed funding for military production that was “adequate, continuous, and permanent.”

In the nineteen-fifties, at the height of both the Korean War and McCarthyism, the United States’ foreign policy had become the containment of Communism the world over, and military spending made up close to three-quarters of the federal budget. “Defense,” no less than “national security,” is a product and an artifact of the Cold War. So, in large part, is the budget for it.

by Jill Lepore, New Yorker |  Read more:
Photograph by Grant Cornett.

Tuesday, January 22, 2013


Yuri Kudrin
via:

Dissatisfaction City

The Las Vegas strip is alien. Its buildings are at once too fanciful and too utilitarian to be of this world, hulking rectangular prisms designed for the masses yet draped in the trappings of opulence and the fantastic. Its incongruity promises further strangeness, luring thousands to the city each year in the hopes of finding an escape from everyday tedium.

But what makes Las Vegas so interesting — if anything — is how it is just like the rest of America but more so. Though it casts itself as a bulwark of hedonism, an oasis of social rebellion where vacationers can escape the ennui and conformity of their everyday lives, there’s nothing countercultural about it. Rather, Las Vegas takes contemporary capitalism to its logical extreme, unleashing the social forces that underlie every American city and embodying them in ersatz monuments.

Las Vegas has taken to heart the famous lesson attributed — probably falsely — to Marx that to teach a man to fish is to ruin a wonderful business opportunity. It is a city of yearning, awash in dopamine and desire, where the economically powerful capitalize on managed dissatisfaction. There is no moment when blackjack players feel they have gotten all they came for, as there is always the anticipation of what one more hand might bring. The same is true of the city’s other primary industry: the business of sex. The city’s dancers and strip clubs don’t sell sexual satisfaction so much as stoke inexhaustible desire. Even casino architecture promotes dissatisfaction: The common spaces never fully achieve closure, instead gently twisting out of sight, always suggesting that something more can be had if one would venture a little further.

At the same time, the spaces are designed to refresh and rejuvenate, to psychologically prepare gamblers to play just one more round. The ploy works: After hours of wandering through the casinos watching pensioners slowly pour their savings into the slot machines’ flashing abyss, it becomes clear that some sort of psychological brake is being overridden. If mutually voluntary interactions are supposed to leave both parties better off, as neoclassical economists would have us believe, then this is a clear and enraging counterexample.

Casino owners tend to write off these all-day, paycheck-liquidating customers as “problem gamblers,” framing them as a small group of cognitively abnormal individuals unfit for “gaming” and in need of psychological assistance. The implication is that most gambling is done by “healthy gamblers” who represent the normal majority. However, while “problem gamblers” may not be representative of the general public, they are very much representative of what allows Las Vegas casinos to thrive. In Addiction by Design: Machine Gambling in Las Vegas, Natasha Dow Schüll reports that though “problem gamblers” amount to only a small subset of the general population, they account for between 30 and 60 percent of casino profits.

Further, Schüll debunks the myth that there is a clear cutoff between those who are healthy and those who “have a problem,” pointing out that almost all regular gamblers sit on a spectrum of reckless gambling. By fostering an artificial dichotomy between “normal” and “problem” gamblers, she argues, casinos render invisible their exploitation of existing addicts and their efforts to create new ones through the proliferation of the sorts of stimulation that can lure people into giving up everything.

Casinos feature what researchers call “playground designs” — spaces designed to energize, stimulate, and promote exploration — and a growing body of empirical evidence suggests that they measurably increase the amount people gamble. The architectural manipulation is so brazen that it is perversely accepted as natural: Of course commercial space is engineered to make us spend. And why wouldn’t there be privately owned, multi-million-dollar edifices built to control our behavior? When we imagine social control, the jump is almost inevitably made to dystopias of mass coercion and centralized authority. In Las Vegas, however we see its true form: spaces, structures, and spectacles controlled by the few to extract wealth from the many. What does social control really look like? It looks like a casino.

by Jesse Elias Spafford, New Inquiry |  Read more:
Photo: uncredited

At the Edge of America

“Imagine living in a community where you know that residing in every single home in the neighborhood are people who think much as do you, respect most of the values as do you, and will not try to force any of their values on you or your children,” another post from October on The Citadel’s blog, this one written by someone with the pseudonym Just a III Guy, reads. “Imagine living in a neighborhood where you know every single neighbor on your street, in your neighborhood, and in the entire town, has qualified Riflemen inside, ready to come to your aid at a moments notice, whether to help you change a tire, fix a problem, or cover your back in a firefight with an Enemy of Liberty.”

The Citadel, as envisioned and advertised by its creators, is to be a walled community of 3,500 to 7,000 “patriotic American families” who are ready for when The Shit Hits The Fan (TSHTF), i.e. the myriad potential society-collapsing disasters, either natural or man made, anticipated by preppers, survivalists, along with other fringe and breakaway strands of -ers and -ists. The Citadel is to be a place for people who want to be “removed and protected from peril in order to preserve ourselves, our posterity, and Liberty in the event of a national economic implosion.” And in whatever time is to be had before grid-down, economic collapse, The Citadel will provide a place to live “a free/freer life in Idaho (or elsewhere in the American Redoubt) amongst the current strong, self-reliant and Liberty-loving residents of the region.”

According to the project’s blog, The Citadel, if completed, will feature the following: the III Arms Factory, a curtain wall and towers, a main gate, a town green named after the Battle of Lexington leader John Parker, a town hall, a community armory, a firearms museum, a farmers’ market, a medical center, a retirement facility, schools including a boarding school, a library, a tourist visitor center, a town center featuring retail and commercial spaces, houses, canals, a lake, ponds, firearms ranges, an archery range, sports fields, a hotel, a bank (III Bank), churches, a power plant, underground shelters, a post office, a fire house, a stockade/jail, a biomass plant, walking trails, orchards, gardens, parks, outdoor pavilions, a large amphitheater, something called a “command and control center,” a media center, an airstrip, a helipad, a shuttle system, and a parking center. A bird’s-eye artist’s rendering of the project gives off a strong medieval vibe.

The location currently favored for the construction of The Citadel is Benewah County in northern Idaho. Though the project got rolling only last summer, project organizers claim to have already purchased 20 “mountaintop” acres of land in Idaho, and, if all goes well, they hope to purchase another 2,000 or 3,000, potentially nearby.

The writings associated with The Citadel project offer hints at some of the strands of thought that the organizers are taking up. In choosing Idaho, The Citadel project explicitly borrowed from the ideas of James Wesley, Rawles (yes, Rawles keeps a comma in his name), a survivalist author, religious separatist, and editor of survivalblog.com, who coined the term “American Redoubt” to describe a “conscious retrenchment into safe haven states.” Another idea central to The Citadel project is Rightful Liberty, a concept taken from the writings of Thomas Jefferson: “Rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others.” As such, the FAQ section of The Citadel’s website states, the community will have no racial or religious barriers.

by Eric Lach, TPM |  Read more:
Illustration: iiicitadel.com

12 Rude Revelations About Sex

Sex, we have been led to believe, is as natural as breathing. But in fact, contends British philosopher Alain de Botton, it is "close to rocket science in complexity." It's not only a powerful force, it's often contrary to many other things we care about. Sex inherently sets up conflicts within us. We crave sex with people we don't know or love. It makes us want to do things that seem immoral or degrading, like slapping someone or being tied up. We feel awkward asking the people we love for the sex acts we really want.

There's no denying that sex has its sweaty charms, and in its most exquisite moments dissolves the isolation that embodied life imposes on us. But those moments are rare, the exception rather than the rule, says de Botton, founder of London's School of Life. "Sex is always going to cause us headaches; it's not something we can miraculously grow relaxed about." We suffer privately, feeling "painfully strange about the sex we are either longing to have or struggling to avoid."

If we turn to sex books to help us work out this central experience of our lives, we are typically assured that most problems are mechanical, a matter of method. In his own new book, How to Think More About Sex, de Botton makes the case that our difficulties stem more from the multiplicity of things we want out of life, or the accrual of everyday resentments, or the weirdness of the sex drive itself. Here are some of the most basic questions it answers. —The Editors

Why do most people lie about their true desires?

It is rare to go through life without feeling that we are somehow a bit odd about sex. It is an area in which most of us have a painful impression, in our heart of hearts, that we are quite unusual. Despite being one of the most private activities, sex is nevertheless surrounded by a range of powerfully socially sanctioned ideas that codify how normal people are meant to feel about and deal with the matter. In truth, however, few of us are remotely normal sexually. We are almost all haunted by guilt and neuroses, by phobias and disruptive desires, by indifference and disgust. We are universally deviant—but only in relation to some highly distorted ideals of normality.

Most of what we are sexually remains impossible to communicate with anyone whom we would want to think well of us. Men and women in love instinctively hold back from sharing more than a fraction of their desires out of a fear, usually accurate, of generating intolerable disgust in their partners.

Nothing is erotic that isn't also, with the wrong person, revolting, which is precisely what makes erotic moments so intense: At the precise juncture where disgust could be at its height, we find only welcome and permission. Think of two tongues exploring the deeply private realm of the mouth—that dark, moist cavity that no one but our dentist usually enters. The privileged nature of the union between two people is sealed by an act that, with someone else, would horrify them both.

What unfolds between a couple in the bedroom is an act of mutual reconciliation between two secret sexual selves emerging at last from sinful solitude. Their behavior is starkly at odds with the behavior expected of them by the civilized world. At last, in the semi-darkness a couple can confess to the many wondrous and demented things that having a body drives them to want.

Why is sex more difficult to talk about in this era, not less?

Whatever discomfort we feel around sex is commonly aggravated by the idea that we belong to a liberated age—and ought by now to be finding sex a straightforward and untroubling matter, a little like tennis, something that everyone should have as often as possible to relieve the stresses of modern life.

The narrative of enlightenment and progress skirts an unbudging fact: Sex is not something we can ever expect to feel easily liberated from. It is a fundamentally disruptive and overwhelming force, at odds with the majority of our ambitions and all but incapable of being discreetly integrated within civilized society. Sex is not fundamentally democratic or kind. It refuses to sit neatly on top of love. Tame it though we might try, it tends to wreak havoc across our lives; it leads us to destroy our relationships, threatens our productivity, and compels us to stay up too late in nightclubs talking to people whom we don't like but whose exposed midriffs we wish to touch. Our best hope should be for a respectful accommodation with an anarchic and reckless power.

by Alain de Botton, Psychology Today | Read more:
Image via:

Photo: markk

The Vegans Have Landed


The animal rights movement wants to prevent the most powerful species on the planet from oppressing every other species, just as human rights campaigners try to stop the most powerful people from oppressing those who are least powerful. The problem, they say, is ‘human privilege’, a privilege that almost all of us abuse. Yet the injustice they’re fighting is not the entire apparatus of human domination (even if some activists think that’s what they’re against). Rather, it is one significant aspect of it: our treatment of animals as resources — as food, clothing, entertainment, and subjects of research. Animals feel pain and care about their survival, and so their advocates say we should expand our circle of concern beyond humans to the rest of the animal kingdom.

According to animal rights theory, respecting the interests of animals in this way would mean abolishing the use of them as resources. So we’d all have to become vegans who neither eat animals nor use any other animal products. Vegan advocates face a daunting challenge, though, since most of us have a strong prejudice in favour of humans. This makes it relatively difficult for us to empathise with non-humans, so we are reluctant to give up the spoils of animal domination — meat, eggs, cheese, wool, fur and leather — and exchange them for tofu, pleather (plastic leather) and animal liberation.

In the face of this inertia, some have asked us to imagine ourselves in the position of the animals that we exploit and kill. Jonathan Safran Foer puts this in the form of an alien invasion in his anti-factory farming treatise, Eating Animals (2009):
If we were to one day encounter a form of life more powerful and intelligent than our own, and it regarded us as we regard fish, what would be our argument against being eaten?
Suppose that we are doing our usual thing of exploiting animals because they aren’t smart or powerful enough to fight back. An alien species that is smarter and more powerful than us lands on Earth and decides to follow our example by exploiting and killing us. Why shouldn’t aliens use their technological and cerebral edge to turn us into food, clothes, entertainment and research subjects, just as we do to animals now?

This is, of course, a sci-fi repackaging of the ‘Golden Rule’ — that is, one should treat others as one would like to be treated oneself. This argument resonates because most of us have picked up a version of ‘do as you would be done by’ somewhere along the way, no matter how secular our upbringings. Could it be, then, that if we want to be consistent with our own values, the animal activists are right that we need to go vegan?

We might object that there is something misleading about the alien scenario. It wants to make us see things from the animals’ point of view, yet fudges it by putting us in the animals’ place while maintaining our human cultural beliefs and cognitive abilities. There are certainly similarities between human and non-human experiences, especially when it comes to pain, but as with the Epsilons in Aldous Huxley’s novel Brave New World (1932) who are genetically designed to tolerate a subservient existence, we assume that cows, pigs, lambs and chickens who are raised on farms and killed in slaughterhouses do not suffer the horror and existential anguish that humans would in the same circumstances. This is why the alien hypothetical is something of a cheat, and equally why comparing factory farms to the Holocaust and human slavery rings false.

Even so, if animals want to avoid suffering and want to live, as surely they do, using them as resources violates those interests. Given that humans cause animals so much suffering and death while offering them so little in return, there’s no denying that for most other animals on this planet, we might as well be a malevolent invasion.

So, my objection to the alien invasion scenario is more sweeping. If we want to take the interests of animals seriously, then the biggest failure of the analogy is that it underestimates just how malign we are. Sure, if we were replaced as the dominant animals on the planet, we’d probably prefer the new ruling species to be vegan. But if aliens with superior technology and minds came here and were determined to treat us the way that vegan humans treat animals on this planet, we’d still be in serious trouble. Veganism would hardly figure as a safeguard of our wellbeing.

Universal veganism wouldn’t stop the road-building, logging, urban and suburban development, pollution, resource consumption, and other forms of land transformation that kill animals by the billions. So what does veganism do exactly? Theoretically, it ends the raising, capture and exploitation of living animals, and it stops a particular kind of killing that many vegans claim is the worst and least excusable: the intentional killing of animals in order to use their bodies as material goods.

Veganism, as a whole, requires us to stop using animals for entertainment, food, pharmaceutical testing, and clothing. If it were to become universal, factory farming and animal testing would end, which would be excellent news for all the animals that we capture or raise for these purposes. But it would accomplish next to nothing for free-roaming wild animals except to stop hunting, which is the least of their problems.

by Rhys Southan, Aeon |  Read more:
Illustration: Salad by Till Nowak

Monday, January 21, 2013


Liliana Spiktorenk 1967

Why You Truly Never Leave High School

Throughout high school, my friend Kenji had never once spoken to the Glassmans. They were a popular, football-playing, preposterously handsome set of identical twins (every high school must have its Winklevii). Kenji was a closeted, half-Japanese orchestra nerd who kept mainly to himself and graduated first in our class. Yet last fall, as our 25th high-school reunion was winding down, Kenji grabbed Josh Glassman by his triceps—still Popeye spinach cans, and the subject of much Facebook discussion afterward—and asked where the after-party was. He was only half-joking.

Psychologically speaking, Kenji carries a passport to pretty much anywhere now. He’s handsome, charming, a software engineer at an Amazon subsidiary; he radiates the kind of self-possession that earns instant respect. Josh seemed to intuit this. He said there was an after-party a few blocks away, at the home of another former football player. And when Kenji wavered, Josh wouldn’t take no for an answer. “I could see there was no going back,” Kenji explained the next morning, over brunch. “It was sort of like the dog who catches the car and doesn’t know what to do with it.”

The party was fine. For a while, Kenji wondered if he’d been brought along as a stunt guest—a suspicion hardly allayed by Josh’s announcement “I brought the valedictorian!” as they were descending the stairs to their host’s living room—though Kenji’s attendance was in the same spirit, really, just in reverse. (“This is the party I never got invited to in high school,” he told Josh at one point; Josh didn’t disagree.) At any rate, Kenji didn’t care. His curiosities were anthropological: He had no idea what it was like “to be a football player or a cheerleader, get out of high school, marry someone from your local area, and settle in the same area.” And his conclusion, by the end of the night, was: Nothing special. “It was just an ordinary party, one that might have been a little uncomfortable if we all hadn’t been a little drunk.”

You’d think Kenji’s underwhelmed reaction would have been reassuring. But another classmate of ours, also at that brunch, didn’t take it that way. Like Kenji, Larry was brilliant, musically gifted, and hidden behind awkward glasses during most of his adolescence; like Kenji, he too is attractive and successful today. He received a Tony nomination for the score of Legally Blonde, he has a new baby, he married a great woman who just happens to be his collaborator. Yet his reaction was visceral and instantaneous. “Literally?” he said. “Your saying this makes me feel I wish I’d been invited to that.”

“Well, right,” said Kenji. “Because that’s the way high school is.”

“And maybe the way life is, still, sometimes,” said Larry. “About wanting to be invited to things.” He’s now working on a musical adaptation of Heathers, the eighties classic that culminates, famously, in Christian Slater nearly blowing up a high school.

Not everyone feels the sustained, melancholic presence of a high-school shadow self. There are some people who simply put in their four years, graduate, and that’s that. But for most of us adults, the adolescent years occupy a privileged place in our memories, which to some degree is even quantifiable: Give a grown adult a series of random prompts and cues, and odds are he or she will recall a disproportionate number of memories from adolescence. This phenomenon even has a name—the “reminiscence bump”—and it’s been found over and over in large population samples, with most studies suggesting that memories from the ages of 15 to 25 are most vividly retained. (Which perhaps explains Ralph Keyes’s observation in his 1976 classic, Is There Life After High School?: “Somehow those three or four years can in retrospect feel like 30.”)

by Jennifer Senior, New York Magazine |  Read more:
Photo: Irina Werning

The Jailhouse Now



[ed. Lanai City is looking pretty good these days (courtesy of Mr. Larry Ellison). They've even spruced up the old jailhouse.]
Photo: markk

Sunday, January 20, 2013

Sitting Is the Smoking of Our Generation


[ed. This is one of the best ideas I've heard in a long time, and so simple.]

I find myself, probably like many of you, spending way too much time in front of my computer. When I do face-to-face meetings, my colleagues and I typically meet around some conference table, sometimes at an airport lounge (nothing like getting the most out of a long layover), and quite often at coffee shops (hello Starbucks!). But that means that the most common denominator across all these locations isn't the desk, or the keyboard, or even the coffee. The common denominator in the modern workday is our, um, tush.

As we work, we sit more than we do anything else. We're averaging 9.3 hours a day, compared to 7.7 hours of sleeping. Sitting is so prevalent and so pervasive that we don't even question how much we're doing it. And everyone else is doing it too, so it doesn't even occur to us that it's not okay. In that way, I've come to see that sitting is the smoking of our generation.

Of course, health studies conclude that people should sit less, and get up and move around. After 1 hour of sitting, the production of enzymes that burn fat declines by as much as 90%. Extended sitting slows the body's metabolism, affecting things like HDL (good cholesterol) levels in our bodies. Research shows that this lack of physical activity is directly tied to 6% of the impact for heart disease, 7% for type 2 diabetes, and 10% for breast or colon cancer. You might already know that the death rate associated with obesity in the US is now 35 million. But do you know what it is in relationship to tobacco? Just 3.5 million. The New York Times reported on another study, published last year in the journal Circulation, that looked at nearly 9,000 Australians and found that for each additional hour of television a person sat and watched per day, the risk of dying rose by 11%. In that article, a doctor is quoted as saying that excessive sitting, which he defines as nine hours a day, is a lethal activity.

And so, over the last couple of years, we saw the mainstreaming of the standing desk. Which, certainly, is a step forward. But even that, while it gets you off your duff, won't help you get real exercise.

So four years ago, I made a simple change when I switched one meeting from a coffee meeting to a walking-meeting. I liked it so much it became a regular addition to my calendar; I now average four such meetings, and 20 to 30 miles each week. Today it's life-changing, but it happened almost by accident.

My fundamental problem with exercise has always been this: it took time away from other more "productive things." Going to the gym to take care of me (vs. companies, colleagues, family) seemed selfish. My American-bred Puritan work ethic nearly always won out. Only when I realized I could do both at the same time, by making exercise part of the meeting, did I finally start to get more exercise. This is one of those 2-for-1 deals. I'm not sacrificing my health for work, nor work for fitness. And maybe that's why making fitness a priority finally doesn't feel like a conflict. It's as easy as stepping out the door and might require as much as a change of shoes.

And, yet, it's true that some people will turn you down. Probably 30% of the people I ask to do these kinds of meetings say that they are not fit enough to do a walking meeting. I had one person tell me afterwards that they got more active for an entire month before our meeting, so as to not embarrass themselves on their hike with me. I don't judge the people who won't do a hiking meeting, and in most cases will choose to do another type of meeting with them (lunch or whatever) but I am also reminded of James Fowler and Nicholas Christakis's research from their related book, Connected. They observed that obesity spreads according to network effects; if your friend's friend's friend who lives a thousand miles away gains weight, you're likely to gain weight, too. And if that extended friend also loses weight, even if you're not in the same city, you're likely to lose weight, too. My goal is to be someone who socializes the idea that physical activity matters, and that we each matter enough to take care of our health.

And after a few hundred of these meetings, I've started noticing some unanticipated side benefits. First, I can actually listen better when I am walking next to someone than when I'm across from them in some coffee shop. There's something about being side by side that puts the problem or idea before us and lets us work on it together.

Second, the simple act of moving also means the mobile device mostly stays put away. Undivided attention is perhaps today's scarcest resource, and hiking meetings allow me to invest that resource very differently.

And, finally, we almost always end the hike joyful. The number one thing I've heard people say (especially if they've resisted this kind of meeting in the past) is "That was the most creative time I've had in a long time." And that could be because we're outside, or a result of walking. Research certainly says that walking is good for the brain.

by Nilofer Merchant, Harvard Business Review |  Read more:
Photo: uncredited

Helena Almeida, Tela Habitada, 1976
via:

Hello Laptop, My Old Friend

In a recent issue of the magazine, I wrote about people in their twenties and some books that focus on their plight. The piece begins with an account of some weeks I spent in Iceland, in my own early twenties, and in working on that passage I relied on both memory and record. I’m a pack rat when it comes to correspondence and ephemera: I still have every substantive note or e-mail I’ve sent or received since the start of college—perhaps even earlier—plus pamphlets, birthday cards, maps, Playbills, boarding passes, brochures, brittle magazines, and fancy hardbound notebooks that I’ve started in the hope of reinventing myself as someone who writes in fancy hardbound notebooks. Who’d have thought that a map of businesses in pre-crash Reykjavík would one day help me write a book review? Not my twenty-two-year-old self, certainly. And yet that map, like many notes and e-mails from those weeks, was crucial in reëntering a particular experience years later—not just to tell the story to readers but to reclaim it as a memory of my own.

I’ve been thinking a lot lately about the evocative power of cast-off material, because the day that twentysomething piece appeared, my laptop died. It was a galling loss: it left me wandering around the house all morning, eating stale crackers and feeling like an unyoked mule before I figured out how to move forward again. I had a second laptop, I realized—an old one, stuffed into a bookshelf by my desk. It would be perfect. Yet I kept demurring. I’d retired that computer, with complicated feelings, years before. Put plainly, the machine—which I called Laptop, capital L: the genus particularized, like “God”—stands, even now, as one of the great, haunting loves of my young-adult life.

It was an affection born of shared interests and mutual experience. I bought Laptop when I was twenty, and for years after we were inseparable. We lived together in school, in the city, abroad, and back home—some ten towns on two continents in total, with short trips to several more in between. Laptop followed me to countless cafés, bounced through hostels, patiently waited at research libraries, and offered no complaint about the odd hours or the basic loneliness of his endeavors. I wrote college papers on him, then a thesis. Hundreds of pages later, he helped me compose my first magazine work. Laptop is an IBM T42: a stripped-down, strangely square model that was the standard issue at my university tech store. But he has a rare, marvellous keyboard—deep, well-defined, solid—and has proved indestructible. The only T42 I ever saw give up the ghost belonged to a friend who treated it badly—flinging it onto tables, hammering cruelly at its keys, and dropping it repeatedly—until, one day (I think there might have been spillage involved), she broke the unbreakable. That was around the time I started to suspect that people’s rapports with their laptops reveal more about them than we might want to know.

Here’s what I’ll tell you about mine, then, with the cool objectivity of sudden reacquaintance. Laptop is dusty these days. His shell is slightly scratched. But he’s still bright on the inside—even polished—thanks to the years of oiling by fingertips and palms. He bears the marks of his experience. The A, S, E, D, C, O, L, N, and M keys are worn down to a point of near-illegibility. There’s evidence of lots of activity on the BACKSPACE key—though, having just sifted through a bunch of writing from those years, I think maybe not quite enough. Crumbs were, and continue to be, a problem.

Still, he looks basically great. I turned him on. His hourglass spun. Half an hour later, after a long, groggy, somewhat painful-to-watch reveille, I found myself facing the desktop I’d worked on all those years. This is a little like trying on those weird pants that you wore in high school: memories, not all salutary, rush back; habits return; a mind-set reasserts itself, mocking the progress that you thought you’d made. For instance, e-mail. I’d nearly forgotten what a prolific, voluble, and capricious e-mailer I was for most of my early twenties; seeing Laptop’s home screen brought back an old feeling, and I found myself tempted to fire off a string of prolix missives. Other, obscurely related anxieties followed. Not long after I began to use Laptop again, I started to have strange dreams about failing to find gainful employment after school.

by Nathan Heller, New Yorker |  Read more:
Illustration by Tim Lahan