Sunday, January 31, 2016

The Refragmentation

One advantage of being old is that you can see change happen in your lifetime. A lot of the change I've seen is fragmentation. US politics is much more polarized than it used to be. Culturally we have ever less common ground. The creative class flocks to a handful of happy cities, abandoning the rest. And increasing economic inequality means the spread between rich and poor is growing too. I'd like to propose a hypothesis: that all these trends are instances of the same phenomenon. And moreover, that the cause is not some force that's pulling us apart, but rather the erosion of forces that had been pushing us together.

Worse still, for those who worry about these trends, the forces that were pushing us together were an anomaly, a one-time combination of circumstances that's unlikely to be repeated—and indeed, that we would not want to repeat.

The two forces were war (above all World War II), and the rise of large corporations.

The effects of World War II were both economic and social. Economically, it decreased variation in income. Like all modern armed forces, America's were socialist economically. From each according to his ability, to each according to his need. More or less. Higher ranking members of the military got more (as higher ranking members of socialist societies always do), but what they got was fixed according to their rank. And the flattening effect wasn't limited to those under arms, because the US economy was conscripted too. Between 1942 and 1945 all wages were set by the National War Labor Board. Like the military, they defaulted to flatness. And this national standardization of wages was so pervasive that its effects could still be seen years after the war ended.

Business owners weren't supposed to be making money either. FDR said "not a single war millionaire" would be permitted. To ensure that, any increase in a company's profits over prewar levels was taxed at 85%. And when what was left after corporate taxes reached individuals, it was taxed again at a marginal rate of 93%.

Socially too the war tended to decrease variation. Over 16 million men and women from all sorts of different backgrounds were brought together in a way of life that was literally uniform. Service rates for men born in the early 1920s approached 80%. And working toward a common goal, often under stress, brought them still closer together.

Though strictly speaking World War II lasted less than 4 years for the US, its effects lasted longer. Wars make central governments more powerful, and World War II was an extreme case of this. In the US, as in all the other Allied countries, the federal government was slow to give up the new powers it had acquired. Indeed, in some respects the war didn't end in 1945; the enemy just switched to the Soviet Union. In tax rates, federal power, defense spending, conscription, and nationalism the decades after the war looked more like wartime than prewar peacetime. And the social effects lasted too. The kid pulled into the army from behind a mule team in West Virginia didn't simply go back to the farm afterward. Something else was waiting for him, something that looked a lot like the army.

If total war was the big political story of the 20th century, the big economic story was the rise of a new kind of company. And this too tended to produce both social and economic cohesion.

The 20th century was the century of the big, national corporation. General Electric, General Foods, General Motors. Developments in finance, communications, transportation, and manufacturing enabled a new type of company whose goal was above all scale. Version 1 of this world was low-res: a Duplo world of a few giant companies dominating each big market.

The late 19th and early 20th centuries had been a time of consolidation, led especially by J. P. Morgan. Thousands of companies run by their founders were merged into a couple hundred giant ones run by professional managers. Economies of scale ruled the day. It seemed to people at the time that this was the final state of things. John D. Rockefeller said in 1880:
The day of combination is here to stay. Individualism has gone, never to return.
He turned out to be mistaken, but he seemed right for the next hundred years.

The consolidation that began in the late 19th century continued for most of the 20th. By the end of World War II, as Michael Lind writes, "the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations."

For consumers this new world meant the same choices everywhere, but only a few of them. When I grew up there were only 2 or 3 of most things, and since they were all aiming at the middle of the market there wasn't much to differentiate them.

One of the most important instances of this phenomenon was in TV. Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists. The programs the 3 networks offered were indistinguishable. In fact, here there was a triple pressure toward the center. If one show did try something daring, local affiliates in conservative markets would make them stop. Plus since TVs were expensive whole families watched the same shows together, so they had to be suitable for everyone.

And not only did everyone get the same thing, they got it at the same time. It's difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. (...)

It wasn't just as consumers that the big companies made us similar. They did as employers too. Within companies there were powerful forces pushing people toward a single model of how to look and act. IBM was particularly notorious for this, but they were only a little more extreme than other big companies. And the models of how to look and act varied little between companies. Meaning everyone within this world was expected to seem more or less the same. And not just those in the corporate world, but also everyone who aspired to it—which in the middle of the 20th century meant most people who weren't already in it. For most of the 20th century, working-class people tried hard to look middle class. You can see it in old photos. Few adults aspired to look dangerous in 1950.

But the rise of national corporations didn't just compress us culturally. It compressed us economically too, and on both ends.

by Paul Graham |  Read more:

Werner Bischof, View of Tokyo, Shimbashi district. 1955
via:

The Church Of The Gridiron

[ed. See also: The Collision Sport on Trial]

American football officially began in the years following the Civil War. A crude blend of soccer and rugby, the sport was brutal, with a fast-and-loose set of rules that gave it the appearance of a gang fight. In 1905, 19 players died, and another 137 were injured; the Chicago Tribune called the season a “death harvest.” President Theodore Roosevelt finally intervened, calling a group of influential sportsmen to the White House in order to help transform the game.

Reforms followed, such as legalizing the forward pass and penalizing unsportsmanlike conduct. The sport became safer, and by midcentury it had entered a golden age of players like quarterback Johnny Unitas and fullback Jim Brown. Games were televised, and in the late sixties the Super Bowl was created.

Today pro football is the unparalleled giant of the sports world. In 2014 forty-five of the fifty top-rated television broadcasts were football games. More Americans follow football than follow Major League Baseball, NBA basketball, and NASCAR racing combined. The National Football League (NFL) earns nearly $10 billion a year in profits, with an expressed goal of $25 billion. During the season, Americans spend more time watching football than going to religious services. Pro football has become the spectacle that unites people in this country more than any other.

“But it has a dark side,” says author Steve Almond.

For four decades Almond was a consummate fan, soaking up all that football offered. Then, in 2014, he did the unthinkable: he stopped. No more games. No more listening to sports talk radio. He would become football fandom’s conscientious objector.

In Against Football: One Fan’s Reluctant Manifesto he writes, “Our allegiance to football legitimizes and even fosters within us a tolerance for violence, greed, racism, and homophobia.” A New York Times bestseller, the book is an eloquent examination of America’s most popular sport — in particular, the aspects many fans tend to ignore: its astounding injury rate, its exaggeration of gender stereotypes, and its inherent violence.

Cook: What role does football play in the U.S. today?

Almond: It’s the largest shared narrative in the country: emotionally, psychologically, and maybe even financially. My sense is that more Americans — male and female, gay and straight, of all races and classes — are deeply invested in football than in any other single activity. For forty years I was a member in good standing of the Church of the Gridiron. The game can be brutal, but it’s also complex and satisfying to watch.

When Ernest Hemingway wanted to understand Spanish culture, he went to see the bullfights. Football is our bullfight: an expression of our cultural values and a profound statement about our national consciousness. It’s important to understand what it does for us and to us, what its pleasures are and its moral costs. But football means so much to so many Americans that we’re terrified of interrogating it. (...)

Football is a powerful refuge. When we watch, we get so absorbed that we forget our troubles. It’s existential relief. You are a part of some exalted event. I didn’t watch the 2014 Super Bowl, but 111 million people tuned in. We are desperate to find something that will connect us. Football is a quick and easy solution.

Yet, at a certain point, you have to step back and ask: Why is this the church I worship in? What is the nature of this religion we have created?

As a fan I did feel a connection to the people around me, especially if my team was winning, but I also felt lost inside. Watching football became a lonely experience, like feeding an addiction. It wasn’t a way for me to engage with my problems. It wasn’t enlarging my empathy or my moral imagination. It wasn’t satisfying a deeper spiritual need.

Having said that, I can’t say to other fans that the holy feeling they have when they walk into their team’s stadium isn’t real. It is. My beef is that those feelings — our devotion to athletic heroism, our sentimental loyalty to the teams we rooted for growing up, and that our dads rooted for — are being mercilessly exploited and turned into an engine of greed. Not only that, but we’re getting so sucked into the fan mind-set that we start to see everything as a competition. Think about it. We have television programs that have turned singing, dancing, cooking, traveling, and even falling in love into competitions. It’s as if the only way a person in our culture can get what he or she wants is for another person to “lose.” This mind-set is ultimately martial. It’s what novelist Cormac McCarthy is referring to when he writes, in Blood Meridian, about warfare as a natural extension of sports. What ultimately matters is whether your team — and therefore you — wins. A lot of people these days feel that way about politics and religion: it’s all about vanquishing the socialist or the heathen or whatever. Football may not be the driving force behind this cultural mind-set, but it’s the purest expression of it. (...)

Cook: Baseball used to be the “national pastime.” What happened?

Almond: Late-model capitalism. We went from an agricultural society to an industrial one. Baseball is a pastoral game. Football is more in tune with the modern American experience. The typical American worker today is trapped in an office with elaborate rules of conduct and a lot of technical jargon. You’ve got “units” of employees working on group projects and multiple levels of management. Jobs are increasingly specialized. That’s how football operates, too. There’s a giant playbook with dozens of contingencies for any given play, strategy sessions, tons of jargon, a hierarchy of coaches — all things office drones recognize from their jobs.

But here’s what makes football so alluring: When a play works, it’s not just that you got the third-quarter earnings report done. It’s Barry Sanders making a magnificent spin move to avoid a tackle and carrying the ball sixty yards to glory. That experience is ecstatic and unlike anything in our everyday lives.

Football is both a reflection of complex, brutal, and oppressive industrialization and, at the same time, a liberation from it; a return to the intuitive childhood pleasures of play.

by David Cook, Sun Magazine |  Read more:
Image: Marshawn Lynch

Saturday, January 30, 2016

Fake Online Locksmiths May Be Out to Pick Your Pocket

Maybe this has happened to you.

Locked out of your car or home, you pull out your phone and type “locksmith” into Google. Up pops a list of names, the most promising of which appear beneath the paid ads, in space reserved for local service companies.

You might assume that the search engine’s algorithm has instantly sifted through the possibilities and presented those that are near you and that have earned good customer reviews. Some listings will certainly fit that description. But odds are good that your results include locksmiths that are not locksmiths at all.

They are call centers — often out of state, sometimes in a different country — that use a high-tech ruse to trick Google into presenting them as physical stores in your neighborhood. These operations, known as lead generators, or lead gens for short, keep a group of poorly trained subcontractors on call. After your details are forwarded, usually via text, one of those subcontractors jumps in a car and heads to your vehicle or home. That is when the trouble starts.

The goal of lead gens is to wrest as much money as possible from every customer, according to lawsuits. The typical approach is for a phone representative to offer an estimate in the range of $35 to $90. On site, the subcontractor demands three or four times that sum, often claiming that the work was more complicated than expected. Most consumers simply blanch and pay up, in part because they are eager to get into their homes or cars.

“It was very late, and it was very cold,” said Anna Pietro, recalling an evening last January when she called Allen Emergency, the nearest locksmith to her home in a Dallas suburb, according to a Google Maps search on her iPhone. “This guy shows up and says he needs to drill my door lock, which will cost $350, about seven times the estimate I’d been given on the phone. And he demanded cash.”

The phone number at Allen Emergency is now disconnected.

It is a classic bait-and-switch. And it has quietly become an epidemic in America, among the fastest-growing sources of consumer complaints, according to the Consumer Federation of America.

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

“I’m not exaggerating when I say these guys have people in every large and midsize city in the United States,” said John Ware, an assistant United States attorney in St. Louis, speaking of lead-gen locksmiths. (...)

The Ghosts on Google


The flaws in the Google machine are well known to Avi, an Israeli-born locksmith, who asked that his last name be omitted from this story, citing threats by competitors. (“One told me there is a bounty on my head,” he said.) Avi has been at war with lead-gen operators for eight years. It’s like guerrilla combat, because the companies are forever expanding and always innovating, he said.

To demonstrate, he searched for “locksmith” in Google one afternoon in November, as we sat in his living room in a suburb of Phoenix. One of the companies in the results was called Locksmith Force.

The company’s website at the time listed six physical locations, including a pinkish, two-story building at 10275 West Santa Fe Drive, Sun City, Ariz. When Avi looked up that address in Google Maps, he saw in the bottom left-hand corner a street-view image of the same pinkish building at the end of a retail strip.

There seemed no reason to doubt that a pinkish building stood at 10275 West Santa Fe Drive.

Avi was skeptical. “That’s about a five-minute drive from here,” he said.

We jumped in his car. It wasn’t long before the voice in his GPS announced, “You have arrived.”

“That’s the address,” he said. He was pointing to a low white-brick wall that ran beside a highway. There was no pinkish building and no stores. Other than a large, featureless warehouse on the other side of the street, there was little in sight.

“This is what I’m dealing with,” Avi said. “Ghosts.”

These ghosts don’t just game search results. They dominate AdWords, Google’s paid advertising platform. Nearly all of those ads promise “$19 service,” or thereabouts, a suspiciously low sum, given that “locksmith”-related ads cost about $30 or so per click, depending on the area.

(Yes, Google makes money every time a person clicks on an AdWords ad, and yes, in the case of locksmiths, the cost can be $30 for every click — even more in some cities. If you’ve ever wondered how Google gives away services and is still among the most profitable companies in the world, wonder no more. People clicking AdWords generated $60 billion last year.)

by David Segal, NY Times |  Read more:
Image: Caitlin O’Hara

Eric Pickersgill, from the series Removed.
via:

The Real Legacy of Steve Jobs

Partway through Alex Gibney’s earnest documentary Steve Jobs: The Man in the Machine, an early Apple Computer collaborator named Daniel Kottke asks the question that appears to animate Danny Boyle’s recent film about Jobs: “How much of an asshole do you have to be to be successful?” Boyle’s Steve Jobs is a factious, melodramatic fugue that cycles through the themes and variations of Jobs’s life in three acts—the theatrical, stage-managed product launches of the Macintosh computer (1984), the NeXT computer (1988), and the iMac computer (1998). For Boyle (and his screenwriter Aaron Sorkin) the answer appears to be “a really, really big one.”

Gibney, for his part, has assembled a chorus of former friends, lovers, and employees who back up that assessment, and he is perplexed about it. By the time Jobs died in 2011, his cruelty, arrogance, mercurial temper, bullying, and other childish behavior were well known. So, too, were the inhumane conditions in Apple’s production facilities in China—where there had been dozens of suicides—as well as Jobs’s halfhearted response to them. Apple’s various tax avoidance schemes were also widely known. So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”

Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”

In Gibney’s film, Andy Grignon, the iPhone senior manager from 2005 to 2007, observes that
Apple is a business. And we’ve somehow attached this emotion [of love, devotion, and a sense of higher purpose] to a business which is just there to make money for its shareholders. That’s all it is, nothing more. Creating that association is probably one of Steve’s greatest accomplishments.
Jobs was a consummate showman. It’s no accident that Sorkin tells his story of Jobs through product launches. These were theatrical events—performances—where Jobs made sure to put himself on display as much as he did whatever new thing he was touting. “Steve was P.T. Barnum incarnate,” says Lee Clow, the advertising executive with whom he collaborated closely. “He loved the ta-da! He was always like, ‘I want you to see the Smallest Man in the World!’ He loved pulling the black velvet cloth off a new product, everything about the showbiz, the marketing, the communications.”

People are drawn to magic. Steve Jobs knew this, and it was one reason why he insisted on secrecy until the moment of unveiling. But Jobs’s obsession with secrecy went beyond his desire to preserve the “a-ha!” moment. Is Steve Jobs “the most successful paranoid in business history?,” The Economist asked in 2005, a year that saw Apple sue, among others, a Harvard freshman running a site on the Internet that traded in gossip about Apple and other products that might be in the pipeline. Gibney tells the story of Jason Chen, a Silicon Valley journalist whose home was raided in 2010 by the California Rapid Enforcement Allied Computer Team (REACT), a multi-agency SWAT force, after he published details of an iPhone model then in development. A prototype of the phone had been left in a bar by an Apple employee and then sold to Chen’s employer, the website Gizmodo, for $5,000. Chen had returned the phone to Apple four days before REACT broke down his door and seized computers and other property. Though REACT is a public entity, Apple sits on its steering committee, leaving many wondering if law enforcement was doing Apple’s bidding.

Whether to protect trade secrets, or sustain the magic, or both, Jobs was adamant that Apple products be closed systems that discouraged or prevented tinkering. This was the rationale behind Apple’s lawsuit against people who “jail-broke” their devices in order to use non-Apple, third-party apps—a lawsuit Apple eventually lost. And it can be seen in Jobs’s insistence, from the beginning, on making computers that integrated both software and hardware—unlike, for example, Microsoft, whose software can be found on any number of different kinds of PCs; this has kept Apple computer prices high and clones at bay. An early exchange in Boyle’s movie has Steve Wozniak arguing for a personal computer that could be altered by its owner, against Steve Jobs, who believed passionately in end-to-end control. “Computers aren’t paintings,” Wozniak says, but that is exactly what Jobs considered them to be. The inside of the original Macintosh bears the signatures of its creators.

The magic Jobs was selling went beyond the products his company made: it infused the story he told about himself. Even as a multimillionaire, and then a billionaire, even after selling out friends and collaborators, even after being caught back-dating stock options, even after sending most of Apple’s cash offshore to avoid paying taxes, Jobs sold himself as an outsider, a principled rebel who had taken a stand against the dominant (what he saw as mindless, crass, imperfect) culture. You could, too, he suggested, if you allied yourself with Apple. It was this sleight of hand that allowed consumers to believe that to buy a consumer good was to do good—that it was a way to change the world. “The myths surrounding Apple is for a company that makes phones,” the journalist Joe Nocera tells Gibney. “A phone is not a mythical device. It makes you wonder less about Apple than about us.”

by Sue Halpern, NY Review of Books |  Read more:
Image: Philippe Huguen/AFP/Getty Images

Friday, January 29, 2016


Yoshiki✢I, End of the vacation
via:

Why The World Is Obsessed With Midcentury Modern Design

Today, more than ever, the midcentury modern look is everywhere. DVRs are set to capture Mad Men's final season playing out on AMC. Flip through the April issue of Elle Décor, and you'll find that more than half of the featured homes prominently include midcentury furniture pieces. Turn on The Daily Show and you'll see the guests sitting in classic Knoll office chairs. If you dine in a contemporary restaurant tonight, there's a good chance you'll be seated in a chair that was designed in the 1950s—whether it is an Eames, Bertoia, Cherner, or Saarinen. A few years back, you could stamp your mail with an Eames postage stamp.

Meanwhile, type the words "midcentury" and "modern" into any furniture retailer's search pane, and you'll likely come up with dozens of pieces labeled with these design-world buzzwords—despite the fact that there is nothing "midcentury" about the items they describe. Over the past two decades, a term describing a specific period of design has become the marketing descriptor du jour.

"Midcentury modern" itself is a difficult term to define. It broadly describes architecture, furniture, and graphic design from the middle of the 20th century (roughly 1933 to 1965, though some would argue the period is specifically limited to 1947 to 1957). The timeframe is a modifier for the larger modernist movement, which has roots in the Industrial Revolution at the end of the 19th century and also in the post-World War I period.

Author Cara Greenberg coined the phrase "midcentury modern" as the title for her 1984 book, Midcentury Modern: Furniture of the 1950s. In 1983, Greenberg had written a piece for Metropolitan Home about 1950s furniture, and an editor at Crown urged her to write a book on the topic. As for the phrase "midcentury modern," Greenberg "just made that up as the book's title," she says. A New York Times review of the book acknowledged that Greenberg's tome hit on a trend. "Some love it and others simply can't stand it, but there is no denying that the 50's are back in vogue again. Cara Greenberg, the author of 'Mid-Century Modern: Furniture of the 1950's' ($30, Harmony Books) manages to convey the verve, imagination and the occasional pure zaniness of the period." The book was an immediate hit, selling more than 100,000 copies, and once "midcentury modern" entered the lexicon, the phrase was quickly adopted by both the design world and the mainstream.

The popularity of midcentury modern design today has roots at the time of Greenberg's book. Most of the designs of the midcentury had gone out of fashion by the late 60s, but in the early- to mid-eighties, interest in the period began to return. Within a decade, vintage midcentury designs were increasingly popular, and several events helped to boost midcentury modern's appeal from a niche group of design enthusiasts into the mainstream.

by Laura Fenton, Curbed | Read more:
Image: Herman Miller

Thursday, January 28, 2016


Brian Alfred
via:

The Disposable Rocket

Inhabiting a male body is like having a bank account; as long as it’s healthy, you don’t think much about it. Compared to the female body, it is a low-maintenance proposition: a shower now and then, trim the fingernails every ten days, a haircut once a month. Oh yes, shaving—scraping or buzzing away at your face every morning. Byron, in Don Juan, thought the repeated nuisance of shaving balanced out the periodic agony, for females, of childbirth. Women are, his lines tell us,
Condemn’d to child-bed, as men for their sins
Have shaving too entail’d upon their chins,—
A daily plague, which in the aggregate
May average on the whole with parturition.
From the standpoint of reproduction, the male body is a delivery system, as the female is a mazy device for retention. Once the delivery is made, men feel a faint but distinct falling-off of interest. Yet against the enduring female heroics of birth and nurture should be set the male’s superhuman frenzy to deliver his goods: he vaults walls, skips sleep, risks wallet, health, and his political future all to ram home his seed into the gut of the chosen woman. The sense of the chase lives in him as the key to life. His body is, like a delivery rocket that falls away in space, a disposable means. Men put their bodies at risk to experience the release from gravity.

When my tenancy of a male body was fairly new—of six or so years’ duration—I used to jump and fall just for the joy of it. Falling—backwards, or downstairs—became a specialty of mine, an attention-getting stunt I was still practicing into my thirties, at suburban parties. Falling is, after all, a kind of flying, though of briefer duration than would be ideal. My impulse to hurl myself from high windows and the edges of cliffs belongs to my body, not my mind, which resists the siren call of the chasm with all its might; the interior struggle knocks the wind from my lungs and tightens my scrotum and gives any trip to Europe, with its Alps, castle parapets, and gargoyled cathedral lookouts, a flavor of nightmare. Falling, strangely, no longer figures in my dreams, as it often did when I was a boy and my subconscious was more honest with me. An airplane, that necessary evil, turns the earth into a map so quickly the brain turns aloof and calm; still, I marvel that there is no end of young men willing to become jet pilots. 

Any accounting of male-female differences must include the male’s superior recklessness, a drive not, I think, toward death, as the darkest feminist cosmogonies would have it, but to test the limits, to see what the traffic will bear—a kind of mechanic’s curiosity. The number of men who do lasting damage to their young bodies is striking; war and car accidents aside, secondary-school sports, with the approval of parents and the encouragement of brutish coaches, take a fearful toll of skulls and knees. We were made for combat, back in the postsimian, East-African days, and the bumping, the whacking, the breathlessness, the pain-smothering adrenaline rush form a cumbersome and unfashionable bliss, but bliss nevertheless. Take your body to the edge, and see if it flies.

by John Updike, Brown University |  Read more: (pdf)
Image: via:

Interview With Noam Chomsky: Is European Integration Unraveling?

Europe is in turmoil. The migration and refugee crisis is threatening to unravel the entire European integration project. Unwilling to absorb the waves of people fleeing their homes in the Middle East and North Africa, many European Union (EU) member states have begun imposing border controls.

But it is not only people from Syria and Iraq, as mainstream media narratives would suggest, who are trying to reach Europe these days. Refugees come from Pakistan and Afghanistan and from nations in sub-Saharan Africa. The numbers are staggering, and they seem to be growing with the passing of every month. In the meantime, anti-immigration sentiment is spreading like wildfire throughout Europe, giving rise to extremist voices that threaten the very foundation of the EU and its vision of an "open, democratic" society.

In light of these challenges, EU officials are pulling out all the stops in their effort to deal with the migration and refugee crisis, offering both technical and economic assistance to member states in hopes that they will do their part in averting the unraveling of the European integration project. Whether they will succeed or fail remains to be seen. What is beyond a doubt, however, is that Europe's migration and refugee crisis will intensify as more than 4 million more migrants and refugees are expected to reach Europe in the next two years.

Noam Chomsky, one of the world's leading critical intellectuals, offered his insights to Truthout on Europe's migration and refugee crisis and other current European developments - including the ongoing financial crisis in Greece - in an exclusive interview with C.J. Polychroniou.

C.J. Polychroniou: Noam, thanks for doing this interview on current developments in Europe. I would like to start by asking you this question: Why do you think Europe's refugee crisis is happening now?

Noam Chomsky: The crisis has been building up for a long time. It is hitting Europe now because it has burst the bounds, from the Middle East and from Africa. Two Western sledgehammer blows had a dramatic effect. The first was the US-UK invasion of Iraq, which dealt a nearly lethal blow to a country that had already been devastated by a massive military attack 20 years earlier followed by virtually genocidal US-UK sanctions. Apart from the slaughter and destruction, the brutal occupation ignited a sectarian conflict that is now tearing the country and the entire region apart. The invasion displaced millions of people, many of whom fled and were absorbed in the neighboring countries, poor countries that are left to deal somehow with the detritus of our crimes.

One outgrowth of the invasion is the ISIS/Daesh monstrosity, which is contributing to the horrifying Syrian catastrophe. Again, the neighboring countries have been absorbing the flow of refugees. Turkey alone has over 2 million Syrian refugees. At the same time it is contributing to the flow by its policies in Syria: supporting the extremist al-Nusra Front and other radical Islamists and attacking the Kurds who are the main ground force opposing ISIS - which has also benefited from not-so-tacit Turkish support. But the flood can no longer be contained within the region.

The second sledgehammer blow destroyed Libya, now a chaos of warring groups, an ISIS base, a rich source of jihadis and weapons from West Africa to the Middle East, and a funnel for the flow of refugees from Africa. That at once brings up longer-term factors. For centuries, Europe has been torturing Africa - or, to put it more mildly - exploiting Africa for Europe's own development, to adopt the recommendation of the top US planner George Kennan after World War II.

The history, which should be familiar, is beyond grotesque. To take just a single case, consider Belgium, now groaning under a refugee crisis. Its wealth derived in no small measure from "exploiting" the Congo with brutality that exceeded even its European competitors. Congo finally won its freedom in 1960. It could have become a rich and advanced country once freed from Belgium's clutches, spurring Africa's development as well. There were real prospects, under the leadership of Patrice Lumumba, one of the most promising figures in Africa. He was targeted for assassination by the CIA, but the Belgians got there first. His body was cut to pieces and dissolved in sulfuric acid. The US and its allies supported the murderous kleptomaniac Mobutu. By now Eastern Congo is the scene of the world's worst slaughters, assisted by US favorite Rwanda while warring militias feed the craving of Western multinationals for minerals for cell phones and other high-tech wonders. The picture generalizes too much of Africa, exacerbated by innumerable crimes. For Europe, all of this becomes a refugee crisis.

Do the waves of immigrants (obviously many of them are immigrants, not simply refugees from war-torn regions) penetrating the heart of Europe represent some kind of a "natural disaster," or is it purely the result of politics?


There is an element of natural disaster. The terrible drought in Syria that shattered the society was presumably the effect of global warming, which is not exactly natural. The Darfur crisis was in part the result of desertification that drove nomadic populations to settled areas. The awful Central African famines today may also be in part due to the assault on the environment during the "Anthropocene," the new geological era when human activities, mainly industrialization, have been destroying the prospects for decent survival, and will do so, unless curbed.

European Union officials are having an exceedingly difficult time coping with the refugee crisis because many EU member states are unwilling to do their part and accept anything more than just a handful of refugees. What does this say about EU governance and the values of many European societies?

EU governance works very efficiently to impose harsh austerity measures that devastate poorer countries and benefit Northern banks. But it has broken down almost completely when addressing a human catastrophe that is in substantial part the result of Western crimes. The burden has fallen on the few who were willing, at least temporarily, to do more than lift a finger, like Sweden and Germany. Many others have just closed their borders. Europe is trying to induce Turkey to keep the miserable wrecks away from its borders, just as the US is doing, pressuring Mexico to prevent those trying to escape the ruins of US crimes in Central America from reaching US borders. This is even described as a humane policy that reduces "illegal immigration."

What does all of this tell us about prevailing values? It is hard even to use the word "values," let alone to comment. That's particularly when writing in the United States, probably the safest country in the world, now consumed by a debate over whether to allow Syrians in at all because one might be a terrorist pretending to be a doctor, or at the extremes, which unfortunately is in the US mainstream, whether to allow any Muslims in at all, while a huge wall protects us from immigrants fleeing from the wreckage south of the border.

by C.J. Polychroniou and Noam Chomsky, Truthout | Read more:
Image: Andrew Rusk

Why Tokyo is the World’s Best Food City

[ed. I think Anthony Bourdain said the same thing.]

It’s pointless to engage in any debate about which city has the best food without mentioning Tokyo.

Tokyo is the answer I give when friends and I kick around the question, Where would you live for the rest of your life solely for the food? Why? Because Japan as a country is devoted to food, and in Tokyo that fixation is exponentially multiplied. It’s a city of places built on top of each other, a mass complex of restaurants.

Let me rattle off the reasons why Tokyo beats all other cities:

It has more Michelin stars than any other city in the world, should you choose to eat that kind of food. I’d argue that some of the best French food and some of the best Italian food is in Tokyo. All the great French chefs have outposts there. If I want to eat at L’Astrance, I can go to Tokyo and eat it with Japanese ingredients. The Japanese have been sending their best cooks to train in Europe for almost sixty years. If you look at the top kitchens around the world, there is at least one Japanese cook in nearly every one.

Japan has taken from everywhere, because that’s what Japanese culture does: they take and they polish and shine and they make it better. The rest of the world’s food cultures could disappear, and as long as Tokyo remains, everything will be okay. It’s the GenBank for food. Everything that is good in the world is there.

If I want to have sushi, there’s no better place on the planet. All of the best fish in the world is flown to Tokyo so the chefs there can have first pick of it—whether it’s Hokkaido sea urchin or bluefin tuna caught off of Long Island, it all moves through Tsukiji fish market before jokers in any other city get a crack at it.

If I want to have kaiseki, there are top Kyoto guys who have spots in Tokyo, and they’re pretty fucking good. If I want to visit places dedicated to singular food items, from tempura to tonkatsu to yakitori, they’ve got it all. They have street food, yakisoba, ramen. They have the best steakhouses in the world. They have the best fucking patisseries in the world. The best Pierre Hermé is in Tokyo, not in fucking Paris. You know why? Because of the fucking Japanese cooks. I can eat the best food in subways, I can eat the best food in the train station, I can eat the best food in the airport. It’s the one place in the world where I have to seek out bad food. It’s hard to find.

They have no stupid importation laws; they get the best shit. Europe exports their best shit to Japan, because they know the Japanese have better palates than dumb Americans. It’s true. Go to the local department stores and buy cheese. It’s amazing. (...)

I can craft a great meal from convenience stores. A fantastic meal. From properly made bento boxes, to a variety of instant ramen, to onigiri, to salads, to sandwiches, it’s all really good. The egg-salad sandwiches at all the convenience stores are amazing. All the fried chicken, delicious. The chain restaurants, amazing. KFC, Pizza Hut, TGI Fridays, Tony Roma’s, you name it. I’ve been to all of them. Guess what? They’re all awesome. You know why? They care a little bit more. That’s it. They just make better fucking food than anywhere else. It’s awesome.

Now let’s keep it interesting by switching and going over the cons. There really are only a few.

by David Chang, Lucky Peach |  Read more:
Image: uncredited

Wednesday, January 27, 2016


Eusebio + Christina Saenz de Santamaria, Crazy Beautiful

The People's Critics

[ed. I still think this is my favorite Pete Wells review.]

For one of his last meals as the chief restaurant critic of the New York Times, Sam Sifton ate at “the best restaurant in New York City: Per Se, in the Time Warner Center, just up the escalator from the mall, a jewel amid the zirconia.” He (re-)awarded it the Times’ highest rating, four stars, and was so moved that he savored one dish as one “might have a massage or a sunset.” And of course he did: No one would have expected any less for Thomas Keller, long considered one of America’s greatest living chefs.

That was five years ago. Earlier this month, Sifton’s replacement, Pete Wells, declared that “the perception of Per Se as one of the country’s great restaurants, which I shared after visits in the past, appear[s] out of date” and stripped the restaurant of two of its stars. Even though it had been anticipated, it’s hard to overstate the magnitude of Wells’ review in the restaurant world: It’s maybe sort of like if people still cared what music critics said about albums and the most important one of all wrote that, like, Radiohead’s new album is not that good and certainly not great but especially not perfect?

Anyways, Wells’ takedown was received with rapt and thunderous applause: It became one of his most-read reviews in his more than four years as the Times’ chief restaurant critic and sucked the sage-scented air out of almost every other conversation in the dining world, at least for a moment. And why not? People love to watch falling stars, especially when the crash is this spectacular: The greatest restaurant in New York from one of the greatest chefs in the country is in fact a smoldering garbage fire, and has been for a year, or maybe even longer. (...)

But there is something that distinguishes Pete Wells’ run as critic, and it’s not just his deep awareness that his potential audience is both larger and different than his predecessors—a savvy on full display in his atomic obliteration of Guy Fieri’s American Kitchen & Bar or four-star crown for Sushi Nakazawa (whose chef is mildly famous for being the apprentice who cried when he made the egg sushi correctly in Jiro Dreams of Sushi). It would be hard to overstate how profoundly high-end dining has changed since Per Se opened in 2004, during a decade or so that has been largely marked by the democratization of high-end cooking: Or, in a picture, carefully grown and obsessively sourced food, radically composed and meticulously prepared, then dropped onto your cramped table with deeply uncomfortable seats by a cranky, tattooed and taciturn waiter for tens of dollars a head. What might have seemed like sorcery in 2004, “hunt[ing] down superior ingredients—turning to Elysian Fields Farm for lamb, Snake River Farms for Kobe beef—and let[ting] them express themselves as clearly as possible” through “cooking as diligence and even perfectionism”—amount to mere table stakes for any remotely hyped restaurant in gentrified Brooklyn (or Manhattan or any major city) in 2016. What was praise from Bruni in 2004 reads like a recipe for inducing nausea today, in a world where the kind of diner who would save up for a meal at Per Se probably dreams of eating a single scallop off of a bed of smoking moss and juniper branch at Fäviken:
Sybaritic to the core, Per Se is big on truffles, and it is big on foie gras, which it prepares in many ways, depending on the night. I relished it most when it was poached sous vide, in a tightly sealed plastic pouch, with Sauternes and vanilla. The vanilla was a perfect accent, used in perfect proportion.
Leaving aside the dismal execution that Wells experienced, part of Per Se’s problem, in other words, is that it is no longer elite enough even in a city host to merely the fifth-greatest restaurant in the world. (Eleven Madison Park, which Pete Wells loved, by the way, is now more inaccessible than ever, with a starting price of $295 a head for dinner.)

by , The Awl |  Read more:
Image: John

Inside Facebook’s Decision to Blow Up the Like Button

The most drastic change to Facebook in years was born a year ago during an off-site at the Four Seasons Silicon Valley, a 10-minute drive from headquarters. Chris Cox, the social network’s chief product officer, led the discussion, asking each of the six executives around the conference room to list the top three projects they were most eager to tackle in 2015. When it was Cox’s turn, he dropped a bomb: They needed to do something about the “like” button.

The like button is the engine of Facebook and its most recognized symbol. A giant version of it adorns the entrance to the company’s campus in Menlo Park, Calif. Facebook’s 1.6 billion users click on it more than 6 billion times a day—more frequently than people conduct searches on Google—which affects billions of advertising dollars each quarter. Brands, publishers, and individuals constantly, and strategically, share the things they think will get the most likes. It’s the driver of social activity. A married couple posts perfectly posed selfies, proving they’re in love; a news organization offers up what’s fun and entertaining, hoping the likes will spread its content. All those likes tell Facebook what’s popular and should be shown most often on the News Feed. But the button is also a blunt, clumsy tool. Someone announces her divorce on the site, and friends grit their teeth and “like” it. There’s a devastating earthquake in Nepal, and invariably a few overeager clickers give it the ol’ thumbs-up.

Changing the button is like Coca-Cola messing with its secret recipe. Cox had tried to battle the like button a few times before, but no idea was good enough to qualify for public testing. “This was a feature that was right in the heart of the way you use Facebook, so it needed to be executed really well in order to not detract and clutter up the experience,” he says. “All of the other attempts had failed.” The obvious alternative, a “dislike” button, had been rejected on the grounds that it would sow too much negativity.

Cox told the Four Seasons gathering that the time was finally right for a change, now that Facebook had successfully transitioned a majority of its business to smartphones. His top deputy, Adam Mosseri, took a deep breath. “Yes, I’m with you,” he said solemnly.

Later that week, Cox brought up the project with his boss and longtime friend. Mark Zuckerberg’s response showed just how much leeway Cox has to take risks with Facebook’s most important service. “He said something like, ‘Yes, do it.’ He was fully supportive,” Cox says. “Good luck,” he remembers Zuckerberg telling him. “That’s a hard one.”

The solution would eventually be named Reactions. It will arrive soon. And it will expand the range of Facebook-compatible human emotions from one to six.

Cox isn’t a founder, doesn’t serve on the boards of other companies, and hasn’t written any best-selling books. He’s not a billionaire, just a centi-millionaire. He joined Facebook in 2005, too late to be depicted in The Social Network, David Fincher’s movie about the company’s early days. While Zuckerberg manages an expanding portfolio of side businesses and projects—Instagram, WhatsApp, the Oculus Rift virtual-reality headset, a planned fleet of 737-size, carbon-fiber, Internet-beaming drones—Cox runs “the big blue app.” That’s Facebook’s term for the social network that we all compulsively check a few dozen times a day. He’s also the keeper of the company’s cultural flame, the guy who gives a rousing welcome speech to new recruits every Monday morning at 9 a.m. It’s a safe bet that all 12,000 Facebook employees know his name.

He’s probably the closest thing Internet users have to an editor-in-chief of their digital life. Cox’s team manages the News Feed, that endless scroll of Facebook updates. Invisible formulas govern what stories users see as they scroll, weighing baby pictures against political outrage. “Chris is the voice for the user,” says Bret Taylor, Facebook’s former chief technology officer. “He’s the guy in the room with Zuckerberg explaining how people might react to a change.”

by Sarah Frier, Bloomberg | Read more:
Image: Adam Amengual

Why the Calorie is Broken

Bo Nash is 38. He lives in Arlington, Texas, where he’s a technology director for a textbook publisher. He has a wife and child. And he’s 5’10” and 245 lbs—which means he is classed as obese.

In an effort to lose weight, Nash uses an app to record the calories he consumes and a Fitbit band to track the energy he expends. These tools bring an apparent precision: Nash can quantify the calories in each cracker crunched and stair climbed. But when it comes to weight gain, he finds that not all calories are equal. How much weight he gains or loses seems to depend less on the total number of calories and more on where the calories come from and how he consumes them. The unit, he says, has a “nebulous quality to it."

Tara Haelle is also obese. She had her second son on St Patrick’s Day in 2014 and hasn’t been able to lose the 70 lbs she gained during pregnancy. Haelle is a freelance science journalist based in Illinois. She understands the science of weight loss, but like Nash, she doesn’t see it translate into practice. “It makes sense from a mathematical and scientific and even visceral level that what you put in and what you take out, measured in the discrete unit of the calorie, should balance,” says Haelle. “But it doesn’t seem to work that way.”(...)

The process of counting calories begins in an anonymous office block in Maryland. The building is home to the Beltsville Human Nutrition Research Center, a facility run by the US Department of Agriculture. When we visit, the kitchen staff are preparing dinner for people enrolled in a study. Plastic dinner trays are laid out with meatloaf, mashed potatoes, corn, brown bread, a chocolate-chip scone, vanilla yoghurt and a can of tomato juice. The staff weigh and bag each item, sometimes adding an extra two-centimeter sliver of bread to ensure a tray’s contents add up to the exact calorie requirements of each participant. “We actually get compliments about the food,” says David Baer, a supervisory research physiologist with the Department.

The work that Baer and colleagues do draws on centuries-old techniques. Nestle traces modern attempts to understand food and energy back to a French aristocrat and chemist named Antoine Lavoisier. In the early 1780s, Lavoisier developed a triple-walled metal canister large enough to house a guinea pig. Inside the walls was a layer of ice. Lavoisier knew how much energy was required to melt ice, so he could estimate the heat the animal emitted by measuring the amount of water that dripped from the canister. What Lavoisier didn’t realize—and never had time to find out; he was put to the guillotine during the Revolution—was that measuring the heat emitted by his guinea pigs was a way to estimate the amount of energy they had extracted from the food they were digesting.

Until recently, the scientists at Beltsville used what was essentially a scaled-up version of Lavoisier’s canister to estimate the energy used by humans: a small room in which a person could sleep, eat, excrete, and walk on a treadmill, while temperature sensors embedded in the walls measured the heat given off and thus the calories burned. (We now measure this energy in calories. Roughly speaking, one calorie is the heat required to raise the temperature of one kilogram of water by one degree Celsius.) Today, those ‘direct-heat’ calorimeters have largely been replaced by ‘indirect-heat’ systems, in which sensors measure oxygen intake and carbon dioxide exhalations. Scientists know how much energy is used during the metabolic processes that create the carbon dioxide we breathe out, so they can work backwards to deduce that, for example, a human who has exhaled 15 liters of carbon dioxide must have used 94 calories of energy.
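[ed. To make that conversion concrete, here is a minimal back-of-envelope sketch using only the figure quoted above (15 liters of exhaled CO2 corresponding to roughly 94 calories). Real indirect calorimetry also tracks oxygen uptake and the mix of fuels being burned, so treat this as an illustration rather than a lab protocol.]

```python
# Back-of-envelope indirect calorimetry. The only input is the article's example:
# 15 liters of exhaled CO2 corresponds to roughly 94 kcal of energy used.
# Real systems also measure oxygen intake and the fuel mix being metabolized,
# so the constant below is an illustrative approximation, not a lab value.

KCAL_PER_LITER_CO2 = 94 / 15  # ~6.3 kcal per liter of exhaled CO2

def kcal_from_co2(liters_co2: float) -> float:
    """Estimate energy expenditure from the volume of exhaled CO2, in liters."""
    return liters_co2 * KCAL_PER_LITER_CO2

if __name__ == "__main__":
    for liters in (15, 30, 100):
        print(f"{liters:>4} L CO2 -> ~{kcal_from_co2(liters):.0f} kcal")
```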

The facility’s three indirect calorimeters are down the halls from the research kitchen. “They’re basically nothing more than walk-in coolers, modified to allow people to live in here,” physiologist William Rumpler explains as he shows us around. Inside each white room, a single bed is folded up against the wall, alongside a toilet, sink, a small desk and chair, and a short treadmill. A couple of airlocks allow food, urine, faeces and blood samples to be passed back and forth. Apart from these reminders of the room’s purpose, the vinyl-floored, fluorescent-lit units resemble a 1970s dorm room. Rumpler explains that subjects typically spend 24 to 48 hours inside the calorimeter, following a highly structured schedule. (...)

Measuring the calories in food itself relies on another modification of Lavoisier’s device. In 1848, an Irish chemist called Thomas Andrews realized that he could estimate calorie content by setting food on fire in a chamber and measuring the temperature change in the surrounding water. (Burning food is chemically similar to the ways in which our bodies break food down, despite being much faster and less controlled.) Versions of Andrews’s ‘bomb calorimeter’ are used to measure the calories in food today. At the Beltsville center, samples of the meatloaf, mashed potatoes and tomato juice have been incinerated in the lab’s bomb calorimeter. “We freeze-dry it, crush into a powder, and fire it,” says Baer.

Humans are not bomb calorimeters, of course, and we don’t extract every calorie from the food we eat. This problem was addressed at the end of the 19th century, in one of the more epic experiments in the history of nutrition science. Wilbur Atwater, a Department of Agriculture scientist, began by measuring the calories contained in more than 4,000 foods. Then he fed those foods to volunteers and collected their faeces, which he incinerated in a bomb calorimeter. After subtracting the energy measured in the faeces from that in the food, he arrived at the Atwater values, numbers that represent the available energy in each gram of protein, carbohydrate and fat. These century-old figures remain the basis for today’s standards. When Baer wants to know the calories per gram figure for that night’s meatloaf, he corrects the bomb calorimeter results using Atwater values.
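[ed. For readers curious how Atwater values become the number on a label, here is a minimal sketch using the general factors of 4, 4, and 9 kcal per gram for protein, carbohydrate, and fat. The gram amounts in the example are hypothetical, and, as the article goes on to explain, manufacturers may instead use modified, food-specific tables that give somewhat different answers.]

```python
# Minimal sketch of a label-style calorie estimate using the general Atwater
# factors (4 kcal/g protein, 4 kcal/g carbohydrate, 9 kcal/g fat). Modified,
# food-specific tables exist and yield slightly different counts.

ATWATER_KCAL_PER_GRAM = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}

def label_calories(grams: dict[str, float]) -> float:
    """Estimate available calories from grams of protein, carbohydrate, and fat."""
    return sum(ATWATER_KCAL_PER_GRAM[macro] * g for macro, g in grams.items())

# Hypothetical serving: 10 g protein, 40 g carbohydrate, 12 g fat.
print(label_calories({"protein": 10, "carbohydrate": 40, "fat": 12}))  # 308.0 kcal
```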

Trouble begins

This entire enterprise, from the Beltsville facility to the numbers on the packets of the food we buy, creates an aura of scientific precision around the business of counting calories. That precision is illusory.

The trouble begins at source, with the lists compiled by Atwater and others. Companies are allowed to incinerate freeze-dried pellets of product in a bomb calorimeter to arrive at calorie counts, though most avoid that hassle, says Marion Nestle. Some use the data developed by Atwater in the late 1800s. But the Food and Drug Administration (FDA) also allows companies to use a modified set of values, published by the Department of Agriculture in 1955, that take into account our ability to digest different foods in different ways.

Atwater’s numbers say that Tara Haelle can extract 8.9 calories per gram of fat in a plate of her favorite Tex-Mex refried beans; the modified table shows that, thanks to the indigestibility of some of the plant fibers in legumes, she only gets 8.3 calories per gram. Depending on the calorie-measuring method that a company chooses—the FDA allows two more variations on the theme, for a total of five—a given serving of spaghetti can contain from 200 to 210 calories. These uncertainties can add up. Haelle and Bo Nash might deny themselves a snack or sweat out another few floors on the StairMaster to make sure they don’t go 100 calories over their daily limit. If the data in their calorie counts is wrong, they can go over regardless.
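
A back-of-the-envelope calculation shows how the choice of table alone moves the numbers. The per-gram figures are the ones quoted above; the 30-gram fat portion is a hypothetical serving of the beans.

```python
# How much the choice of table shifts one serving. The 8.9 and 8.3
# calories-per-gram figures are quoted in the text; the 30 g of fat per
# serving is a hypothetical portion.

fat_grams = 30.0
original = 8.9 * fat_grams   # Atwater's original factor for fat
modified = 8.3 * fat_grams   # 1955 modified factor for fat in legumes
print(f"Same plate, two tables: {original:.0f} vs {modified:.0f} calories "
      f"({original - modified:.0f}-calorie gap from the fat alone)")
```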

There’s also the issue of serving size. After visiting over 40 US chain restaurants, including Olive Garden, Outback Steakhouse and PF Chang’s China Bistro, Susan Roberts of Tufts University’s nutrition research center and colleagues discovered that a dish listed as having, say, 500 calories could contain 800 instead. The difference could easily have been caused, says Roberts, by local chefs heaping on extra french fries or pouring a dollop more sauce. It would be almost impossible for a calorie-counting dieter to accurately estimate their intake given this kind of variation.

Even if the calorie counts themselves were accurate, dieters like Haelle and Nash would have to contend with the significant variations between the total calories in the food and the amount our bodies extract. These variations, which scientists have only recently started to understand, go beyond the inaccuracies in the numbers on the back of food packaging. In fact, the new research calls into question the validity of nutrition science’s core belief that a calorie is a calorie.

Using the Beltsville facilities, for instance, Baer and his colleagues found that our bodies sometimes extract fewer calories than the number listed on the label. Participants in their studies absorbed around a third fewer calories from almonds than the modified Atwater values suggest. For walnuts, the difference was 21 per cent. This is good news for someone who is counting calories and likes to snack on almonds or walnuts: he or she is absorbing far fewer calories than expected. The difference, Baer suspects, is due to the nuts’ particular structure. “All the nutrients—the fat and the protein and things like that—they’re inside this plant cell wall.” Unless those walls are broken down—by processing, chewing or cooking—some of the calories remain off-limits to the body, and thus are excreted rather than absorbed.

by Cynthia Graber and Nicola Twilley, Ars Technica | Read more:
Image: Catherine Losing

Tuesday, January 26, 2016


Fishing day
photo: markk

photo: markk

Papio
photo: markk

Michael Ward
via:

[ed. I finally have an answer for my fourth grade spelling teacher.]
via:

The Reductive Seduction of Other People’s Problems

Let’s pretend, for a moment, that you are a 22-year-old college student in Kampala, Uganda. You’re sitting in class and discreetly scrolling through Facebook on your phone. You see that there has been another mass shooting in America, this time in a place called San Bernardino. You’ve never heard of it. You’ve never been to America. But you’ve certainly heard a lot about gun violence in the U.S. It seems like a new mass shooting happens every week.

You wonder if you could go there and get stricter gun legislation passed. You’d be a hero to the American people, a problem-solver, a lifesaver. How hard could it be? Maybe there’s a fellowship for high-minded people like you to go to America after college and train as social entrepreneurs. You could start the nonprofit organization that ends mass shootings, maybe even win a humanitarian award by the time you are 30.

Sound hopelessly naïve? Maybe even a little deluded? It is. And yet, it’s not much different from how too many Americans think about social change in the “Global South.”

If you asked a 22-year-old American about gun control in this country, she would probably tell you that it’s a lot more complicated than taking some workshops on social entrepreneurship and starting a non-profit. She might tell her counterpart from Kampala about the intractable nature of our legislative branch, the long history of gun culture in this country and its passionate defenders, the complexity of mental illness and its treatment. She would perhaps mention the added complication of agitating for change as an outsider.

But if you ask that same 22-year-old American about some of the most pressing problems in a place like Uganda — rural hunger or girls' secondary education or homophobia — she might see them as solvable. Maybe even easily solvable.

I’ve begun to think about this trend as the reductive seduction of other people’s problems. It’s not malicious. In many ways, it’s psychologically defensible; we don’t know what we don’t know.

If you’re young, privileged, and interested in creating a life of meaning, of course you’d be attracted to solving problems that seem urgent and readily solvable. Of course you’d want to apply for prestigious fellowships that mark you as an ambitious altruist among your peers. Of course you’d want to fly on planes to exotic locations with, importantly, exotic problems.

There is a whole “industry” set up to nurture these desires and delusions — most notably, the 1.5 million nonprofit organizations registered in the U.S., many of them focused on helping people abroad. In other words, the young American ego doesn’t appear in a vacuum. Its hubris is encouraged through job and internship opportunities, conferences galore, and cultural propaganda — encompassed so fully in the patronizing, dangerously simple phrase “save the world.”

by Courtney Martin, Medium | Read more:
Image: Michael Marsicano

Design for Living

What’s great about Goethe?

In the English-speaking world, we are used to thinking of our greatest writer as an enigma, or a blank. Though there’s enough historical evidence to tell us when Shakespeare was born and when he died, and more than enough to prove that he wrote the plays ascribed to him, the record is thin. Indeed, the persistence of conspiracy theories attributing Shakespeare’s work to the Earl of Oxford or other candidates is a symptom of how little we actually understand about his life. His religious beliefs, his love affairs, his relationships with other writers, his daily routine—these are permanent mysteries, and biographies of Shakespeare are always mostly speculation.

To get a sense of how Johann Wolfgang von Goethe dominates German literature, we would have to imagine a Shakespeare known to the last inch—a Shakespeare squared or cubed. Goethe’s significance is only roughly indicated by the sheer scope of his collected works, which run to a hundred and forty-three volumes. Here is a writer who produced not only some of his language’s greatest plays but hundreds of major poems of all kinds—enough to keep generations of composers supplied with texts for their songs. Now consider that he also wrote three of the most influential novels in European literature, and a series of classic memoirs documenting his childhood and his travels, and essays on scientific subjects ranging from the theory of colors to the morphology of plants.

Then, there are several volumes of his recorded table talk, more than twenty thousand extant letters, and the reminiscences of the many visitors who met him throughout his sixty-year career as one of Europe’s most famous men. Finally, Goethe accomplished all this while simultaneously working as a senior civil servant in the duchy of Weimar, where he was responsible for everything from mining operations to casting actors in the court theatre. If he hadn’t lived from 1749 to 1832, safely into the modern era and the age of print, but had instead flourished when Shakespeare did, there would certainly be scholars today theorizing that the life and work of half a dozen men had been combined under Goethe’s name. As it is, in the words of Nicholas Boyle, his leading English-language biographer, “More must be known, or at any rate there must be more to know, about Goethe than about almost any other human being.”

Germans began debating the significance of the Goethe phenomenon while he was still in his twenties, and they have never stopped. His lifetime, spanning some of the most monumental disruptions in modern history, is referred to as a single whole, the Goethezeit, or Age of Goethe. Worshipped as the greatest genius in German history and as an exemplary poet and human being, he has also been criticized for his political conservatism and quietism, which in the twentieth century came to seem sinister legacies. Indeed, Goethe was hostile to both the French Revolution and the German nationalist movement that sprang up in reaction to it. More radical and Romantic spirits especially disdained the way this titan seemed content to be a servant to princes—and Grand Duke Karl August of Weimar, despite his title, was a fairly minor prince—in an age of revolution.

One famous anecdote concerns Goethe and Beethoven, who were together at a spa resort when they unexpectedly met a party of German royalty on the street. Goethe deferentially stood aside and removed his hat, while Beethoven kept his hat firmly on his head and plowed through the royal group, forcing them to make way—which they did, while offering the composer friendly greetings. Here was a contrast of temperaments, but also of generations. Goethe belonged to the courtly past, when artists were the clients of princes, while Beethoven represented the Romantic future, when princes would clamor to associate with artists. Historians dispute whether the incident actually took place, but if it didn’t the story is arguably even more revealing; the event became famous because it symbolized the way people thought about Goethe and his values.

Goethe’s fame notwithstanding, he is strangely neglected in the English-speaking world. English readers are notoriously indifferent to the poets of other cultures, and Goethe’s poems, unfortunately, seldom come across vividly in translation. This is partly because Goethe so often cloaks his sophistication in deceptively simple language. “Heidenröslein,” one of his earliest great poems, is written in the style of a folk song and almost entirely in words of one or two syllables: “Sah ein Knab’ ein Röslein stehn” (“A boy saw a little rose standing”). “The Essential Goethe” (Princeton), a rich new anthology, a thousand pages long, edited by Matthew Bell, which valiantly seeks to display every facet of Goethe’s genius, gives the poem in a translation by John Frederick Nims:

Urchin blurts: “I’ll pick you, though,
Rosebud in the heather!”
Rosebud: “Then I’ll stick you so
That there’s no forgetting, no!
I’ll not stand it, ever!”

Nims reproduces the rhythm of the original precisely. But to do so he adds words that aren’t in the original (“though”) and resorts to distractingly winsome diction (“urchin,” “I’ll not”). The result is clumsy and charmless. The very simplicity of Goethe’s language makes his poetry practically untranslatable.

English speakers are more hospitable to fiction in translation, and yet when was the last time you heard someone mention “Wilhelm Meister’s Apprenticeship” or “Elective Affinities,” Goethe’s long fictions? These books have a good claim to have founded two of the major genres of the modern novel—respectively, the Bildungsroman and the novel of adultery. Goethe’s first novel, “The Sorrows of Young Werther,” is better known, mainly because it represented such an enormous milestone in literary history; the first German international best-seller, it is said to have started a craze for suicide among young people emulating its hero. But in English it remains a book more famous than read.

This wasn’t always the case. Victorian intellectuals revered Goethe as the venerable Sage of Weimar. Thomas Carlyle implored the reading public to “close thy Byron, open thy Goethe”—which was as much as to say, “Grow up!” Matthew Arnold saw Goethe as a kind of healer and liberator, calling him the “physician of the Iron Age,” who “read each wound, each weakness” of the “suffering human race.” For these writers, Goethe seemed to possess something the modern world lacked: wisdom, the ability to understand life and how it should be lived. It was this very quality that led to his fall from favor in the post-Victorian age. For the modernists, being spiritually sick was a condition of intellectual respectability, and T. S. Eliot wrote that “there is something artificial and even priggish about Goethe’s healthiness.” Reading Goethe today, even through the veil of translation, is most valuable as an encounter with a way of thinking and feeling that has grown foreign to us.

by Adam Kirsch, New Yorker | Read more:
Image: Boris Pelcher

Monday, January 25, 2016

Sunday, January 24, 2016

A Drug to Cure Fear

Who among us hasn’t wanted to let go of anxiety or forget about fear? Phobias, panic attacks and disorders like post-traumatic stress are extremely common: 29 percent of American adults will suffer from anxiety at some point in their lives.

Sitting at the heart of much anxiety and fear is emotional memory — all the associations that you have between various stimuli and experiences and your emotional response to them. Whether it’s the fear of being embarrassed while talking to strangers (typical of social phobia) or the dread of being attacked while walking down a dark street after you’ve been assaulted (a symptom of PTSD), you have learned that a previously harmless situation predicts something dangerous.

It has been an article of faith in neuroscience and psychiatry that, once formed, emotional memories are permanent. Afraid of heights or spiders? The best we could do was to get you to tolerate them, but we could never really rid you of your initial fear. Or so the thinking has gone.

The current standard of treatment for such phobias revolves around exposure therapy. This involves repeatedly presenting the feared object or frightening memory in a safe setting, so that the patient acquires a new safe memory that resides in his brain alongside the bad memory. As long as the new memory has the upper hand, his fear is suppressed. But if he is re-traumatized or re-exposed with sufficient intensity to the original experience, his old fear will awaken with a vengeance.

This is one of the limitations of exposure therapy, along with the fact that it generally works in only about half of the PTSD patients who try it. Many also find it upsetting or intolerable to relive memories of assaults and other traumatizing experiences.

We urgently need more effective treatments for anxiety disorders. What if we could do better than creating a new safe memory — and actually get rid of emotions attached to the old bad one?

New research suggests that it may be possible not just to change certain types of emotional memories, but even to erase them. We’ve learned that memories are uniquely vulnerable to alteration at two points: when we first lay them down, and later, when we retrieve them.

Merel Kindt, a professor of psychology at the University of Amsterdam, and her colleagues have seemingly erased the emotional fear response in healthy people with arachnophobia. For a study published last month in the journal Biological Psychiatry, she compared three groups made up of 45 subjects in total. One group was exposed to a tarantula in a glass jar for two minutes, and then given a beta-blocker called propranolol that is commonly prescribed to patients for performance anxiety; one was exposed to the tarantula and given a placebo; and one was just given propranolol without being shown the spider, to rule out the possibility that propranolol by itself could decrease spider fear.

Dr. Kindt assessed the subjects’ anxiety when they were shown the spider the first time, then again three months later, and finally after a year. What she found was remarkable. Those who got the propranolol alone and those who got the placebo had no improvement in their anxiety. But the arachnophobes who were exposed to the spider and given the drug were able to touch the tarantula within days and, by three months, many felt comfortable holding the spider with their bare hands. Their fear did not return even at the end of one year.

How does this work? Well, propranolol blocks the effects of norepinephrine in the brain. This chemical, which is similar to adrenaline, enhances learning, so blocking it disrupts the way a memory is put back in storage after it is retrieved — a process called reconsolidation.

Arachnophobes have an emotional memory that involves an association between spiders and a dreaded outcome, like a spider bite. This “fear memory” is the source of their phobia — even if (as is often the case) it never actually happened. The basic idea is that when Dr. Kindt briefly exposed the subjects to the spider, she reactivated their fear, which made the fear memory susceptible to the influence of propranolol.

Reconsolidation is a bit like pulling up a file on your computer, rewriting the same material in a bigger, bolder font and saving it again. Disrupting reconsolidation with propranolol or another drug is akin to retrieving this document, erasing some or all of the text and then writing something new in its place.

Dr. Kindt is not the first to demonstrate that disrupting reconsolidation can weaken or erase emotional memories. Several studies of rats done in 2000 showed that a drug called anisomycin, which blocks the synthesis of proteins in the brain, could reduce fear associations. In one, researchers taught rats to fear a sound by pairing it with a shock. After the animals were fear-conditioned, they were presented with the sound and then immediately given the drug. When the animals were exposed to the sound again, they no longer appeared afraid; they had forgotten their original fear.

Curiously, there is a very narrow time window after retrieving a fear memory when you can disrupt that memory — hours, in the animal studies — before it closes and the drug has no effect.

These studies suggest that someday, a single dose of a drug, combined with exposure to your fear at the right moment, could free you of that fear forever. But there’s a flip side to this story about how to undo emotional learning: how to strengthen it. We can do that with drugs as well, and may have been doing it for some time.

by Richard A. Friedman, NY Times | Read more:
Image: Jillian Tamaki