Friday, January 8, 2016

The Comfort Food Diaries: All I Want Are Some Potato Skins

We are at a bar somewhere near our office north of Times Square, and all I want are some potato skins, but all they have on the menu are mozzarella sticks and chicken wings and nachos, which are terrible if you ask me, at least compared to potato skins. Potato skins have everything you could ever hope for in a bar food—the crunch of the skin, the pull of the cheddar, the stink of the green onion, the chew of the bacon bits. A plate of potato skins and a pint of cold lager is the best pairing American cuisine has to offer. I mean that. I really do.

The bar is one of those fake Irish pubs that are ubiquitous in Midtown Manhattan. Let's call it Magnus O'Malley O'Sullivan's. It's the kind of place that smells of stale beer, cleaning products, and lacquer. It has no history. It has no lore. There are no regulars. The bartender can't pour a Guinness to save his life. In one of the basement bathrooms, a tourist from Indiana is likely throwing up after one too many Long Island Iced Teas. As a Korn song plays from an iPod, we get two orders of wings and two orders of mozzarella sticks. I can't keep my eyes off the plasma TVs.

We gather here once or twice a week to complain about our jobs. We work at a home improvement magazine, where I serve as an associate editor. I dislike my boss immensely, and he dislikes me just the same. It's the middle of the recession, and we keep having layoffs, but for some reason he never fires me. After each purge, we gather at this bar with our fallen ex-colleagues, and at some point one of them inevitably looks at me and says something like, "I can't believe you made it through." As I said, I dislike my boss immensely, and he dislikes me. Everyone knows he does.

I am in my late thirties, and I am anxious all the time. I take pills for it, but they don't work. I'm convinced I am dying of several diseases, because I have been a hypochondriac ever since I was a freshman in college and mistook two salivary glands under my tongue for cancerous tumors, and I didn't go to the doctor because I was terrified he would tell me I wasn't mistaken at all. I stand outside my office each day, chain-smoking cigarettes and worrying about my health. Creditors keep calling me because I'm tens of thousands of dollars in debt; I can't pay the rent on my Brooklyn apartment anymore. My girlfriend moved out. After work, all I want is a cold lager and some potato skins, because I am convinced they will fix everything. No, they won't pay the rent, but they have their own special powers. (...)

When I was a teenager, my father and I made a point of watching St. Elsewhere together each Wednesday night. It was after my parents' divorce, and we were living in a rented two-story town house. After years of tumult, things had finally started to settle down, and I was happy just to spend time with him. We had two floral-patterned love seats that used to be part of a formal living room set, but since Mom got full custody of the family room sofa, they now served as our primary seating. There were rips in the arms, and one of the legs had snapped off. We had each claimed one of our own, mine on the right side of the living room, his on the left. Before the show came on at 10 p.m., I would go into the cupboard to fetch a bag of Tato Skins, a concave chip that looked sort of like a tongue and was supposed to taste just like potato skins, and did. Then I would go into the refrigerator to retrieve a tin of Frito-Lay cheddar cheese dip and bring it into the living room, where Dad and I would split the entire bag, watching a medical drama that, in the series finale, was revealed to be little more than a young boy's dream.

I leave the fake Irish pub and walk by a T.G.I. Friday's on 46th Street. It's about 10 p.m., and the place is filled with tourists. I stand outside watching throngs of people from Ohio and Michigan and everywhere else but New York pass me by. I admire the fact that sometime in the next day or two, they'll all pack their bags in their Marriott hotel rooms and fly back to places that are so much more familiar to me than this one is.

There's a woman I have a crush on, and I fumble with my cellphone, scrolling for her number. I want to see if she'll meet me here. I want to call her and say: "Hey, wanna meet me at Friday's for some potato skins?" But then I realize how affected this will sound. She'll think I'm asking her to eat potato skins at Friday's because it's ironic, but it's not ironic at all. It's sacred.

I put my phone back into my pocket and totter toward the subway. This isn't the time for potato skins, I think to myself. But will there ever be a time again? Maybe potato skins are best left alone as a childhood memory. Yes, they are my favorite comfort food, but if I eat them, they'll just remind me how uncomfortable I am. In my city. In my middle age. In my life.

by Keith Pandolfi, Serious Eats |  Read more:
Image: Zac Overman

George Booth
via:

Therapy Wars: the Revenge of Freud

Dr David Pollens is a psychoanalyst who sees his patients in a modest ground-floor office on the Upper East Side of Manhattan, a neighbourhood probably only rivalled by the Upper West Side for the highest concentration of therapists anywhere on the planet. Pollens, who is in his early 60s, with thinning silver hair, sits in a wooden armchair at the head of a couch; his patients lie on the couch, facing away from him, the better to explore their most embarrassing fears or fantasies. Many of them come several times a week, sometimes for years, in keeping with analytic tradition. He has an impressive track record treating anxiety, depression and other disorders in adults and children, through the medium of uncensored and largely unstructured talk.

To visit Pollens, as I did one dark winter’s afternoon late last year, is to plunge immediately into the arcane Freudian language of “resistance” and “neurosis”, “transference” and “counter-transference”. He exudes a sort of warm neutrality; you could easily imagine telling him your most troubling secrets. Like other members of his tribe, Pollens sees himself as an excavator of the catacombs of the unconscious: of the sexual drives that lurk beneath awareness; the hatred we feel for those we claim to love; and the other distasteful truths about ourselves we don’t know, and often don’t wish to know.

But there’s a very well-known narrative when it comes to therapy and the relief of suffering – and it leaves Pollens and his fellow psychoanalysts decisively on the wrong side of history. For a start, Freud (this story goes) has been debunked. Young boys don’t lust after their mothers, or fear their fathers will castrate them; adolescent girls don’t envy their brothers’ penises. No brain scan has ever located the ego, super-ego or id. The practice of charging clients steep fees to ponder their childhoods for years – while characterising any objections to this process as “resistance”, demanding further psychoanalysis – looks to many like a scam. “Arguably no other notable figure in history was so fantastically wrong about nearly every important thing he had to say” as Sigmund Freud, the philosopher Todd Dufresne declared a few years back, summing up the consensus and echoing the Nobel prize-winning scientist Peter Medawar, who in 1975 called psychoanalysis “the most stupendous intellectual confidence trick of the 20th century”. It was, Medawar went on, “a terminal product as well – something akin to a dinosaur or a zeppelin in the history of ideas, a vast structure of radically unsound design and with no posterity.”

A jumble of therapies emerged in Freud’s wake, as therapists struggled to put their endeavours on a sounder empirical footing. But from all these approaches – including humanistic therapy, interpersonal therapy, transpersonal therapy, transactional analysis and so on – it’s generally agreed that one emerged triumphant. Cognitive behavioural therapy, or CBT, is a down-to-earth technique focused not on the past but the present; not on mysterious inner drives, but on adjusting the unhelpful thought patterns that cause negative emotions. In contrast to the meandering conversations of psychoanalysis, a typical CBT exercise might involve filling out a flowchart to identify the self-critical “automatic thoughts” that occur whenever you face a setback, like being criticised at work, or rejected after a date.

CBT has always had its critics, primarily on the left, because its cheapness – and its focus on getting people quickly back to productive work – makes it suspiciously attractive to cost-cutting politicians. But even those opposed to it on ideological grounds have rarely questioned that CBT does the job. Since it first emerged in the 1960s and 1970s, so many studies have stacked up in its favour that, these days, the clinical jargon “empirically supported therapies” is usually just a synonym for CBT: it’s the one that’s based on facts. Seek a therapy referral on the NHS today, and you’re much more likely to end up, not in anything resembling psychoanalysis, but in a short series of highly structured meetings with a CBT practitioner, or perhaps learning methods to interrupt your “catastrophising” thinking via a PowerPoint presentation, or online.

Yet rumblings of dissent from the vanquished psychoanalytic old guard have never quite gone away. At their core is a fundamental disagreement about human nature – about why we suffer, and how, if ever, we can hope to find peace of mind. CBT embodies a very specific view of painful emotions: that they’re primarily something to be eliminated, or failing that, made tolerable. A condition such as depression, then, is a bit like a cancerous tumour: sure, it might be useful to figure out where it came from – but it’s far more important to get rid of it. CBT doesn’t exactly claim that happiness is easy, but it does imply that it’s relatively simple: your distress is caused by your irrational beliefs, and it’s within your power to seize hold of those beliefs and change them.

Psychoanalysts contend that things are much more complicated. For one thing, psychological pain needs first not to be eliminated, but understood. From this perspective, depression is less like a tumour and more like a stabbing pain in your abdomen: it’s telling you something, and you need to find out what. (No responsible GP would just pump you with painkillers and send you home.) And happiness – if such a thing is even achievable – is a much murkier matter. We don’t really know our own minds, and we often have powerful motives for keeping things that way. We see life through the lens of our earliest relationships, though we usually don’t realise it; we want contradictory things; and change is slow and hard. Our conscious minds are tiny iceberg-tips on the dark ocean of the unconscious – and you can’t truly explore that ocean by means of CBT’s simple, standardised, science-tested steps.

This viewpoint has much romantic appeal. But the analysts’ arguments fell on deaf ears so long as experiment after experiment seemed to confirm the superiority of CBT – which helps explain the shocked response to a study, published last May, that seemed to show CBT getting less and less effective, as a treatment for depression, over time.

Examining scores of earlier experimental trials, two researchers from Norway concluded that its effect size – a technical measure of its usefulness – had fallen by half since 1977. (In the unlikely event that this trend were to persist, it could be entirely useless in a few decades.) Had CBT somehow benefited from a kind of placebo effect all along, effective only so long as people believed it was a miracle cure?
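That parenthetical is just straight-line extrapolation, and it is easy to reproduce. In the sketch below, the 1977 and 2015 effect-size values (0.6 and 0.3) are hypothetical placeholders chosen only to match the "fallen by half" claim; they are not figures from the Norwegian meta-analysis:

```python
# Rough linear extrapolation of a declining effect size.
# The start/end values passed in below are hypothetical placeholders,
# chosen only to illustrate a halving between 1977 and 2015.

def years_until_zero(e_start, e_end, y_start, y_end):
    """Return the year at which a linearly declining effect size hits zero."""
    slope = (e_end - e_start) / (y_end - y_start)  # change per year (negative)
    return y_end + (0 - e_end) / slope

# Suppose the effect size was 0.6 in 1977 and half that (0.3) in 2015.
year_zero = years_until_zero(0.6, 0.3, 1977, 2015)
print(round(year_zero))  # prints 2053
```

On those assumed numbers, the same decline repeated once more reaches zero in the early 2050s, which is all the "entirely useless in a few decades" quip amounts to.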

That puzzle was still being digested when researchers at London’s Tavistock clinic published results in October from the first rigorous NHS study of long-term psychoanalysis as a treatment for chronic depression. For the most severely depressed, it concluded, 18 months of analysis worked far better – and with much longer-lasting effects – than “treatment as usual” on the NHS, which included some CBT. Two years after the various treatments ended, 44% of analysis patients no longer met the criteria for major depression, compared to one-tenth of the others. Around the same time, the Swedish press reported a finding from government auditors there: that a multimillion pound scheme to reorient mental healthcare towards CBT had proved completely ineffective in meeting its goals.

Such findings, it turns out, aren’t isolated – and in their midst, a newly emboldened band of psychoanalytic therapists are pressing the case that CBT’s pre-eminence has been largely built on sand. Indeed, they argue that teaching people to “think themselves to wellness” might sometimes make things worse. “Every thoughtful person knows that self-understanding isn’t something you get from the drive-thru,” said Jonathan Shedler, a psychologist at the University of Colorado medical school, who is one of CBT’s most unsparing critics. His default bearing is one of wry good humour, but exasperation ruffled his demeanour whenever our conversation dwelt too long on CBT’s claims of supremacy. “Novelists and poets seem to have understood this truth for thousands of years. It’s only in the last few decades that people have said, ‘Oh, no, in 16 sessions we can change lifelong patterns!’” If Shedler and others are right, it may be time for psychologists and therapists to re-evaluate much of what they thought they knew about therapy: about what works, what doesn’t, and whether CBT has really consigned the cliche of the chin-stroking shrink – and with it, Freud’s picture of the human mind – to history. The impact of such a re-evaluation could be profound; eventually, it might even change how millions of people around the world are treated for psychological problems.

by Oliver Burkeman, The Guardian |  Read more:
Image: Peter Gamlen

Art nouveau pendant, early 1900s
via:

French woman, 1920s
via:

Human-Animal Chimeras Are Gestating on U.S. Research Farms

Braving a funding ban put in place by America’s top health agency, some U.S. research centers are moving ahead with attempts to grow human tissue inside pigs and sheep with the goal of creating hearts, livers, or other organs needed for transplants.

The effort to incubate organs in farm animals is ethically charged because it involves adding human cells to animal embryos in ways that could blur the line between species.

Last September, in a reversal of earlier policy, the National Institutes of Health announced it would not support studies involving such “human-animal chimeras” until it had reviewed the scientific and social implications more closely.

The agency, in a statement, said it was worried about the chance that animals’ “cognitive state” could be altered if they ended up with human brain cells.

The NIH action was triggered after it learned that scientists had begun such experiments with support from other funding sources, including from California’s state stem-cell agency. The human-animal mixtures are being created by injecting human stem cells into days-old animal embryos, then gestating these in female livestock. (...)

The experiments rely on a cutting-edge fusion of technologies, including recent breakthroughs in stem-cell biology and gene-editing techniques. By modifying genes, scientists can now easily change the DNA in pig or sheep embryos so that they are genetically incapable of forming a specific tissue. Then, by adding stem cells from a person, they hope the human cells will take over the job of forming the missing organ, which could then be harvested from the animal for use in a transplant operation.

“We can make an animal without a heart. We have engineered pigs that lack skeletal muscles and blood vessels,” says Daniel Garry, a cardiologist who leads a chimera project at the University of Minnesota. While such pigs aren’t viable, they can develop properly if a few cells are added from a normal pig embryo. Garry says he’s already melded two pigs in this way and recently won a $1.4 million grant from the U.S. Army, which funds some biomedical research, to try to grow human hearts in swine.

Because chimeras could provide a new supply of organs for needy patients and also lead to basic discoveries, researchers including Garry say they intend to press forward despite the NIH position. In November, he was one of 11 authors who published a letter criticizing the agency for creating “a threat to progress” that “casts a shadow of negativity” on their work.

The worry is that the animals might turn out to be a little too human for comfort, say ending up with human reproductive cells, patches of human hair, or just higher intelligence. “We are not near the island of Dr. Moreau, but science moves fast,” NIH ethicist David Resnik said during the agency’s November meeting. “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.”

Hiromitsu Nakauchi, a stem-cell biologist at Stanford University, began trying to make human-sheep chimeras this year. He says that so far the contribution by human cells to the animals’ bodies appears to be relatively small. “If the extent of human cells is 0.5 percent, it’s very unlikely to get thinking pigs or standing sheep,” he says. “But if it’s large, like 40 percent, then we’d have to do something about that.”

by Antonio Regalado, MIT Technology Review | Read more:
Image: Ping Zhu

So Long, and Thanks for All the Fish


On January 5th, in a pre-dawn ritual going back decades, a handbell rang to mark the year’s first auction at Tsukiji, Tokyo’s sprawling fish market. The star attraction was a glistening 200kg tuna, sold to a sushi restaurant chain for ¥14m ($118,000). But the sale was tinged with nostalgia and even bitterness. This time next year the wholesale market, the world’s busiest, will be gone.

Squeezed between the Sumida river and the Ginza shopping district, Tsukiji is creaking at the seams. Some 60,000 people work under its leaky roof, and hundreds of forklifts, carrying everything from sea urchins to whale meat, careen across bumpy floors. The site’s owner, the city government, wants it moved.

That is unpopular. Traders resent being yanked to a sterile new site to the south. The new market is being built on a wharf whose soil is contaminated by the toxic effluent from a former gasworks. The clean-up and negotiations delayed the move for over a decade.

The final blow was Tokyo’s successful bid to host the 2020 Olympics. A new traffic artery will cut through Tsukiji, transporting visitors to the games’ venues. Part of the site will become a temporary press centre, says Yutaka Maeyasui, the executive in charge of shifting the market. “Our time is up,” he says, glancing around his decrepit office. The site has become too small, old and crowded. An earthquake could bring the roof down.

by The Economist |  Read more:
Image: uncredited

Thursday, January 7, 2016

After Capitalism

Where we're going we don't need roads

How will it end? For centuries even the most sanguine of capitalism’s theorists have thought it not long for this world. Smith, Ricardo, and Mill pointed to a “falling rate of profit” linked to inevitable declines in agricultural productivity. Marx applied the same concept to industrial production, suggesting that the tendency to replace workers with machines would lead to a chronic and insurmountable lack of demand. Sombart saw the restive adventurousness of capitalism as the key to its success—and, ultimately, its failure: though the appearance of new peripheries had long funneled profits back to the center, the days of “stout Cortez” had ended and there would one day be no empires or hinterlands to subdue.

Schumpeter was the gloomiest of all. He opened a chapter titled “Can Capitalism Survive?” (in his Capitalism, Socialism, and Democracy) with the definitive answer, “No. I do not think it can.” Inspired by Marx, he imagined that the very success of capitalism—the creation of large enterprises through continuous innovation—would lead to profound fatigue as innovation came to be merely routine, and the bourgeoisie turned its attention toward the banalities of office life: “Success in industry and commerce requires a lot of stamina, yet industrial and commercial activity is essentially unheroic in the knight’s sense—no flourishing of swords about it, not much physical prowess, no chance to gallop the armored horse into the enemy, preferably a heretic or heathen — and the ideology that glorifies the idea of fighting for fighting’s sake and of victory for victory’s sake understandably withers in the office among all the columns of figures.” He foresaw a world in which intellectuals, a marginalized and unhappy lot, would turn their discontent into politics and lead the discontented castoffs of capitalism toward socialism.

These predictions, however, failed to describe what was actually happening with capitalism in the 20th century. By the 1980s people had turned toward a different proposition of Schumpeter’s: that competition “from the new commodity, the new technology, the new source of supply, the new type of organization” was the source of dynamism in a swiftly growing economy. For Schumpeter, the crises of capitalism were signs not of the system’s debility but of its secret health. Business cycles were zesty, violent guarantees of continued growth. Monopolies were only temporary and could be broken up by the “perennial gale of creative destruction.” When in the 1960s and ’70s the otherwise impregnable position of American industry was broken by competition from Germany and Japan, Schumpeter seemed prescient. The response of corporations in the 1980s—enormous mergers, leveraged buyouts, union busting, corporate raiding, mass layoffs, and upward redistribution of wealth—seemed almost to be taking his words as prescriptive.

But while the economy has been dynamic, it has not been healthy. Several crashes later, the gloom has returned, and the signs of autumn are once again most recognizable in the pronouncements of free-market capitalism’s erstwhile boosters. In the past year, many have taken up Larry Summers’s remark that we have entered a period of “secular stagnation,” marked by persistent and slow growth worldwide. Fiscal austerity is general, taxes remain low, and debt levels continue to rise—which means that Western countries, by selling treasury bonds to the rich through capital markets, are actually paying their elites in bond yields to avoid having to go through the politically impossible process of taxing them. Absent any political recourse to countercyclical fiscal policy, central banks in the US, the Eurozone, and Japan have kept interest rates low and pumped trillions of dollars of fiat money into the financial system, keeping banks and dot-com companies liquid and driving the rich to put their money into the condos now flooding Manhattan, all while leaving median wages pleasantly low. It’s kept things humming along, but not much more than that. Fear courses through the veins of the free-marketers, who recognize that all is not well with the system they love.

One form that such worry takes is that robots are coming to take our jobs. From The Second Machine Age to Rise of the Robots, a new wave of technofuturists predicts that most manufacturing and a good deal of white-collar work in “services” can and will be subject to automation. The special force of the technofuturists’ predictions today lies in the fact that many of us read their work on devices we carry in our pockets that have already destroyed jobs, or at least made them more precarious, at newspapers, record companies, travel agencies, taxi services, and even casinos. The statistics they purvey are worrying, among them the fact that the share of workers in global manufacturing is on the decline. China’s share peaked in the 1990s at 15 percent and has decreased since. Dani Rodrik calls this process “premature deindustrialization”: the ability of more and more developing countries to “skip” the usual stages of capital accumulation (mass industrialization accompanied by adding workers in services) by replacing more workers with machines and moving others into services.

The surprise is that a number of prominent left intellectuals have begun to view the idea of automation with equanimity, even optimism. Most prominent among them are the accelerationists, whose widely circulated “Manifesto for an Accelerationist Politics” is the inspiration for a new book, Inventing the Future, by the manifesto’s original authors Nick Srnicek and Alex Williams. Their motto seems to be “I for one welcome our new robot overlords”—for the principle of “accelerationism” is that automation is likely to become general, and so the left needs once and for all to cease imagining that blue-collar unionism and socialist parties will drive us toward communism.

The accelerationists insist that the future will be one in which, thanks to computer-assisted advances in automation, wage labor is a condition guaranteed to very few, and “surplus populations,” already large, will dominate the planet. Prior socialists imagined that victory would come through the workplace; the accelerationists argue that, in the future, the workplace won’t exist in anything like the form we have now, and in any case it will have very few permanent workers. Assuming this position, they ask: What would be the social vision appropriate to a jobless future? What, after the end of working-class socialist dreams, should the left propose?

by The Editors, N+1 |  Read more:
Image: Derek Paul Boyle, Salt and Pennies, 2015

A Sad State of Affairs - Most Americans Are One Paycheck Away From the Street

Whenever I see one of these stories about how little Americans have available for an emergency, my blood starts to boil. I understand that poor people making $25,000 per year are forced to live paycheck to paycheck. But when 63% of all Americans can’t handle a $500 emergency, and 46% of households making over $75,000 can’t handle a $500 emergency, then they are just plain stupid, frivolous, and incapable of distinguishing between wants and needs. Delayed gratification is a trait almost non-existent among Americans today.

The first thing that infuriates me is the assumption that a $500 car repair or house repair is an unexpected emergency. It’s a fucking living expense. It’s not a fucking surprise. Your car will need new tires every few years. That’s $500 or more. Your hot water heater, air conditioner, roof, windows, etc. will need to be replaced. Everyone gets sick. That is not unexpected. Anyone who lives their life as if these expenses are a shocking surprise is a blithering idiot. And this country is crawling with blithering idiots.

So the majority of Americans can’t handle a $500 expense, but for the last two years there have been 35 million new cars “sold” to blithering idiots on credit or leases. Even though they have no money, they decide it’s a brilliant idea to commit to a 7-year payment of $300 to $500 per month on an asset that declines in value rapidly. Morons abound. These are the same people who must have their Starbucks coffee every day. These math-challenged boobs could defer buying a Starbucks coffee every day, save the $3, and accumulate $750 of emergency savings in one year. (...)
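For what it's worth, the coffee arithmetic only comes out to $750 if you assume the $3 habit is skipped on workdays rather than every calendar day; the roughly 250-workday figure is my assumption, not something the piece states. A minimal check of the sums:

```python
# Back-of-the-envelope check of the claim that skipping a $3 coffee
# adds up to $750 of emergency savings in a year.
# Assumption (not stated in the piece): the coffee is a workday habit,
# skipped on ~250 workdays rather than all 365 days.

COFFEE_PRICE = 3.00
WORKDAYS_PER_YEAR = 250

annual_savings = COFFEE_PRICE * WORKDAYS_PER_YEAR
print(f"${annual_savings:.0f} saved per year")  # prints: $750 saved per year
```

Skipping the coffee all 365 days would come to about $1,095, so the $750 figure implicitly assumes the workday-only habit.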

Quentin Fottrell: Most Americans are one paycheck away from the street:

Approximately 63% of Americans have no emergency savings for things such as a $1,000 emergency room visit or a $500 car repair, according to a survey released Wednesday of 1,000 adults by personal finance website Bankrate.com, up slightly from 62% last year. Faced with an emergency, they say they would raise the money by reducing spending elsewhere (23%), borrowing from family and/or friends (15%) or using credit cards to bridge the gap (15%).

This lack of emergency savings could be a problem for millions of Americans. More than four in 10 Americans either experienced a major unexpected expense over the past 12 months or had an immediate family member who had an unexpected expense, Bankrate found. (The survey didn’t specify the impact of that expense.) “Without emergency savings, you may not have money to cover needed home repairs,” says Signe-Mary McKernan, senior fellow and economist at the Urban Institute, a nonprofit organization that focuses on social and economic policy. “Similarly, without emergency savings, people could raid their retirement account.”

The findings are strikingly similar to two other reports, one a U.S. Federal Reserve survey of more than 4,000 adults released in 2014. “Savings are depleted for many households after the recession,” it found. Among those who had savings prior to 2008, 57% said they’d used up some or all of their savings in the Great Recession and its aftermath. And another survey of 1,000 adults released last year by personal finance website GOBankingRates.com found that most Americans (62%) have less than $1,000 in their savings account (although that doesn’t include retirement or other investment accounts).

Why aren’t people saving? Millions of Americans are struggling with student loans, medical bills and other debts, says Andrew Meadows, a San Francisco-based producer of “Broken Eggs,” a documentary about retirement. Central bankers hiked their short-term interest rate target last month to a range of 0.25% to 0.50% from near-zero, but that’s still a small return for savings left in bank accounts. Indeed, personal savings rates as a percentage of disposable income dropped from 11% in December 2012 to 4.6% in August 2015, according to the Bureau of Economic Analysis, and now hover at 5.5%.

More money and education can help. The latest Bankrate survey found that savings increased with income and education: just 46% of the highest-income households ($75,000-plus per year) and 52% of college graduates lack enough savings to cover a $500 car repair or $1,000 emergency room visit. And while those figures could still be lower, Americans are willing to cut back on at least some expenses when money is tight: 58% say they’re “very/somewhat” likely to cut back on eating out or to trim their cable bill, 41% are likely to spend less on coffee at places like Starbucks, and 39% will seek out cheaper cellphone plans.

by Jim Quinn, The Burning Platform, and Quentin Fottrell, Market Watch | Read more:
Image: Shutterstock

Saturday, January 2, 2016


[ed. Taking a short break. Enjoy the archives.] [We're back. Thanks, Max.]

[ed. No garnish on the next one, thanks...]

Friday, January 1, 2016


Gordon Welters
via:

What Was Volkswagen Thinking?

One day in 1979, James Burke, the chief executive of Johnson & Johnson, summoned more than 20 of his key people into a room, jabbed his finger at an internal document, and proposed destroying it.

The document was hardly incriminating. Entitled “Our Credo,” its plainspoken list of principles—including a higher duty to “mothers, and all others who use our products”—had been a fixture on company walls since 1943. But Burke was worried that managers had come to regard it as something like the Magna Carta: an important historical document, but hardly a tool for modern decision making. “If we’re not going to live by it, let’s tear it off the wall,” Burke told the group, using the weight of his office to force a debate. And that is what he got: a room full of managers debating the role of moral duties in daily business, and then choosing to resuscitate the credo as a living document.

Three years later, after reports emerged of a deadly poisoning of Tylenol capsules in Chicago-area stores, Johnson & Johnson’s reaction became the gold standard of corporate crisis response. But the company’s swift decisions—to remove every bottle of Tylenol capsules from store shelves nationwide, publicly warn people not to consume its product, and take a $100 million loss—weren’t really decisions. They flowed more or less automatically from the signal sent three years earlier. Burke, in fact, was on a plane when news of the poisoning broke. By the time he landed, employees were already ordering Tylenol off store shelves.

On the face of it, you’d be hard-pressed to find an episode less salient to the emissions-cheating scandal at Volkswagen—a company that, by contrast, seems intent on poisoning its own product, name, and future. But although the details behind VW’s installation of “defeat devices” in its vehicles are only beginning to trickle out, the decision process is very likely to resemble a bizarro version of Johnson & Johnson’s, with opposite choices every step of the way.

The sociologist Diane Vaughan coined the phrase the normalization of deviance to describe a cultural drift in which circumstances classified as “not okay” are slowly reclassified as “okay.” In the case of the Challenger space-shuttle disaster—the subject of a landmark study by Vaughan—damage to the crucial O‑rings had been observed after previous shuttle launches. Each observed instance of damage, she found, was followed by a sequence “in which the technical deviation of the [O‑rings] from performance predictions was redefined as an acceptable risk.” Repeated over time, this behavior became routinized into what organizational psychologists call a “script.” Engineers and managers “developed a definition of the situation that allowed them to carry on as if nothing was wrong.” To clarify: They were not merely acting as if nothing was wrong. They believed it, bringing to mind Orwell’s concept of doublethink, the method by which a bureaucracy conceals evil not only from the public but from itself.

If that comparison sounds overwrought, consider the words of Denny Gioia, a management professor at Penn State who, in the early 1970s, was the coordinator of product recalls at Ford. At the time, the Ford Pinto was showing a tendency to explode when hit from behind, incinerating passengers. Twice, Gioia and his team elected not to recall the car—a fact that, when revealed to his M.B.A. students, goes off like a bomb. “Before I went to Ford I would have argued strongly that Ford had an ethical obligation to recall,” he wrote in the Journal of Business Ethics some 17 years after he’d left the company. “I now argue and teach that Ford had an ethical obligation to recall. But, while I was there, I perceived no strong obligation to recall and I remember no strong ethical overtones to the case whatsoever.”

What, Gioia the professor belatedly asked, had Gioia the auto executive been thinking? The best answer, he concluded, is that he hadn’t been. Executives are bombarded with information. To ease the cognitive load, they rely on a set of unwritten scripts imported from the organization around them. You could even define corporate culture as a collection of scripts. Scripts are undoubtedly efficient. Managers don’t have to muddle through each new problem afresh, Gioia wrote, because “the mode of handling such problems has already been worked out in advance.” But therein lies the danger. Scripts can be flawed, and grow more so over time, yet they discourage active analysis. Based on the information Gioia had at the time, the Pinto didn’t fit the criteria for recall that his team had already agreed upon (a clearly documentable pattern of failure of a specific part). No further thought necessary.

Sometimes a jarring piece of evidence does intrude, forcing a conscious reassessment. For Gioia, it was the moment he saw the charred hulk of a Pinto at a company depot known internally as “The Chamber of Horrors.” The revulsion it evoked gave him pause. He called a meeting. But nothing changed. “After the usual round of discussion about criteria and justification for recall, everyone voted against recommending recall—including me.”

The most troubling thing, says Vaughan, is the way scripts “expand like an elastic waistband” to accommodate more and more divergence. Morton Thiokol, the NASA contractor charged with engineering the O-rings, requested a teleconference on the eve of the fatal Challenger launch. After a previous launch, its engineers had noticed O-ring damage that looked different from damage they’d seen before. Suspecting that cold was a factor, the engineers saw the near-freezing forecast and made a “no launch” recommendation—something they had never done before. But the data they faxed to NASA to buttress their case were the same data they had earlier used to argue that the space shuttle was safe to fly. NASA pounced on the inconsistency. Embarrassed and unable to overturn the script they themselves had built in the preceding years, Morton Thiokol’s brass buckled. The “no launch” recommendation was reversed to “launch.”

“It’s like losing your virginity,” a NASA teleconference participant later told Vaughan. “Once you’ve done it, you can’t go back.” If you try, you face a credibility spiral: Were you lying then or are you lying now?

by Jerry Useem, The Atlantic |  Read more:
Image: Justin Renteria

Eating ‘Healthy-ish’

Abstinence, we are usually told around this time of year, makes the heart grow stronger. It’s why Dry January, which started in the green and pleasantly alcoholic land of Britain a few years ago before reaching the U.S., is increasingly being touted as a good and worthy thing to do, and why so many people are currently making plans to remove whole food groups from their diet: carbs, fat, Terry’s Chocolate Oranges. The key to health, books and websites and dietitians and former presidents reveal, is a process of elimination. It’s going without. It’s getting through the darkest, coldest month of the year without so much as a snifter of antioxidant-rich Cabernet.

The problem with giving things up, though, is that inevitably it creates a void in one’s diet that only Reese’s Pieces and a family-sized wheel of brie can fill. Then there’s the fact that so many abstinence-espousing programs require spending money on things; on Whole30 cookbooks and Weight Watchers memberships and $10 bottles of bone broth. For a process that supposedly involves cutting things out, there seems to be an awful lot to take in.

This, Michael Pollan posits, is the problem with food: It’s gotten extraordinarily complicated. The writer and sustainable-eating advocate has written several books on how the simple business of eating has become a minefield in which earnest Westerners try to tiptoe around gooey, genetically engineered sugar bombs without setting off an explosion of calories, corn sugar, and cancer. In Defense of Food, published in 2008, offers a “manifesto” for eaters (i.e. humans) that’s breathtaking in its seven-word simplicity: Eat Food. Not Too Much. Mostly Plants. This mantra is repeated once more in a documentary based on the book that airs Wednesday night on PBS, and it’s felt in the January issue of Bon Appetit, which is based almost entirely around the concept of “healthy-ish” eating: “delicious, comforting home cooking that just happens to be kinda good for you.”

Healthy-ish, as a concept, isn’t new. In fact, it’s the food industry’s equivalent of your mom telling you to finish your broccoli before you dive into the Twinkies, only dressed up with a sexy hyphenated coverline and some mouthwatering photos of chicken seared in a cast-iron skillet. “Healthy-ish” shouldn’t feel revolutionary. By its very definition it’s something of a big old foodie shrug—an acknowledgment that if we can’t all subsist on steamed fish and vegetables all of the time, we can at least offset the steak dinner by having salad for lunch. It is, as per Pollan at least, a philosophy that everything is best enjoyed in moderation, including moderation.

So why does it feel so subversive?

The reason, as explained by both manifestations of In Defense of Food, is that industries upon industries, even entire religions, have been predicated on the premise that eating (certain things) is bad and will kill you. The documentary draws on years of food-related quackery to illustrate how ingrained fearing food is. 

by Sophie Gilbert, The Atlantic |  Read more:
Image: Kikim Media

The Good Times

Talking Heads



[ed. Possum Legba]
Image via: 

Thursday, December 31, 2015

The Sacred Child

[ed. I've been reposting a few things from 2015 over the last few days. Here's something from a bit earlier. Given the ubiquity of phone cameras these days it's an issue to think about.]

Goa, India, 2009. A shimmering white beach. Clear blue water, a cloudless sky. The rush of waves and a constant din from jet skis. Behind us: rust-coloured sand, skinny cows browsing among trash and dry bushes.

I'm lounging on the sun bed with a mystery novel and keeping half an eye on my three-year-old daughter, who is sitting in pink swimming pants and playing with a bucket and spade. She is blonde, blue-eyed and unbelievably cute. People here stare at her, ensorcelled, love-struck, touching her hair, pointing at her. The other day the restaurant waiter - stoned? - approached and bit her tenderly on her yummy upper arm. And above all, they want to take her picture. In this country headed headlong into the future - the little dirt track back to the hotel that we walked when we arrived a week ago has already been tarred over with asphalt - every Indian seems to have a camera phone. Often they ask me, or more rarely my wife, civilly if they may take a picture. Having been brought up on Swedish pedagogy, I relay the question to my daughter: "Is it OK for you if they take your picture?" I guess I think it's her decision.

A well-dressed, slender Indian man in white pants and shirt wanders past on the beach. He smiles and coos at the playing Swedish child and takes out his cell phone. My sister-in-law is already there and asks my daughter, who says no. The man pays no attention and takes the pictures anyway.

My daughter is clearly stressed and uneasy with the situation: the strange man standing before her with his phone, photographing her, laughing lightly. My sister-in-law tells him off sharply: "Please! No!" He pays no mind and takes some more pictures.

I run down to the water and confront the man. "You respect my daughter!" I yell repeatedly. He apologises, looks nervous, says something in Hindi that I don't understand and points at his phone, as if showing that hey, he just took some pictures, what's the harm? He hurries away.

One of the beach guards soon catches up with him and takes the phone, clearly in order to flip through the photo folder. The man, by now visibly sweating and piteous, explains and gesticulates to the grim guard. Apparently there is nothing on the phone to suggest that the man is a sex tourist or pedophile, as he soon gets his phone back and slips off.

I sit back heavily on the sun bed. Conflicting emotions. I feel indignant and aggrieved - dammit, I should have thrown that phone into the sea, would have served that perv right. Uncertain - OK, he shouldn't have done that, but what if he's really just an everyday Indian guy who loves to see European kids on the beach and wanted a lovely holiday souvenir? Is that really such a big deal?

No more strangers take any pictures of my daughter on the trip. I stop letting her decide. I just say no, categorically. Her image becomes untouchable. Her likeness becomes sacred.

I should perhaps begin with the disclaimer we all seem forced to start with when we talk about this issue. To wit: I hate everything about child molestation. I hate pedophiles, child porn, all the dirt and darkness and nauseating shit those awful people do. I have two little daughters and I'm prepared to kill or die to protect them against that kind of evil.

This is not actually an essay on child pornography, at least not if we take that to mean images of children being sexually abused, images that could not exist unless children had been violated, defiled, victimised. But in 2011, in Sweden, that is not the definition of child pornography. Instead there is a boundary zone between images that are OK (legitimate though potentially provocative) and those that are a crime to produce, disseminate and possess. That gray zone raises a number of difficult questions about children, art, society and sexuality. Those questions have rarely been more topical than today, and they touch upon the most personal, forbidden and sacred of issues.

Biddick Hall, north-east England, 1976. This time the three-year-old's name is Rosie Bowdrey. Photographer Robert Mapplethorpe is a guest at the wealthy family's garden party, the sun beats down and he takes innumerable pictures. Rosie has been swimming and runs around in the nude; her mother hurriedly gets the child into a dress. She sits down, a little huffily, on a stone bench. Mapplethorpe takes a picture, probably using his new Hasselblad. Then the skirt comes off again.

34 years later this picture is considered the single most controversial work in Mapplethorpe's oeuvre. We're dealing with an artist who, later in life, took pictures of BDSM, of coprophagy, sexually charged images of African American men, pictures of himself with a bull whip up his posterior. But the picture where the genitals of a three-year-old can be made out is worse. Wherever "Rosie" has been shown, it has soon been taken down again, most recently in November 2010 at Bukowski's fine-arts auction house in Stockholm.

It makes no difference that Rosie's mother, Lady Beatrix Nevill, signed a release for the image, stating that she does not find it pornographic and that she wants it to be exhibited. It makes no difference that Rosie Bowdrey herself, now an adult, has said that she is proud of the picture, that she can't see how anyone would find it pornographic, and that she wants it to be exhibited. It makes no difference that nothing suggests that Mapplethorpe, who incidentally was gay, had any sexual interest in little girls.

Who is eroticising the child in the picture? The photographer - or the viewer?

Because at the same time: isn't there something erotic about that image? Or what? About the large luminous eyes, about the sullen mouth with its slightly drooping corners? Something like posing, provocative, that we recognise from a thousand sexually explicit or implicit pictures of adult women? Or what? What do you think?

People in art circles rarely condemn a work of art; more commonly one will encounter a "permissive" attitude to the sphere of aesthetics where anything smacking of censorship will be loudly decried. Thus it is interesting to note mystery novelist Mons Kallentoft writing on his blog that the image goes "way, way across the boundary to child porn" and noting with pleasure that this time "the alarm bells" had worked. "It's never ever right to eroticise a child, not even for the most self-aggrandising, priggish artistic purposes", he added. When I reach Kallentoft on the phone he is at first happy to develop his thoughts further.

"The girl in the picture can't choose, she's being watched. There are people on Earth who get turned on by pictures like these, and that constitutes abuse against her no matter how you shake it. Nobody has that right."

But as an adult, the girl in that picture has said that she doesn't view it as pornographic?

"It doesn't work that way. That's like saying that with consent, we're allowed to do whatever we like to each other, and we might as well sign contracts permitting others to murder us ... That picture is child porn and exhibiting it to the public is wrong! I mean sure, OK, you can keep it to yourself in your home."

So would the image be acceptable if it sat in somebody's photo album - where pictures of nude kids are pretty common?

Our interview takes a left turn here. Mons Kallentoft is very upset by my question, or by my matter-of-fact and slightly impersonal way of phrasing it. He asks me if I have experienced any sexual abuse against children. Before I can answer, he angrily declares that he isn't willing to intellectualise this issue further and abruptly ends our conversation.

I feel bad about this, like a cynical and superficial asshole. Somebody who is happy to sit in a comfy desk chair under pleasant lighting with a cup of tea and soft music in the background, writing about this issue as if it were all about aesthetics - while in fact we're talking about children's lives being ruined, children being violated and defiled in unimaginable ways. Do we even have the right to a lukewarm analytical attitude regarding an issue where the stakes are so high?

I don't want to use a colleague and fellow human being's emotional reaction as a rhetorical tool or pedagogical example, but Kallentoft's reaction really shows me how fraught, personal and painful this issue can be. And suddenly I also think I have gained a deeper understanding of how devout Christians or Muslims feel about pictures such as Elisabeth Ohlson Wallin's Ecce Homo or Lars Vilks's Mohammed cartoons. It's such a gross violation that it's impossible to speak rationally about it, a violation that can only get worse when some uncomprehending, disrespectful bastard asks why you feel violated.

Suddenly I understand better how difficult it is to get anywhere when it comes to things that touch the depths of our souls. How much really is at stake.

by Jens Liljestrand, Aardvarcheology | Read more:
Image: via:
Repost