Tuesday, November 3, 2015

Death Rates Rising for Middle-Aged White Americans

Something startling is happening to middle-aged white Americans. Unlike every other age group, unlike every other racial and ethnic group, unlike their counterparts in other rich countries, death rates in this group have been rising, not falling.

That finding was reported Monday by two Princeton economists, Angus Deaton, who last month won the 2015 Nobel Memorial Prize in Economic Science, and Anne Case. Analyzing health and mortality data from the Centers for Disease Control and Prevention and from other sources, they concluded that rising annual death rates among this group are being driven not by the big killers like heart disease and diabetes but by an epidemic of suicides and afflictions stemming from substance abuse: alcoholic liver disease and overdoses of heroin and prescription opioids.

The analysis by Dr. Deaton and Dr. Case may offer the most rigorous evidence to date of both the causes and implications of a development that has been puzzling demographers in recent years: the declining health and fortunes of poorly educated American whites. In middle age, they are dying at such a high rate that they are increasing the death rate for the entire group of middle-aged white Americans, Dr. Deaton and Dr. Case found.

The mortality rate for whites 45 to 54 years old with no more than a high school education increased by 134 deaths per 100,000 people from 1999 to 2014.

“It is difficult to find modern settings with survival losses of this magnitude,” wrote two Dartmouth economists, Ellen Meara and Jonathan S. Skinner, in a commentary to the Deaton-Case analysis to be published in Proceedings of the National Academy of Sciences.

“Wow,” said Samuel Preston, a professor of sociology at the University of Pennsylvania and an expert on mortality trends and the health of populations, who was not involved in the research. “This is a vivid indication that something is awry in these American households.”

Dr. Deaton had but one parallel. “Only H.I.V./AIDS in contemporary times has done anything like this,” he said.

by Gina Kolata, NY Times |  Read more:
Image: Ben Solomon

Atsuo (Dazai Atsuo) and S. Riyo
via: here and here

Monday, November 2, 2015

Depression Modern

[ed. Haven't seen this one yet but now I'm intrigued.]

The second season of “The Leftovers,” on HBO, begins with a mostly silent eight-minute sequence, set in a prehistoric era. We hear a crackle, then see red-and-black flames and bodies sleeping around a fire; among them is a pregnant woman, nearly naked. She rises, stumbles from her cave, then squats and pisses beneath the moon—only to be startled by a terrifying rumble, an earthquake that buries her home. When she gives birth, we see everything: the flood of amniotic fluid, the head crowning, teeth biting the umbilical cord. For weeks, she struggles to survive, until finally she dies in agony, bitten by a snake that she pulled off her child. Another woman rescues the baby, her face hovering like the moon. Only then does the camera glide down the river, to where teen-age girls splash and laugh. We are suddenly in the present, with no idea how we got there.

It takes serious brass to start your new season this way: the main characters don’t even show up until midway through the hour. With no captions or dialogue, and no clear link to the first season’s story, it’s a gambit that might easily veer into self-indulgence, or come off as second-rate Terrence Malick. Instead, almost magically, the sequence is ravishing and poetic, sensual and philosophical, dilating the show’s vision outward like a telescope’s lens. That’s the way it so often has been with this peculiar, divisive, deeply affecting television series, Damon Lindelof’s first since “Lost.” Lindelof, the co-creator, and his team (which includes Tom Perrotta, the other co-creator, who wrote the novel on which the show is based; the religious scholar Reza Aslan, a consultant; and directors such as Mimi Leder) persist in dramatizing the grandest of philosophical notions and addressing existential mysteries—like the origins of maternal love and loss—without shame, thus giving the audience permission to react to them in equally vulnerable ways. They’re willing to risk the ridiculous in search of something profound.

At heart, “The Leftovers” is about grief, an emotion that is particularly hard to dramatize, if only because it can be so burdensome and static. The show, like the novel, is set a few years after the Departure, a mysterious event in which, with no warning, two per cent of the world’s population disappears. Celebrities go; so do babies. Some people lose their whole family, others don’t know anyone who has “departed.” The entire cast of “Perfect Strangers” blinks out (though, in a rare moment of hilarity, Mark Linn-Baker turns out to have faked his death). Conspiracy theories fly, people lose their religion or become fundamentalists—and no one knows how to feel. The show’s central family, the Garveys, who live in Mapleton, New York, appear to have lost no one, yet they’re emotionally shattered. Among other things, the mother, Laurie (an amazing Amy Brenneman, her features furrowed with disgust), joins a cult called the Guilty Remnant, whose members dress in white, chain-smoke, and do not speak. They stalk the bereaved, refusing to let anyone move on from the tragedy. Her estranged husband, Kevin (Justin Theroux), the chief of police, has flashes of violent instability; their teen-age children drift away, confused and alarmed.

That’s the plot, but the series is often as much about images (a girl locked in a refrigerator, a dog that won’t stop barking) and feelings (fury, suicidal alienation) as about events; it dives into melancholy and the underwater intensity of the grieving mind without any of the usual relief of caperlike breakthroughs. Other cable dramas, however ambitious, fuel themselves on the familiar story satisfactions of brilliant iconoclasts taking risks: cops, mobsters, surgeons, spies. “The Leftovers” is structured more like explorations of domestic intimacy such as “Friday Night Lights,” but marinated in anguish and rendered surreal. The Departure itself is a simple but highly effective metaphor. In the real world, of course, people disappear all the time: the most ordinary death can feel totally bizarre and inexplicable, dividing the bereaved as often as it brings them closer. But “The Leftovers” is more expansive than that, evoking, at various moments, New York after 9/11, and also Sandy Hook, Charleston, Indonesia, Haiti, and every other red-stringed pin on our pre-apocalyptic map of trauma. At its eeriest, the show manages to feel both intimate and world-historical: it’s a fable about a social catastrophe threaded into the story of a lacerating midlife divorce.

The first season of “The Leftovers” rose and fell in waves: a few elements (like a plot about the Garveys’ son, who becomes a soldier in a separate cult) felt contrived, while others (especially the violent clashes between the Guilty Remnant and the bereaved residents of Mapleton) were so raw that the show could feel hard to watch. But halfway through Season 1 “The Leftovers” spiked into greatness, with a small masterpiece of an episode. In “Guest,” a seemingly minor character named Nora Durst (Carrie Coon), a Mapleton resident who has the frightening distinction of having lost her entire family—a husband and two young children—stepped to the story’s center. In one hour, we learned everything about her: what she does for work (collects “survivor” questionnaires for an organization searching for patterns), what she does at home (obsessively replaces cereal boxes, as if her family were still alive), and what she does for catharsis (hires a prostitute to shoot her in the chest while she’s wearing a bulletproof vest). She travels to New York for a conference, where her identity gets stripped away in bizarre fashion. But, as with that prehistoric opener, the revelations are delivered through montages, which drag, then speed up, revealing without overexplaining, grounded in Coon’s brilliantly unsentimental, largely silent performance. When the episode was over, I was weeping, which happens a lot with “The Leftovers.” It may be the whole point.

by Emily Nussbaum, New Yorker | Read more:
Image: Emiliano Ponzi 

A 'Huge Milestone' in Cancer Treatment

A new cancer treatment strategy is on the horizon that experts say could be a game-changer and spare patients the extreme side effects of existing options such as chemotherapy.

Chemotherapy and other current cancer treatments are brutal, scorched-earth affairs that work because cancer cells are slightly – but not much – more susceptible to the havoc they wreak than the rest of the body. Their side effects are legion, and in many cases horrifying – from hair loss and internal bleeding to chronic nausea and even death.

But last week the Food and Drug Administration (FDA) for the first time approved a single treatment that can intelligently target cancer cells while leaving healthy ones alone, and simultaneously stimulate the immune system to fight the cancer itself.

The treatment, which is called T-VEC (for talimogene laherparepvec) but will be sold under the brand name Imlygic, uses a modified virus to hunt cancer cells in what experts said was an important and significant step in the battle against the deadly disease.

It works by injecting a specially modified form of the herpes virus directly into a tumour – specifically skin cancer, the indication for which the drug has been cleared.

It was developed by the Massachusetts-based biotech company BioVex, which was acquired in 2011 by biotech behemoth Amgen for $1bn. The genetic code of the virus – which was originally taken from the cold sore of a BioVex employee – has been modified so it can kill only cancer cells.

Cancer-hunting viruses have long been thought of as a potential source of a more humane and targeted treatment for cancer. Unlike current oncological treatments like chemotherapy and radiotherapy, which kill cancer cells but also damage the rest of the body, viruses can be programmed to attack only the cancer cells, leaving patients to suffer the equivalent of just a day or two’s flu.

Treatments such as Imlygic have two modes of action: first, the virus directly attacks the cancer cells; and second, it triggers the body’s immune system to attack the rogue cells too once it detects the virus’s presence.

Dr Stephen Russell, a researcher at the Mayo Clinic who specialises in oncolytic virotherapy – as these treatments are known – says that the FDA’s clearance of Imlygic represents “a huge milestone” in cancer treatment development.

Viruses are “nature’s last untapped bioresource”, Russell said. Officially, Imlygic’s effect in its clinical studies was fairly modest – an average lifespan increase of less than five months. But underneath that data, Russell said anecdotally that in his Mayo Clinic studies in mice, some programmable viruses had resulted in “large tumours completely disappearing”.

The goal, he said, was to get to the point where the clinical trials would see similarly dramatic outcomes, so that chemotherapy and radiotherapy could finally be consigned to medical history.

by Nicky Woolf, The Guardian |  Read more:
Image: Stocktrek Images, Inc./Alamy Stock Photo

The Folklore Of Our Times

I was born in 1949. I started high school in 1963 and went to college in 1967. And so it was amid the crazy, confused uproar of 1968 that I saw in my otherwise auspicious twentieth year. Which, I guess, makes me a typical child of the sixties. It was the most vulnerable, most formative, and therefore most important period in my life, and there I was, breathing in deep lungfuls of abandon and quite naturally getting high on it all. I kicked in a few deserving doors - and what a thrill it was whenever a door that deserved kicking in presented itself before me, as Jim Morrison, the Beatles and Bob Dylan played in the background. The whole shebang.

Even now, looking back on it all, I think that those years were special. I'm sure that if you were to examine the attributes of the time one by one, you wouldn't discover anything all that noteworthy. Just the heat generated by the engine of history, that limited gleam that certain things give off in certain places at certain times - that and a kind of inexplicable antsiness, as if we were viewing everything through the wrong end of a telescope. Heroics and villainy, rapture and disillusionment, martyrdom and revisionism, silence and eloquence, etcetera, etcetera... the stuff of any age. Only, in our day - if you'll forgive the overblown expression - it was all so colourful somehow, so very reach-out-and-grab-it palpable. There were no gimmicks, no discount coupons, no hidden advertising, no keep-'em-coming point-card schemes, no insidious, loopholing paper trails. Cause and effect shook hands; theory and reality embraced with aplomb. A prehistory to high capitalism: that's what I personally call those years.

But as to whether the era brought us - my generation, that is - any special radiance, well, I'm not so sure. In the final analysis, perhaps we simply passed through it as if we were watching an exciting movie: we experienced it as real - our hearts pounded, our palms sweated - but when the lights came on we just walked out of the cinema and picked up where we'd left off. For whatever reason, we neglected to learn any truly valuable lesson from it all. Don't ask me why. I am much too deeply bound up in those years to answer the question. There's just one thing I'd like you to understand: I'm not the least bit proud that I came of age then; I'm simply reporting the facts.

Now let me tell you about the girls. About the mixed-up sexual relations between us boys, with our brand new genitals, and the girls, who at the time were, well, still girls.

But, first, about virginity. In the sixties, virginity held a greater significance than it does today. As I see it - not that I've ever conducted a survey - about 50% of the girls of my generation were no longer virgins by the age of 20. Or, at least, that seemed to be the ratio in my general vicinity. Which means that, consciously or not, about half the girls around still revered this thing called virginity.

Looking back now, I'd say that a large portion of the girls of my generation, whether virgins or not, had their share of inner conflicts about sex. It all depended on the circumstances, on the partner. Sandwiching this relatively silent majority were the liberals, who thought of sex as a kind of sport, and the conservatives, who were adamant that girls should stay virgins until they were married.

Among the boys, there were also those who thought that the girl they married should be a virgin.

People differ, values differ. That much is constant, no matter what the period. But the thing about the sixties that was totally unlike any other time is that we believed that those differences could be resolved.

This is the story of someone I knew. He was in my class during my senior year of high school in Kobe, and, frankly, he was the kind of guy who could do it all. His grades were good, he was athletic, he was considerate, he had leadership qualities. He wasn't outstandingly handsome, but he was good-looking in a clean-cut sort of way. He could even sing. A forceful speaker, he was always the one to mobilise opinion in our classroom discussions. This didn't mean that he was much of an original thinker - but who expects originality in a classroom discussion? All we ever wanted was for it to be over as quickly as possible, and if he opened his mouth we were sure to be done on time. In that sense, you could say that he was a real friend.

There was no faulting him. But then again I could never begin to imagine what went on in his mind. Sometimes I felt like unscrewing his head and shaking it, just to see what kind of sound it would make. Still, he was very popular with the girls. Whenever he stood up to say something in class, all the girls would gaze at him admiringly. Any maths problem they didn't understand, they'd take to him. He must have been twenty-seven times more popular than I was. He was just that kind of guy.

We all learn our share of lessons from the textbook of life, and one piece of wisdom I've picked up along the way is that you just have to accept that in any collective body there will be such types. Needless to say, though, I personally wasn't too keen on his type. I guess I preferred, I don't know, someone more flawed, someone with a more unusual presence. So in the course of an entire year in the same class I never once hung out with the guy. I doubt that I even spoke to him. The first time I ever had a proper conversation with him was during the summer vacation after my freshman year of college. We happened to be attending the same driving school, and we'd chat now and then, or have coffee together during the breaks. That driving school was such a bore that I'd have been happy to kill time with any acquaintance I ran into. I don't remember much about our conversations; whatever we talked about, it left no impression, good or bad.

The other thing I remember about him is that he had a girlfriend. She was in a different class, and she was hands down the prettiest girl in the school. She got good grades, but she was also an athlete, and she was a leader - like him, she had the last word in every class discussion. The two of them were simply made for each other: Mr and Miss Clean, like something out of a toothpaste commercial.

I'd see them around. Every lunch hour, they sat in a corner of the schoolyard, talking. After school, they rode the train home together, getting off at different stations. He was on the soccer team, and she was in the English conversation club. When their extra-curricular activities weren't over at the same time, the one who finished first would go and study in the library. Any free time they had they spent together.

None of us - in my crowd - had anything against them. We didn't make fun of them, we never gave them a hard time; in fact, we hardly paid any attention to them at all. They really didn't give us much to speculate about. They were like the weather - just there, a physical fact. Inevitably, we spent our time talking about the things that interested us more: sex and rock and roll and Jean-Luc Godard films, political movements and Kenzaburo Oe novels, things like that. But especially sex.

OK, we were ignorant and full of ourselves. We didn't have a clue about life. But, for us, Mr and Miss Clean existed only in their Clean world. Which probably means that the illusions we entertained back then and the illusions they embraced were, to some extent, interchangeable.

This is their story. It's not a particularly happy story, nor, by this point in time, is it one with much of a moral. But no matter: it's our story as much as theirs. Which, I guess, makes it a form of cultural history. Suitable material for me to collect and relate here - me, the insensitive folklorist.

by Haruki Murakami, The Guardian |  Read more:
Image: Wikipedia

The Big Bush Question

Poor Jeb. Or I should say, Poor Jeb! (I’m not given to exclamation points, but Jeb! is so magnetic.) It’s unfathomable how he thought that he could run for the Republican nomination without having to wrestle with his brother’s record as president.   [ed. Jeb! has a new slogan: Jeb Can Fix It.  lol]

Soon enough, he was so entangled in the question of whether he would have gone into Iraq, knowing what we know now, that it took him four tries to come up with the currently politically acceptable answer: No. But while the war in Iraq is widely accepted to have been a disastrous mistake, another crucial event during the George W. Bush administration has long been considered unfit for political discussion: President Bush’s conduct, in the face of numerous warnings of a major terrorist plot, in the months leading up to September 11, 2001.

The general consensus seems to have been that the 9/11 attacks were so horrible, so tragic, that to even suggest that the president at the time might bear any responsibility for not taking enough action to try to prevent them is to play “politics,” and to upset the public. And so we had a bipartisan commission examine the event and write a report; we built memorials at the spots where the Twin Towers had come down and the Pentagon was attacked; and that was to be that. And then along came Donald Trump, to whom “political correctness” is a relic of an antiquated, stuffy, political system he’s determined to overwhelm. In an interview on October 16, he violated the longstanding taboo by saying, “When you talk about George Bush—I mean, say what you want, the World Trade Center came down during his time.”

Trump’s comments set up a back and forth between him and Jeb Bush—who, as Trump undoubtedly anticipated, can’t let a blow against him by the frontrunner go by without response—but the real point is that with a simple declaration by Trump, there it was: the subject of George W. Bush’s handling of the warnings about the 9/11 attacks was out there.

Jeb Bush had already left himself open to this charge by saying that his brother had “kept us safe.” Now he has insisted on this as his response to Trump. But the two men were talking about different periods of time. As Jeb Bush said later, “We were attacked, my brother kept us safe.” That’s true enough in Jeb’s framing of the issue as what happened after the attacks—and if one’s concept of safe means fighting two terrible wars whose effects continue to play out in the Middle East; continual reports of terrorist plots and panicked responses to them; invasive searches at airports; and greatly expanded surveillance.

But that’s not the heart of the matter. The heretofore hushed-up public policy question that Trump stumbled into is: Did George W. Bush do what he could have to try to disrupt the terrorist attacks on September 11, 2001? It’s not simply a question of whether he could have stopped the devastation—that’s unknowable. But did he do all he could given the various warnings that al-Qaeda was planning a major attack somewhere on US territory, most likely New York or Washington? The unpleasant, almost unbearable conclusion—one that was not to be discussed within the political realm—is that in the face of numerous warnings of an impending attack, Bush did nothing.

Osama bin Laden was already a wanted man when the Bush administration took office. The Clinton administration had identified him as the prime suspect in the 1998 bombings of two US embassies in Africa, and it took a few steps to capture or kill him that came to naught. Outgoing Clinton officials warned the incoming administration about al-Qaeda, but the repeated and more specific warnings by Richard Clarke, who stayed on from one administration to the next as the chief terrorism adviser, were ignored. In a White House meeting on July 5, 2001, Clarke said, “Something really spectacular is going to happen here, and it’s going to happen soon.” By this time top Bush officials regarded Clarke as a pain, who kept going on about terrorist plots against the US.

But Clarke wasn’t the only senior official sounding an alarm. On July 10, CIA Director George Tenet, having just received a briefing from a deputy that “literally made my hair stand on end,” phoned National Security Adviser Condoleezza Rice to ask for a special meeting at the White House. “I can recall no other time in my seven years as DCI that I sought such an urgent meeting at the White House,” Tenet later wrote in his book, At the Center of the Storm. Tenet and aides laid out for Rice what they described as irrefutable evidence that, as the lead briefer put it at that meeting, “There will be a significant terrorist attack in the coming weeks or months” and that the attack would be “spectacular.” Tenet believed that the US was going to get hit, and soon. But the intelligence authorities, including covert action, that the CIA officials told Rice they needed, and had been asking for since March, weren’t granted until September 17.

Then came the now-famous August 6 Presidential Daily Brief (PDB) intelligence memorandum to the president, headlined, “Bin Laden Determined To Strike in US.” Bush was at his ranch in Crawford, Texas, on what was to be one of the longest summer vacations any president has taken; none of his senior aides was present for the briefing. Rice later described this PDB as “very vague” and “very non-specific” and “mostly historical.” It was only after a great struggle that the 9/11 commission got it declassified and the truth was learned. In its final report, the commission noted that this was the thirty-sixth Presidential Daily Brief so far that year related to al-Qaeda and bin Laden, though the first one that specifically warned of an attack on the US itself.

While the title of the memo has become somewhat familiar, less known are its contents, including the following: “Clandestine, foreign government, and media reports indicate bin Laden since 1997 has wanted to conduct terrorist attacks in the US. Bin Laden implied in U.S. television interviews in 1997 and 1998 that his followers would follow the example of World Trade Center bomber Ramzi Yousef and ‘bring the fighting to America.’” And: “FBI information since that time indicates patterns of suspicious activity in this country consistent with preparations for hijackings or other types of attacks, including recent surveillance of federal buildings in New York.” Having received this alarming warning, the president did nothing.

As August went on, Tenet was so agitated by the chatter he was picking up and Bush’s lack of attention to the matter that he arranged for another CIA briefing of the president later in August, with Bush still at his ranch, to try to draw his attention to what Tenet believed was an impending danger. According to Ron Suskind, in the introduction to his book The One Percent Doctrine, when the CIA agents finished their briefing of the president in Crawford, the president said, “All right. You’ve covered your ass now.” And that was the end of it.

What might a president do upon receiving notice that the world’s number one terrorist was “determined to strike in US”? The most obvious thing was to direct Rice or Vice President Cheney to convene a special meeting of the heads of any agencies that might have information about possible terror threats, and order them to shake their agencies down to the roots to find out what they had that might involve such a plot, then put the information together. As it happened, they had quite a bit:

by Elizabeth Drew, NY Review of Books |  Read more:
Image: David Levine

Catherine Murphy, Still Life with Envelopes, 1976
via:

The Truth About Ninjas

If you do anything for Halloween this weekend, chances are pretty good you might see a child (or an adult) trick-or-treating (or partying) dressed as a ninja. Maybe it’ll be a generic ninja, or maybe a specific one, like a Naruto character or a Ninja Turtle.

Today, ninjas are all around us. They’re in our movies and comics and video games; they’re even in our everyday language (“I can’t believe you ninja’d that in there at the last second!” “Come join our team of elite code ninjas!”). Far from their origins in medieval Japan, ninjas are now arguably that country’s most famous warrior type. We talk about pirates versus ninjas, after all, not pirates versus samurai.

There’s a huge divergence between historical ninja and the fantasy ninjas of popular culture. For example, everyone knows that ninjas were warriors who stuck to the shadows and never revealed their secrets—yet watch some anime or play a video game and you’re likely to see ninjas portrayed as the flashiest, most conspicuous characters around.

Like a lot of well-known fantasy archetypes, the ninja have a real history, but aside from some basic core attributes, writers and artists around the world feel free to interpret the word however they want.

The two strains of ninja—“real” ninja versus pop-culture ninjas—aren’t as separate as you might assume. In fact, the tension between the two is one of the things I love most about them. Ninjas as we know them today are a complex mixture of historical inspiration and modern imagination, defined by the intersection of two seemingly contradictory identities.

The true story of the ninja is fascinating. The people known today as ninja (they pronounced it “shinobi” then) rose out of small villages in the Iga and Kōga regions of Japan. By necessity, they became experts in navigating and utilizing the resources of the dense mountain forests around them. Because of their relative isolation, they served no lord and ruled themselves through a council of village chiefs. In the Warring States period (c. 1467 – c. 1603), people from these areas frequently found work as spies and agents of espionage, work that made good use of their skills in navigation, observation, and escape.

The villages of Iga and Kōga were eventually attacked by one of the greatest warlords of the era, Oda Nobunaga (an event that forms the loose inspiration for, among many other things, the underrated Neo Geo fighting game Ninja Master’s[sic], by World Heroes developer ADK).

The villages banded together and fought the invading armies with guerilla techniques—techniques enabled by their superior knowledge and mastery of the terrain. That’s pretty much textbook ninja action, right there.

By the end of the Warring States period, the ninja were enfranchised and integrated into the government’s systems of power. Their most famous leader, Hattori Hanzō, received an official salary equivalent to millions of dollars today. He became so much a part of the establishment that they named a gate in the Shogun’s palace after him, and today there’s a train line named after that gate: Tokyo’s Hanzōmon Line.

Serious researchers and students of ninja history and practice often take pains to remind us that the real-life ninjas they study were decidedly not cartoon characters. The real story of the ninja, they often say, is better than anything that’s been made up about them. That’s true in some sense: the history of the ninja is definitely worth understanding. It weaves together many threads of Japan’s culture, its philosophy, and even its spirituality.

But I have to admit: I love the goofy pop-culture version of ninjas, too.

by Matthew S. Burns, Kotaku | Read more:
Image: uncredited

Sunday, November 1, 2015


Enlightened by Chiara Tocci
via:

Noodle
via:

The Power of Nudges, for Good and Bad

[ed. See also: What It Really Means To Be A "Nudge"... the rationale for adopting policies designed to make it more likely that people will act in their own best interests rather than, say, spend money they shouldn’t spend or eat food they shouldn’t consume. (...) how recent advances in behavioral science should inform our attitudes towards rational decision making. Specifically, these behavioral science findings show that people don’t always make rational decisions, raising questions about when or whether outsiders—like governments or employers—should step in to help people avoid making bad choices.]

Nudges, small design changes that can markedly affect individual behavior, have been catching on. These techniques rely on insights from behavioral science, and when used ethically, they can be very helpful. But we need to be sure that they aren’t being employed to sway people to make bad decisions that they will later regret.

Whenever I’m asked to autograph a copy of “Nudge,” the book I wrote with Cass Sunstein, the Harvard law professor, I sign it, “Nudge for good.” Unfortunately, that is meant as a plea, not an expectation.

Three principles should guide the use of nudges:

■ All nudging should be transparent and never misleading.

■ It should be as easy as possible to opt out of the nudge, preferably with as little as one mouse click.

■ There should be good reason to believe that the behavior being encouraged will improve the welfare of those being nudged.

As far as I know, the government teams in Britain and the United States that have focused on nudging have followed these guidelines scrupulously. But the private sector is another matter. In this domain, I see much more troubling behavior.

For example, last spring I received an email telling me that the first prominent review of a new book of mine had appeared: It was in The Times of London. Eager to read the review, I clicked on a hyperlink, only to run into a pay wall. Still, I was tempted by an offer to take out a one-month trial subscription for the price of just £1.

As both a consumer and producer of newspaper articles, I have no beef with pay walls. But before signing up, I read the fine print. As expected, I would have to provide credit card information and would be automatically enrolled as a subscriber when the trial period expired. The subscription rate would then be £26 (about $40) a month. That wasn’t a concern because I did not intend to become a paying subscriber. I just wanted to read that one article.

But the details turned me off. To cancel, I had to give 15 days’ notice, so the one-month trial offer actually was good for just two weeks. What’s more, I would have to call London, during British business hours, and not on a toll-free number. That was both annoying and worrying. As an absent-minded American professor, I figured there was a good chance I would end up subscribing for several months, and that reading the article would end up costing me at least £100.

I spoke to Chris Duncan, a spokesman for The Times of London. He said his company wanted readers to call before canceling to make sure that they appreciated the scope of the paper’s coverage, but when I pointed out the inconvenience this posed to readers outside Britain, he said that the company might rethink that aspect of the policy.

In the meantime, that deal qualifies as a nudge that violates all three of my guiding principles: The offer was misleading, not transparent; opting out was cumbersome; and the entire package did not seem to be in the best interest of a potential subscriber, as opposed to the publisher.

by Richard H. Thaler, NY Times | Read more:
Image: Anthony Freda

President Obama & Marilynne Robinson: A Conversation—II


The President: Part of the challenge is—and I see this in our politics—is a common conversation. It’s not so much, I think, that people don’t read at all; it’s that everybody is reading [in] their niche, and so often, at least in the media, they’re reading stuff that reinforces their existing point of view. And so you don’t have that phenomenon of here’s a set of great books that everybody is familiar with and everybody is talking about.

Sometimes you get some TV shows that fill that void, but increasingly now, that’s splintered, too, so other than the Super Bowl, we don’t have a lot of common reference points. And you can argue that that’s part of the reason why our politics has gotten so polarized, is that—when I was growing up, if the president spoke to the country, there were three stations and every city had its own newspaper and they were going to cover that story. And that would last for a couple of weeks, people talking about what the president had talked about.

Today, my poor press team, they’re tweeting every two minutes because some new thing has happened, which then puts a premium on the sensational and the most outrageous or a conflict as a way of getting attention and breaking through the noise—which then creates, I believe, a pessimism about the country because all those quiet, sturdy voices that we were talking about at the beginning, they’re not heard.

It’s not interesting to hear a story about some good people in some quiet place that did something sensible and figured out how to get along.

Robinson: I think that in our earlier history—the Gettysburg Address or something—there was the conscious sense that democracy was an achievement. It was not simply the most efficient modern system or something. It was something that people collectively made and they understood that they held it together by valuing it. I think that in earlier periods—which is not to say one we will never return to—the president himself was this sort of symbolic achievement of democracy. And there was the human respect that I was talking about before, [that] compounds itself in the respect for the personified achievement of a democratic culture. Which is a hard thing—not many people can pull that together, you know…. So I do think that one of the things that we have to realize and talk about is that we cannot take it for granted. It’s a made thing that we make continuously. (...)

Robinson: It’s amazing. You know, when I go to Europe or—England is usually where I go—they say, what are you complaining about? Everything is great. [Laughter.] I mean, really. Comparisons that they make are never at our disadvantage.

The President: No—but, as I said, we have a dissatisfaction gene that can be healthy if harnessed. If it tips into rage and paranoia, then it can be debilitating and just be a self-fulfilling prophecy, because we end up blocking progress in serious ways.

Robinson: Restlessness of, like, why don’t we do something about this yellow fever? There’s generous restlessness.

The President: That’s a good restlessness.

Robinson: Yes, absolutely. And then there is a kind of acidic restlessness that—

The President: I want more stuff.

Robinson: I want more stuff, or other people are doing things that I’m justified in resenting. That sort of thing.

The President: Right.

Robinson: I was not competing with anyone else. Nobody knew what my project was. I didn’t know what it was. But what does freedom mean? I mean, really, the ideal of freedom if it doesn’t mean that we can find out what is in this completely unique being that each one of us is? And competition narrows that. It’s sort of like, you should not be studying this; you should be studying that, pouring your life down the siphon of economic utility.

The President: But doesn’t part of that depend on people having different definitions of success, and that we’ve narrowed what it means to be successful in a way that makes people very anxious? They don’t feel affirmed if they’re good at something that the society says isn’t that important or doesn’t reward.

by Barack Obama and Marilynne Robinson, NY Review of Books | Read more:
Image: Pete Souza/White House

Saturday, October 31, 2015

Sex and Drugs and Rock'n'Roll Insurance

A day after Katy Perry tweeted she had just completed her 151-date Prismatic world tour and that it was “only By The Grace Of God that I made each & every one of them”, One Direction had to cancel their show in Belfast at the last minute due to Liam Payne falling ill.

Insurers and underwriters looking at Perry’s next tour will regard it as low risk. But they will be keeping a closer eye on One Direction, even though the show was quickly rescheduled, and mentally reworking the numbers if more shows get cancelled. Since record sales started to tumble 15 years ago, touring has become the way that most acts make a living these days. The numbers are staggering. Taylor Swift, for example, is grossing $2.93m per night on her 1989 tour, based on figures published by Billboard. With stakes this high, touring insurance, on the surface an admittedly dry subject, has never been more important.

Acts on the road generally take out three types of insurance: equipment (to protect against damage and theft); public liability (in case an audience member is injured during a show); and non-appearance. The last two are relatively modern developments, but it is non-appearance that is arguably the most critical, especially as tours become longer.

At the start of October, promoter and agent John Giddings spoke at the International Festival Forum and suggested that David Bowie has effectively retired from touring, having performed his last solo British show in 2004 at the Isle of Wight festival (which Giddings runs). There have been rumours that Bowie is not willing to put himself through the exertion of a world tour. Unlike, say, 74-year-old Bob Dylan, who has played between 85 and 112 shows every year this century, Bowie has not played for so long that it could be difficult to insure a tour against cancellations.

David Enthoven, co-founder of management company ie:music, whose biggest client is Robbie Williams, started managing acts in 1969 with EG Records. He says it was the late Willie Robertson, founder of specialist insurance company Robertson Taylor, who invented parts of touring insurance in the 1970s that acts today take as read. “There was certainly no non-appearance insurance then,” says Enthoven of his earliest experiences touring with King Crimson. “I remember [taking it out for the first time] in the mid-1970s for Roxy Music. It was a package that Willie Robertson thought up.”

The reality for most touring acts, as One Direction are finding and as was painfully made clear to Foo Fighters when Dave Grohl broke his leg on stage in Gothenburg in June, is that long tours are fraught with risk. “If you are insuring a 100-date world tour, as far as the insurers are concerned, the likelihood is that at least one of those 100 shows will be cancelled,” explains Paul Twomey, director of entertainment at insurance specialists Doodson Broking Group. “The singer’s voice could deteriorate as the tour goes on and they get more tired.”

Insurance companies regard some cancellations as collateral damage on lengthy tours, and structure their policies around that. “The underwriter could put in a deductible on the policy that means they won’t pay out if one show is cancelled,” says Steven Howell, head of music at Music Insurance Brokers. “So they might add in a one-show or two-show deductible. In a string of 30 shows, if you miss one or two, you are not going to be able to make a claim; but if you miss a third one then you can make a claim.”

The amounts of money at risk can be phenomenal. “For a stadium show, it could be anything up to two million quid,” says Enthoven. But it is not just the income from ticket sales at risk. “For an act like One Direction, they possibly make more money from merchandise than they do for the tickets,” suggests Twomey. So that has to be factored into their policies, which are often taken out at the earliest stages in planning a tour and will only run for as long as the tour lasts. “They are not annual policies, like car insurance, where you rack up year after year of no claims,” says Howell. “It is very specific to the life and health of the individual or the band members that you are insuring.” (...)

For a small act playing back rooms of pubs, insurance may be seen as a luxury they can rarely afford. Once you get to a certain level, however, the stakes become so high that it would be reckless to consider scrimping on insurance.

“You have to weigh up how expensive it is to go on the road,” says Niamh Byrne of Eleven Management, who represent Blur. “In Blur’s instance there is a significant cost to putting the show on the road as the band likes to give a lot and make every show special. Rehearsals, crew, equipment hire, production rehearsals, strings, brass, guests – that all costs money. If a show doesn’t happen then you are going to be in the hole for a significant amount of money.”

Ahead of the tour, brokers will be appointed to cost up and take out insurance policies. Part of that will be based on the act’s touring history – or, more specifically, their cancellation history. If they keep missing shows then their premium will rise exponentially. Byrne takes pride in Blur’s clean record, which makes their touring insurance relatively straightforward. “We have a band with an incredible work ethic,” she says. “Even if anyone is ill, they have always managed to be able to perform.”

Twomey adds that most insurance policies will only cover the key members of the band as, frankly, no one is going to be disappointed or ask for their money back if the third trombonist misses a show. “You have to look at if those members are changeable,” he says.

Byrne puts it more bluntly. “Non-appearance insurance is only for the people who are necessary to perform,” she says. “If, say, the sound engineer is ill, that doesn’t necessarily mean you can’t do the show. You can get another sound engineer but you can’t get another Damon Albarn.”

by Eamonn Forde, The Guardian | Read more:
Image: Damon Albarn, Mário Cruz/EPA

Arbitration Everywhere, Stacking the Deck of Justice

On Page 5 of a credit card contract used by American Express, beneath an explainer on interest rates and late fees, past the details about annual membership, is a clause that most customers probably miss. If cardholders have a problem with their account, American Express explains, the company “may elect to resolve any claim by individual arbitration.”

Those nine words are at the center of a far-reaching power play orchestrated by American corporations, an investigation by The New York Times has found.

By inserting individual arbitration clauses into a soaring number of consumer and employment contracts, companies like American Express devised a way to circumvent the courts and bar people from joining together in class-action lawsuits, realistically the only tool citizens have to fight illegal or deceitful business practices.

Over the last few years, it has become increasingly difficult to apply for a credit card, use a cellphone, get cable or Internet service, or shop online without agreeing to private arbitration. The same applies to getting a job, renting a car or placing a relative in a nursing home.

Among the class actions thrown out because of the clauses was one brought by Time Warner customers over charges they said mysteriously appeared on their bills and another against a travel booking website accused of conspiring to fix hotel prices. A top executive at Goldman Sachs who sued on behalf of bankers claiming sex discrimination was also blocked, as were African-American employees at Taco Bell restaurants who said they were denied promotions, forced to work the worst shifts and subjected to degrading comments.

Some state judges have called the class-action bans a “get out of jail free” card, because it is nearly impossible for one individual to take on a corporation with vast resources.

Patricia Rowe of Greenville, S.C., learned this firsthand when she initiated a class action against AT&T. Ms. Rowe, who was challenging a $600 fee for canceling her phone service, was among more than 900 AT&T customers in three states who complained about excessive charges, state records show. When the case was thrown out last year, she was forced to give up and pay the $600. Fighting AT&T on her own in arbitration, she said, would have cost far more.

By banning class actions, companies have essentially disabled consumer challenges to practices like predatory lending, wage theft and discrimination, court records show.

“This is among the most profound shifts in our legal history,” William G. Young, a federal judge in Boston who was appointed by President Ronald Reagan, said in an interview. “Ominously, business has a good chance of opting out of the legal system altogether and misbehaving without reproach.”

More than a decade in the making, the move to block class actions was engineered by a Wall Street-led coalition of credit card companies and retailers, according to interviews with coalition members and court records. Strategizing from law offices on Park Avenue and in Washington, members of the group came up with a plan to insulate themselves from the costly lawsuits. Their work culminated in two Supreme Court rulings, in 2011 and 2013, that enshrined the use of class-action bans in contracts. The decisions drew little attention outside legal circles, even though they upended decades of jurisprudence put in place to protect consumers and employees.

One of the players behind the scenes, The Times found, was John G. Roberts Jr., who as a private lawyer representing Discover Bank unsuccessfully petitioned the Supreme Court to hear a case involving class-action bans. By the time the Supreme Court handed down its favorable decisions, he was the chief justice.

Corporations said that class actions were not needed because arbitration enabled individuals to resolve their grievances easily. But court and arbitration records show the opposite has happened: Once blocked from going to court as a group, most people dropped their claims entirely.

by Jessica Silver-Greenberg and Robert Gebeloff, NY Times |  Read more:
Image: Chief Justice Roberts, Chip Somodevilla/Getty Images

Friday, October 30, 2015

Joe Jackson


Goodnight and Thank You, Grantland

[ed. See also: Talent First]

When ESPN launched Grantland four years ago as a website built around the hyper-popular writer Bill Simmons, predictions of its irrelevance and death abounded. In The Atlantic, Nicholas Jackson predicted that “the new site is doomed,” saying it wouldn’t “be distinct enough to draw the audience it needs.” He was wrong—Grantland quickly became the premier location for intelligent, thoughtful, unique writing on a whole range of subjects in sports and culture, and featured some of the Internet’s best reporting and podcasting, delivered by a staggering lineup of talent on staff. But on Friday, ESPN decided to abruptly pull the plug on the site, months after firing Simmons.

It’s easy to castigate ESPN’s thinking: Simmons left after clashing with management, mostly for calling out his parent company’s coverage of recent NFL scandals. After he was gone, the company didn’t find a permanent successor for the site (instead tapping Chris Connelly as an interim editor-in-chief), and subsequently, much of its deep bench of talent departed, some to a new project being set up by Simmons. Still, there were numerous writers and editors left on staff who heard about their site closing via press release today, though ESPN will apparently honor their contracts.

Grantland was sometimes pigeonholed as a “speciality site” or a “special project,” a prestige undertaking for ESPN that didn’t need to succeed in terms of raw traffic. But by any yardstick, it exceeded expectations. Throughout his tenure, Simmons remained himself: He hosted his super-popular B.S. Report podcast, wrote a weekly column, and occasionally weighed in on aspects of pop culture. But he also showed an eye for fantastic talent and let his staff explore diverse topics well outside of ESPN’s normal purview.

by David Sims, The Atlantic |  Read more:
Image: ESPN