Saturday, January 9, 2016

How the Daily Fantasy Sports Industry Turns Fans Into Suckers

Full disclosure: I am a 36-year-old dude who bores easily, drinks I.P.A.s and wears sports-themed T-shirts, especially ones with faded, nostalgic logos that suggest better times. In my early 20s, I developed a gambling problem that I’ve since learned to spread out over a variety of low-stakes games — Scrabble, pitch-and-putt golf, my stock portfolio on ETrade. I watch somewhere between six and 20 hours of basketball per week. I try to keep up with the usual cultural things — documentaries about conflict in South Sudan, Netflix binge shows, memes — but whenever I find myself awake in the early morning and there is no email to answer and no news to track, I watch SportsCenter, or I scan the previous night’s N.B.A. box scores to check up on Porzingis, or I read some dissertation on Johnny Cueto’s unusual ability to hold runners on first base. It’s not the most glamorous way to spend my time, but what can I do? My mind, at its most aimless, obsessively seeks out sports information. I am, in other words, the target demographic for the daily fantasy sports industry.

Since the start of this N.F.L. season, I have lost roughly $1,900 on DraftKings and FanDuel, the two main proprietors of daily fantasy sports (D.F.S.). I play pretty much every night. This requires me to pick a team of players — whether baseball, basketball, football, hockey or soccer — each of whom has been assigned a dollar amount, and fit them all under a salary cap. I base these lineups on reasonably educated hunches, something to the effect of: I’ll play the Indiana Pacers point guard George Hill tonight, because he’s going up against the New Orleans Pelicans, who have been a defensive train wreck this season, especially on the perimeter. Also, Monta Ellis, Hill’s back-court partner, is sitting out, which means more of the usage load should fall to Hill. Sometimes, usually while walking the dog, I’ll even sit down on a park bench and check to make sure that at least some of those facts are real. My bets range anywhere from $3 to $100. My losses in D.F.S. are not financially crippling, nor are they happening at a rate that should be cause for concern. But every gambler, whatever the size of the problem, wants to know that he or she has some chance of winning.

The ads, I admit, are what got me. For the first 10 months of 2015, DraftKings and FanDuel spent more than a combined $200 million on advertising, a surge that peaked at the start of the football season, when a DraftKings ad ran seemingly every couple of minutes on television. In addition to the ads, many of which showed regular guys like me who had won, in the DraftKings parlance, “a shipload of money,” there were DraftKings lounges in N.F.L. stadiums, FanDuel sidelines in N.B.A. arenas and daily fantasy advice segments in the sports sections of newspapers and all over ESPN, which, during the first weeks of the N.F.L. season, felt as if it had been converted into a nonstop publicity machine for DraftKings. As of August, both companies had billion-dollar valuations and promised weekly competitions with huge payouts and fast and easy withdrawals.

Initially, D.F.S. seemed harmless enough — on Sunday mornings, I would challenge a couple of my friends in California to head-to-head match-ups for $50 and put a few $20 entries into the million-dollar fantasy-football contest. Then, on Sunday, Sept. 27, Ethan Haskell, an employee at DraftKings, inadvertently published information that could have given him an edge over his competitors. That day, Haskell won $350,000 in prizes on FanDuel. (DraftKings later concluded, in an internal review conducted by a former United States attorney, that Haskell obtained the information after the deadline for submitting his lineup in the contest and couldn’t use it for profit.) Haskell’s accidental disclosure and subsequent bonanza caught the attention of several media outlets, including The Times, leading to a volley of articles and columns that placed the operations of daily fantasy sports under close scrutiny.

In the three months that have passed since Haskell’s post, DraftKings and FanDuel have been reeling. In October, Nevada joined Arizona, Iowa, Louisiana, Montana and Washington on the list of states where DraftKings and FanDuel cannot be played. On Nov. 10, Attorney General Eric Schneiderman of New York issued cease-and-desist letters to the two companies, and later filed lawsuits against them. In response, a judge ordered them to stop accepting bets in the state. (The judge’s order has now been stayed, and both companies continue to do business in New York. Last week, Schneiderman asked a judge to order DraftKings and FanDuel to reimburse the money New York State residents have lost on their sites.) On Dec. 23, Lisa Madigan, attorney general for the state of Illinois, released an opinion stating that daily fantasy sports “clearly constitutes gambling.” (The two companies have argued that D.F.S. is a game of skill.) A chill has hit the D.F.S. industry. Prize pools have been steadily declining, and in the eyes of the public and much of the media, D.F.S. has become synonymous with online poker or offshore sports gambling — an industry, in other words, that deserves neither protection nor sympathy.

Since the scandal broke, I have traveled to D.F.S. events, spent dozens of hours playing on DraftKings and FanDuel and talked to players and industry media figures. I initially intended to write an article about the bro culture that had sprouted up around D.F.S., which, from a distance, reminded me of the sweaty, sardonic camaraderie you typically see at high-stakes poker events. At the time, the crusade against D.F.S. felt a few degrees too hot — DraftKings and FanDuel struck me as obvious gambling sites, but the game itself felt sort of like homework. You research players. You build a spreadsheet. You project data and enter a team. You watch the team either fulfill or fall short of your projections. The next day, you start over again. The ruinous thrill of other forms of gambling — sports betting, blackjack, poker — just wasn’t there.

Instead, I came across a different sort of problem: a rapacious ecosystem in which high-volume gamblers, often aided by computer scripts and optimization software that allow players to submit hundreds or even thousands of lineups at a time, repeatedly take advantage of new players, who, after watching an ad, deposit some money on DraftKings and FanDuel and start betting. Both companies mostly looked the other way. And, when evidence of the competitive advantages enjoyed by these high-volume players became too overwhelming for the companies to ignore, DraftKings and FanDuel enacted rules that in the end are likely to protect the high-volume players rather than regulate them. In any case, a stricter ban on computer scripting would have been functionally impossible — because, as a representative of FanDuel told me, D.F.S. companies cannot reliably detect it on their sites.
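What those scripts automate is, at bottom, a small constrained-optimization problem: maximize a lineup’s projected points without going over the salary cap. Below is a minimal sketch in Python of that core mechanic; every player name, salary and projection here is hypothetical, and the real tools layer on position rules, projection models and the ability to spit out hundreds of distinct entries at once.

from itertools import combinations

ROSTER_SIZE = 3      # real contests use 8 or 9 slots, with position rules
SALARY_CAP = 15000   # hypothetical cap for this toy player pool

# (player, salary, projected fantasy points) -- illustrative numbers only
players = [
    ("PG A", 7800, 38.5),
    ("PG B", 6200, 31.0),
    ("SG C", 5400, 27.2),
    ("SG D", 4300, 21.8),
    ("SF E", 3600, 18.9),
    ("SF F", 3000, 14.1),
]

def best_lineup(pool, size, cap):
    # Exhaustively check every legal lineup and keep the highest projection.
    best, best_points = None, float("-inf")
    for lineup in combinations(pool, size):
        salary = sum(p[1] for p in lineup)
        points = sum(p[2] for p in lineup)
        if salary <= cap and points > best_points:
            best, best_points = lineup, points
    return best, best_points

lineup, points = best_lineup(players, ROSTER_SIZE, SALARY_CAP)
print([p[0] for p in lineup], points)

On this toy pool, the search passes over the three highest-projected players, whose combined salaries bust the cap, and lands on a cheaper mix; a high-volume player simply re-runs a much larger version of this search with perturbed projections to mass-produce entries.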

Each company took advantage of language in a federal law that allowed them to plug directly into two huge, overlapping populations — fantasy sports players and gamblers. Each company was able to raise hundreds of millions of dollars in venture capital funding and sponsorships, all of which created pressure to increase user bases, which, in turn, led to an advertising deluge this past fall. It takes years of testing, regulation and outside oversight to create a reliable betting market. But DraftKings and FanDuel rose to prominence — and now, seemingly, have become derailed — in the course of a single N.F.L. regular season.

The betting economy these sites have created is highly unstable and corrupt. One critic I spoke to was Gabriel Harber, a well-known D.F.S. podcaster and writer who goes by the handle CrazyGabey and has worked in the industry since its inception; he has been outspoken about the rampant exploitation in D.F.S.’s betting economy.

“The idea that these sites exist so that regular guys can make a lot of money playing daily fantasy sports is a lie,” Harber told me. “FanDuel and DraftKings are optimized for power players to rape and pillage regular players over and over again.”

by Jay Caspian Kang, NY Times |  Read more:
Image: Getty

Friday, January 8, 2016

The Comfort Food Diaries: All I Want Are Some Potato Skins

We are at a bar somewhere near our office north of Times Square, and all I want are some potato skins, but all they have on the menu are mozzarella sticks and chicken wings and nachos, which are terrible if you ask me, at least compared to potato skins. Potato skins have everything you could ever hope for in a bar food—the crunch of the skin, the pull of the cheddar, the stink of the green onion, the chew of the bacon bits. A plate of potato skins and a pint of cold lager is the best pairing American cuisine has to offer. I mean that. I really do.

The bar is one of those fake Irish pubs that are ubiquitous in Midtown Manhattan. Let's call it Magnus O'Malley O'Sullivan's. It's the kind of place that smells of stale beer, cleaning products, and lacquer. It has no history. It has no lore. There are no regulars. The bartender can't pour a Guinness to save his life. In one of the basement bathrooms, a tourist from Indiana is likely throwing up after one too many Long Island Iced Teas. As a Korn song plays from an iPod, we get two orders of wings and two orders of mozzarella sticks. I can't keep my eyes off the plasma TVs.

We gather here once or twice a week to complain about our jobs. We work at a home improvement magazine, where I serve as an associate editor. I dislike my boss immensely, and he dislikes me just the same. It's the middle of the recession, and we keep having layoffs, but for some reason he never fires me. After each purge, we gather at this bar with our fallen ex-colleagues, and at some point one of them inevitably looks at me and says something like, "I can't believe you made it through." As I said, I dislike my boss immensely, and he dislikes me. Everyone knows he does.

I am in my late thirties, and I am anxious all the time. I take pills for it, but they don't work. I'm convinced I am dying of several diseases, because I have been a hypochondriac ever since I was a freshman in college and mistook two salivary glands under my tongue for cancerous tumors, and I didn't go to the doctor because I was terrified he would tell me I wasn't mistaken at all. I stand outside my office each day, chain-smoking cigarettes and worrying about my health. Creditors keep calling me because I'm tens of thousands of dollars in debt; I can't pay the rent on my Brooklyn apartment anymore. My girlfriend moved out. After work, all I want is a cold lager and some potato skins, because I am convinced they will fix everything. No, they won't pay the rent, but they have their own special powers. (...)

When I was a teenager, my father and I made a point of watching St. Elsewhere together each Wednesday night. It was after my parents' divorce, and we were living in a rented two-story town house. After years of tumult, things had finally started to settle down, and I was happy just to spend time with him. We had two floral-patterned love seats that used to be part of a formal living room set, but since Mom got full custody of the family room sofa, they now served as our primary seating. There were rips in the arms, and one of the legs had snapped off. We had each claimed one of our own, mine on the right side of the living room, his on the left. Before the show came on at 10 p.m., I would go into the cupboard to fetch a bag of Tato Skins, a concave chip that looked sort of like a tongue and was supposed to taste just like potato skins, and did. Then I would go into the refrigerator to retrieve a tin of Frito-Lay cheddar cheese dip and bring it into the living room, where Dad and I would split the entire bag, watching a medical drama that, in the series finale, was revealed to be little more than a young boy's dream.

I leave the fake Irish pub and walk by a T.G.I. Friday's on 46th Street. It's about 10 p.m., and the place is filled with tourists. I stand outside watching throngs of people from Ohio and Michigan and everywhere else but New York pass me by. I admire the fact that sometime, in the next day or two, they'll all pack their bags in their Marriott hotel rooms and fly back to places that are so much more familiar to me than this one is.

There's a woman I have a crush on, and I fumble with my cellphone, scrolling for her number. I want to see if she'll meet me here. I want to call her and say: "Hey, wanna meet me at Fridays for some potato skins?" But then I realize how affected this will sound. She'll think I'm asking her to eat potato skins at Friday's because it's ironic, but it's not ironic at all. It's sacred.

I put my phone back into my pocket and totter toward the subway. This isn't the time for potato skins, I think to myself. But will there ever be a time again? Maybe potato skins are best left alone as a childhood memory. Yes, they are my favorite comfort food, but if I eat them, they'll just remind me how uncomfortable I am. In my city. In my middle age. In my life.

by Keith Pandolfi, Serious Eats |  Read more:
Image: Zac Overman

George Booth
via:

Therapy Wars: the Revenge of Freud

Dr David Pollens is a psychoanalyst who sees his patients in a modest ground-floor office on the Upper East Side of Manhattan, a neighbourhood probably only rivalled by the Upper West Side for the highest concentration of therapists anywhere on the planet. Pollens, who is in his early 60s, with thinning silver hair, sits in a wooden armchair at the head of a couch; his patients lie on the couch, facing away from him, the better to explore their most embarrassing fears or fantasies. Many of them come several times a week, sometimes for years, in keeping with analytic tradition. He has an impressive track record treating anxiety, depression and other disorders in adults and children, through the medium of uncensored and largely unstructured talk.

To visit Pollens, as I did one dark winter’s afternoon late last year, is to plunge immediately into the arcane Freudian language of “resistance” and “neurosis”, “transference” and “counter-transference”. He exudes a sort of warm neutrality; you could easily imagine telling him your most troubling secrets. Like other members of his tribe, Pollens sees himself as an excavator of the catacombs of the unconscious: of the sexual drives that lurk beneath awareness; the hatred we feel for those we claim to love; and the other distasteful truths about ourselves we don’t know, and often don’t wish to know.

But there’s a very well-known narrative when it comes to therapy and the relief of suffering – and it leaves Pollens and his fellow psychoanalysts decisively on the wrong side of history. For a start, Freud (this story goes) has been debunked. Young boys don’t lust after their mothers, or fear their fathers will castrate them; adolescent girls don’t envy their brothers’ penises. No brain scan has ever located the ego, super-ego or id. The practice of charging clients steep fees to ponder their childhoods for years – while characterising any objections to this process as “resistance”, demanding further psychoanalysis – looks to many like a scam. “Arguably no other notable figure in history was so fantastically wrong about nearly every important thing he had to say” as Sigmund Freud, the philosopher Todd Dufresne declared a few years back, summing up the consensus and echoing the Nobel prize-winning scientist Peter Medawar, who in 1975 called psychoanalysis “the most stupendous intellectual confidence trick of the 20th century”. It was, Medawar went on, “a terminal product as well – something akin to a dinosaur or a zeppelin in the history of ideas, a vast structure of radically unsound design and with no posterity.”

A jumble of therapies emerged in Freud’s wake, as therapists struggled to put their endeavours on a sounder empirical footing. But from all these approaches – including humanistic therapy, interpersonal therapy, transpersonal therapy, transactional analysis and so on – it’s generally agreed that one emerged triumphant. Cognitive behavioural therapy, or CBT, is a down-to-earth technique focused not on the past but the present; not on mysterious inner drives, but on adjusting the unhelpful thought patterns that cause negative emotions. In contrast to the meandering conversations of psychoanalysis, a typical CBT exercise might involve filling out a flowchart to identify the self-critical “automatic thoughts” that occur whenever you face a setback, like being criticised at work, or rejected after a date.

CBT has always had its critics, primarily on the left, because its cheapness – and its focus on getting people quickly back to productive work – makes it suspiciously attractive to cost-cutting politicians. But even those opposed to it on ideological grounds have rarely questioned that CBT does the job. Since it first emerged in the 1960s and 1970s, so many studies have stacked up in its favour that, these days, the clinical jargon “empirically supported therapies” is usually just a synonym for CBT: it’s the one that’s based on facts. Seek a therapy referral on the NHS today, and you’re much more likely to end up, not in anything resembling psychoanalysis, but in a short series of highly structured meetings with a CBT practitioner, or perhaps learning methods to interrupt your “catastrophising” thinking via a PowerPoint presentation, or online.

Yet rumblings of dissent from the vanquished psychoanalytic old guard have never quite gone away. At their core is a fundamental disagreement about human nature – about why we suffer, and how, if ever, we can hope to find peace of mind. CBT embodies a very specific view of painful emotions: that they’re primarily something to be eliminated, or failing that, made tolerable. A condition such as depression, then, is a bit like a cancerous tumour: sure, it might be useful to figure out where it came from – but it’s far more important to get rid of it. CBT doesn’t exactly claim that happiness is easy, but it does imply that it’s relatively simple: your distress is caused by your irrational beliefs, and it’s within your power to seize hold of those beliefs and change them.

Psychoanalysts contend that things are much more complicated. For one thing, psychological pain needs first not to be eliminated, but understood. From this perspective, depression is less like a tumour and more like a stabbing pain in your abdomen: it’s telling you something, and you need to find out what. (No responsible GP would just pump you with painkillers and send you home.) And happiness – if such a thing is even achievable – is a much murkier matter. We don’t really know our own minds, and we often have powerful motives for keeping things that way. We see life through the lens of our earliest relationships, though we usually don’t realise it; we want contradictory things; and change is slow and hard. Our conscious minds are tiny iceberg-tips on the dark ocean of the unconscious – and you can’t truly explore that ocean by means of CBT’s simple, standardised, science-tested steps.

This viewpoint has much romantic appeal. But the analysts’ arguments fell on deaf ears so long as experiment after experiment seemed to confirm the superiority of CBT – which helps explain the shocked response to a study, published last May, that seemed to show CBT getting less and less effective, as a treatment for depression, over time.

Examining scores of earlier experimental trials, two researchers from Norway concluded that its effect size – a technical measure of its usefulness – had fallen by half since 1977. (In the unlikely event that this trend were to persist, it could be entirely useless in a few decades.) Had CBT somehow benefited from a kind of placebo effect all along, effective only so long as people believed it was a miracle cure?
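(A note on the jargon: the “effect size” in such meta-analyses is usually Cohen’s d, the gap between the treated and control groups’ average outcomes measured in units of their pooled standard deviation; whether the Norwegian researchers used exactly this measure isn’t stated here, so take the formula as the standard definition rather than theirs.)

d = \frac{\bar{x}_{\text{treated}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}

By that yardstick, a hypothetical d of 0.8 in 1977 falling to 0.4 today would mean the average treated patient went from doing better than roughly 79% of untreated patients to doing better than about 66% of them.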

That puzzle was still being digested when researchers at London’s Tavistock clinic published results in October from the first rigorous NHS study of long-term psychoanalysis as a treatment for chronic depression. For the most severely depressed, it concluded, 18 months of analysis worked far better – and with much longer-lasting effects – than “treatment as usual” on the NHS, which included some CBT. Two years after the various treatments ended, 44% of analysis patients no longer met the criteria for major depression, compared to one-tenth of the others. Around the same time, the Swedish press reported a finding from government auditors there: that a multimillion-pound scheme to reorient mental healthcare towards CBT had proved completely ineffective in meeting its goals.

Such findings, it turns out, aren’t isolated – and in their midst, a newly emboldened band of psychoanalytic therapists are pressing the case that CBT’s pre-eminence has been largely built on sand. Indeed, they argue that teaching people to “think themselves to wellness” might sometimes make things worse. “Every thoughtful person knows that self-understanding isn’t something you get from the drive-thru,” said Jonathan Shedler, a psychologist at the University of Colorado medical school, who is one of CBT’s most unsparing critics. His default bearing is one of wry good humour, but exasperation ruffled his demeanour whenever our conversation dwelt too long on CBT’s claims of supremacy. “Novelists and poets seemed to have understood this truth for thousands of years. It’s only in the last few decades that people have said, ‘Oh, no, in 16 sessions we can change lifelong patterns!’” If Shedler and others are right, it may be time for psychologists and therapists to re-evaluate much of what they thought they knew about therapy: about what works, what doesn’t, and whether CBT has really consigned the cliche of the chin-stroking shrink – and with it, Freud’s picture of the human mind – to history. The impact of such a re-evaluation could be profound; eventually, it might even change how millions of people around the world are treated for psychological problems.

by Oliver Burkeman, The Guardian |  Read more:
Image: Peter Gamlen

Art nouveau pendant, early 1900s
via:

French woman, 1920s
via:

Human-Animal Chimeras Are Gestating on U.S. Research Farms

Braving a funding ban put in place by America’s top health agency, some U.S. research centers are moving ahead with attempts to grow human tissue inside pigs and sheep with the goal of creating hearts, livers, or other organs needed for transplants.

The effort to incubate organs in farm animals is ethically charged because it involves adding human cells to animal embryos in ways that could blur the line between species.

Last September, in a reversal of earlier policy, the National Institutes of Health announced it would not support studies involving such “human-animal chimeras” until it had reviewed the scientific and social implications more closely.

The agency, in a statement, said it was worried about the chance that animals’ “cognitive state” could be altered if they ended up with human brain cells.

The NIH action was triggered after it learned that scientists had begun such experiments with support from other funding sources, including from California’s state stem-cell agency. The human-animal mixtures are being created by injecting human stem cells into days-old animal embryos, then gestating these in female livestock. (...)

The experiments rely on a cutting-edge fusion of technologies, including recent breakthroughs in stem-cell biology and gene-editing techniques. By modifying genes, scientists can now easily change the DNA in pig or sheep embryos so that they are genetically incapable of forming a specific tissue. Then, by adding stem cells from a person, they hope the human cells will take over the job of forming the missing organ, which could then be harvested from the animal for use in a transplant operation.

“We can make an animal without a heart. We have engineered pigs that lack skeletal muscles and blood vessels,” says Daniel Garry, a cardiologist who leads a chimera project at the University of Minnesota. While such pigs aren’t viable, they can develop properly if a few cells are added from a normal pig embryo. Garry says he’s already melded two pigs in this way and recently won a $1.4 million grant from the U.S. Army, which funds some biomedical research, to try to grow human hearts in swine.

Because chimeras could provide a new supply of organs for needy patients and also lead to basic discoveries, researchers including Garry say they intend to press forward despite the NIH position. In November, he was one of 11 authors who published a letter criticizing the agency for creating “a threat to progress” that “casts a shadow of negativity” on their work.

The worry is that the animals might turn out to be a little too human for comfort, say ending up with human reproductive cells, patches of human hair, or just higher intelligence. “We are not near the island of Dr. Moreau, but science moves fast,” NIH ethicist David Resnik said during the agency’s November meeting. “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.”

Hiromitsu Nakauchi, a stem-cell biologist at Stanford University, began trying to make human-sheep chimeras this year. He says that so far the contribution by human cells to the animals’ bodies appears to be relatively small. “If the extent of human cells is 0.5 percent, it’s very unlikely to get thinking pigs or standing sheep,” he says. “But if it’s large, like 40 percent, then we’d have to do something about that.”

by Antonio Regalado, MIT Technology Review | Read more:
Image: Ping Zhu

So Long, and Thanks for All the Fish


On January 5th, in a pre-dawn ritual going back decades, a handbell rang to mark the year’s first auction at Tsukiji, Tokyo’s sprawling fish market. The star attraction was a glistening 200kg tuna, sold to a sushi restaurant chain for ¥14m ($118,000). But the sale was tinged with nostalgia and even bitterness. This time next year the wholesale market, the world’s busiest, will be gone.

Squeezed between the Sumida river and the Ginza shopping district, Tsukiji is creaking at the seams. Some 60,000 people work under its leaky roof, and hundreds of forklifts, carrying everything from sea urchins to whale meat, careen across bumpy floors. The site’s owner, the city government, wants it moved.

That is unpopular. Traders resent being yanked to a sterile new site to the south. The new market is being built on a wharf whose soil is contaminated by the toxic effluent from a former gasworks. The clean-up and negotiations delayed the move for over a decade.

The final blow was Tokyo’s successful bid to host the 2020 Olympics. A new traffic artery will cut through Tsukiji, transporting visitors to the games’ venues. Part of the site will become a temporary press centre, says Yutaka Maeyasui, the executive in charge of shifting the market. “Our time is up,” he says, glancing around his decrepit office. The site has become too small, old and crowded. An earthquake could bring the roof down.

by The Economist |  Read more:
Image: uncredited

Thursday, January 7, 2016

After Capitalism

Where we're going we don't need roads

How will it end? For centuries even the most sanguine of capitalism’s theorists have thought it not long for this world. Smith, Ricardo, and Mill pointed to a “falling rate of profit” linked to inevitable declines in agricultural productivity. Marx applied the same concept to industrial production, suggesting that the tendency to replace workers with machines would lead to a chronic and insurmountable lack of demand. Sombart saw the restive adventurousness of capitalism as the key to its success—and, ultimately, its failure: though the appearance of new peripheries had long funneled profits back to the center, the days of “stout Cortez” had ended and there would one day be no empires or hinterlands to subdue.

Schumpeter was the gloomiest of all. He opened a chapter titled “Can Capitalism Survive?” (in his Capitalism, Socialism, and Democracy) with the definitive answer, “No. I do not think it can.” Inspired by Marx, he imagined that the very success of capitalism—the creation of large enterprises through continuous innovation—would lead to profound fatigue as innovation came to be merely routine, and the bourgeoisie turned its attention toward the banalities of office life: “Success in industry and commerce requires a lot of stamina, yet industrial and commercial activity is essentially unheroic in the knight’s sense—no flourishing of swords about it, not much physical prowess, no chance to gallop the armored horse into the enemy, preferably a heretic or heathen — and the ideology that glorifies the idea of fighting for fighting’s sake and of victory for victory’s sake understandably withers in the office among all the columns of figures.” He foresaw a world in which intellectuals, a marginalized and unhappy lot, would turn their discontent into politics and lead the discontented castoffs of capitalism toward socialism.

These predictions, however, failed to describe what was actually happening with capitalism in the 20th century. By the 1980s people had turned toward a different proposition of Schumpeter’s: that competition “from the new commodity, the new technology, the new source of supply, the new type of organization” was the source of dynamism in a swiftly growing economy. For Schumpeter, the crises of capitalism were signs not of the system’s debility but of its secret health. Business cycles were zesty, violent guarantees of continued growth. Monopolies were only temporary and could be broken up by the “perennial gale of creative destruction.” When in the 1960s and ’70s the otherwise impregnable position of American industry was broken by competition from Germany and Japan, Schumpeter seemed prescient. The response of corporations in the 1980s—enormous mergers, leveraged buyouts, union busting, corporate raiding, mass layoffs, and upward redistribution of wealth—seemed almost to be taking his words as prescriptive.

But while the economy has been dynamic, it has not been healthy. Several crashes later, the gloom has returned, and the signs of autumn are once again most recognizable in the pronouncements of free-market capitalism’s erstwhile boosters. In the past year, many have taken up Larry Summers’s remark that we have entered a period of “secular stagnation,” marked by persistent and slow growth worldwide. Fiscal austerity is general, taxes remain low, and debt levels continue to rise—which means that Western countries, by selling treasury bonds to the rich through capital markets, are actually paying their elites in bond yields to avoid having to go through the politically impossible process of taxing them. Absent any political recourse to countercyclical fiscal policy, central banks in the US, the Eurozone, and Japan have kept interest rates low and pumped trillions of dollars of fiat money into the financial system, keeping banks and dot-com companies liquid and driving the rich to put their money into the condos now flooding Manhattan, all while leaving median wages pleasantly low. It’s kept things humming along, but not much more than that. Fear courses through the veins of the free-marketers, who recognize that all is not well with the system they love.

One form that such worry takes is that robots are coming to take our jobs. From The Second Machine Age to Rise of the Robots, a new wave of technofuturists predicts that most manufacturing and a good deal of white-collar work in “services” can and will be subject to automation. The special force of the technofuturists’ predictions today lies in the fact that many of us read their work on devices we carry in our pockets that have already destroyed jobs, or at least made them more precarious, at newspapers, record companies, travel agencies, taxi services, and even casinos. The statistics they purvey are worrying, among them the fact that the share of workers in global manufacturing is on the decline. China’s share peaked in the 1990s at 15 percent and has decreased since. Dani Rodrik calls this process “premature deindustrialization”: the ability of more and more developing countries to “skip” the usual stages of capital accumulation (mass industrialization accompanied by adding workers in services) by replacing more workers with machines and moving others into services.

The surprise is that a number of prominent left intellectuals have begun to view the idea of automation with equanimity, even optimism. Most prominent among them are the accelerationists, whose widely circulated “Manifesto for an Accelerationist Politics” is the inspiration for a new book, Inventing the Future, by the manifesto’s original authors Nick Srnicek and Alex Williams. Their motto seems to be “I for one welcome our new robot overlords”—for the principle of “accelerationism” is that automation is likely to become general, and so the left needs once and for all to cease imagining that blue-collar unionism and socialist parties will drive us toward communism.

The accelerationists insist that the future will be one in which, thanks to computer assisted advances in automation, wage labor is a condition guaranteed to very few, and “surplus populations,” already large, will dominate the planet. Prior socialists imagined that victory would come through the workplace; the accelerationists argue that, in the future, the workplace won’t exist in anything like the form we have now, and in any case it will have very few permanent workers. Assuming this position, they ask: What would be the social vision appropriate to a jobless future? What, after the end of working-class socialist dreams, should the left propose?

by The Editors, N+1 |  Read more:
Image: Derek Paul Boyle, Salt and Pennies, 2015

A Sad State of Affairs - Most Americans Are One Paycheck Away From the Street

Whenever I see one of these stories about how little Americans have available for an emergency, my blood starts to boil. I understand that poor people making $25,000 per year are forced to live paycheck to paycheck. But when 63% of all Americans can’t handle a $500 emergency, and 46% of households making over $75,000 can’t handle a $500 emergency, then they are just plain stupid, frivolous, and incapable of distinguishing between wants and needs. Delayed gratification is a trait almost non-existent among Americans today.

The first thing that infuriates me is the assumption that a $500 car repair or house repair is an unexpected emergency. It’s a fucking living expense. It’s not a fucking surprise. Your car will need new tires every few years. That’s $500 or more. Your hot water heater, air conditioner, roof, windows, etc. will need to be replaced. Everyone gets sick. That is not unexpected. Anyone who lives their life as if these expenses are a shocking surprise is a blithering idiot. And this country is crawling with blithering idiots.

So the majority of Americans can’t handle a $500 expense, but for the last two years there have been 35 million new cars “sold” to blithering idiots on credit or leases. Even though they have no money, they decide it’s a brilliant idea to commit to a 7-year payment of $300 to $500 per month on an asset that declines in value rapidly. Morons abound. These are the same people who must have their Starbucks coffee every day. These math-challenged boobs could defer buying a Starbucks coffee every day, save the $3, and accumulate $750 of emergency savings in one year. (...)
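(The arithmetic behind that $750 works out if “every day” is read as roughly 250 coffee-buying days a year, which is an assumption; a literal 365 days at $3 would come to about $1,095.)

\$3 \times 250\ \text{days} = \$750 \qquad \$3 \times 365\ \text{days} = \$1{,}095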

Quentin Fottrell: Most Americans are one paycheck away from the street:

Approximately 63% of Americans have no emergency savings for things such as a $1,000 emergency room visit or a $500 car repair, according to a survey released Wednesday of 1,000 adults by personal finance website Bankrate.com, up slightly from 62% last year. Faced with an emergency, they say they would raise the money by reducing spending elsewhere (23%), borrowing from family and/or friends (15%) or using credit cards to bridge the gap (15%).

This lack of emergency savings could be a problem for millions of Americans. More than four in 10 Americans either experienced a major unexpected expense over the past 12 months or had an immediate family member who had an unexpected expense, Bankrate found. (The survey didn’t specify the impact of that expense.) “Without emergency savings, you may not have money to cover needed home repairs,” says Signe-Mary McKernan, senior fellow and economist at the Urban Institute, a nonprofit organization that focuses on social and economic policy. “Similarly, without emergency savings, people could raid their retirement account.”

The findings are strikingly similar to two other reports, one a U.S. Federal Reserve survey of more than 4,000 adults, released in 2014. “Savings are depleted for many households after the recession,” it found. Among those who had savings prior to 2008, 57% said they’d used up some or all of their savings in the Great Recession and its aftermath. And another survey of 1,000 adults released last year by personal finance website GOBankingRates.com found that most Americans (62%) have less than $1,000 in their savings account (although that doesn’t include retirement or other investment accounts).

Why aren’t people saving? Millions of Americans are struggling with student loans, medical bills and other debts, says Andrew Meadows, a San Francisco-based producer of “Broken Eggs,” a documentary about retirement. Central bankers hiked their short-term interest rate target last month to a range of 0.25% to 0.50% from near-zero, but that’s still a small return for savings left in bank accounts. Indeed, personal savings rates as a percentage of disposable income dropped from 11% in December 2012 to 4.6% in August 2015, according to the Bureau of Economic Analysis, and now hover at 5.5%.

More money and education can help. The latest Bankrate survey found that savings increased with income and education: Just 46% of the highest-income households ($75,000-plus per year) and 52% of college graduates lack enough savings to cover a $500 car repair or $1,000 emergency room visit. And while those figures could still be lower, Americans are willing to cut back on at least some expenses when money is tight: 58% say they’re “very/somewhat” likely to cut back on eating out and to decrease their cable bill, 41% are likely to spend less on coffee at places like Starbucks, and 39% will seek out lower-cost cellphone bills.

by Jim Quinn, The Burning Platform, and Quentin Fottrell, Market Watch | Read more:
Image: Shutterstock

Saturday, January 2, 2016


[ed. Taking a short break. Enjoy the archives.] [We're back. Thanks, Max.]

[ed. No garnish on the next one, thanks...]

Friday, January 1, 2016


Gordon Welters
via:

What Was Volkswagen Thinking?

One day in 1979, James Burke, the chief executive of Johnson & Johnson, summoned more than 20 of his key people into a room, jabbed his finger at an internal document, and proposed destroying it.

The document was hardly incriminating. Entitled “Our Credo,” its plainspoken list of principles—including a higher duty to “mothers, and all others who use our products”—had been a fixture on company walls since 1943. But Burke was worried that managers had come to regard it as something like the Magna Carta: an important historical document, but hardly a tool for modern decision making. “If we’re not going to live by it, let’s tear it off the wall,” Burke told the group, using the weight of his office to force a debate. And that is what he got: a room full of managers debating the role of moral duties in daily business, and then choosing to resuscitate the credo as a living document.

Three years later, after reports emerged of a deadly poisoning of Tylenol capsules in Chicago-area stores, Johnson & Johnson’s reaction became the gold standard of corporate crisis response. But the company’s swift decisions—to remove every bottle of Tylenol capsules from store shelves nationwide, publicly warn people not to consume its product, and take a $100 million loss—weren’t really decisions. They flowed more or less automatically from the signal sent three years earlier. Burke, in fact, was on a plane when news of the poisoning broke. By the time he landed, employees were already ordering Tylenol off store shelves.

On the face of it, you’d be hard-pressed to find an episode less salient to the emissions-cheating scandal at Volkswagen—a company that, by contrast, seems intent on poisoning its own product, name, and future. But although the details behind VW’s installation of “defeat devices” in its vehicles are only beginning to trickle out, the decision process is very likely to resemble a bizarro version of Johnson & Johnson’s, with opposite choices every step of the way.

The sociologist Diane Vaughan coined the phrase the normalization of deviance to describe a cultural drift in which circumstances classified as “not okay” are slowly reclassified as “okay.” In the case of the Challenger space-shuttle disaster—the subject of a landmark study by Vaughan—damage to the crucial O‑rings had been observed after previous shuttle launches. Each observed instance of damage, she found, was followed by a sequence “in which the technical deviation of the [O‑rings] from performance predictions was redefined as an acceptable risk.” Repeated over time, this behavior became routinized into what organizational psychologists call a “script.” Engineers and managers “developed a definition of the situation that allowed them to carry on as if nothing was wrong.” To clarify: They were not merely acting as if nothing was wrong. They believed it, bringing to mind Orwell’s concept of doublethink, the method by which a bureaucracy conceals evil not only from the public but from itself.

If that comparison sounds overwrought, consider the words of Denny Gioia, a management professor at Penn State who, in the early 1970s, was the coordinator of product recalls at Ford. At the time, the Ford Pinto was showing a tendency to explode when hit from behind, incinerating passengers. Twice, Gioia and his team elected not to recall the car—a fact that, when revealed to his M.B.A. students, goes off like a bomb. “Before I went to Ford I would have argued strongly that Ford had an ethical obligation to recall,” he wrote in the Journal of Business Ethics some 17 years after he’d left the company. “I now argue and teach that Ford had an ethical obligation to recall. But, while I was there, I perceived no strong obligation to recall and I remember no strong ethical overtones to the case whatsoever.”

What, Gioia the professor belatedly asked, had Gioia the auto executive been thinking? The best answer, he concluded, is that he hadn’t been. Executives are bombarded with information. To ease the cognitive load, they rely on a set of unwritten scripts imported from the organization around them. You could even define corporate culture as a collection of scripts. Scripts are undoubtedly efficient. Managers don’t have to muddle through each new problem afresh, Gioia wrote, because “the mode of handling such problems has already been worked out in advance.” But therein lies the danger. Scripts can be flawed, and grow more so over time, yet they discourage active analysis. Based on the information Gioia had at the time, the Pinto didn’t fit the criteria for recall that his team had already agreed upon (a clearly documentable pattern of failure of a specific part). No further thought necessary.

Sometimes a jarring piece of evidence does intrude, forcing a conscious reassessment. For Gioia, it was the moment he saw the charred hulk of a Pinto at a company depot known internally as “The Chamber of Horrors.” The revulsion it evoked gave him pause. He called a meeting. But nothing changed. “After the usual round of discussion about criteria and justification for recall, everyone voted against recommending recall—including me.”

The most troubling thing, says Vaughan, is the way scripts “expand like an elastic waistband” to accommodate more and more divergence. Morton-Thiokol, the NASA contractor charged with engineering the O-rings, requested a teleconference on the eve of the fatal Challenger launch. After a previous launch, its engineers had noticed O-ring damage that looked different from damage they’d seen before. Suspecting that cold was a factor, the engineers saw the near-freezing forecast and made a “no launch” recommendation—something they had never done before. But the data they faxed to NASA to buttress their case were the same data they had earlier used to argue that the space shuttle was safe to fly. NASA pounced on the inconsistency. Embarrassed and unable to overturn the script they themselves had built in the preceding years, Morton-Thiokol’s brass buckled. The “no launch” recommendation was reversed to “launch.”

“It’s like losing your virginity,” a NASA teleconference participant later told Vaughan. “Once you’ve done it, you can’t go back.” If you try, you face a credibility spiral: Were you lying then or are you lying now?

by Jerry Useem, The Atlantic |  Read more:
Image: Justin Renteria

Eating ‘Healthy-ish’

Abstinence, we are usually told around this time of year, makes the heart grow stronger. It’s why Dry January, which started in the green and pleasantly alcoholic land of Britain a few years ago before reaching the U.S., is increasingly being touted as a good and worthy thing to do, and why so many people are currently making plans to remove whole food groups from their diet: carbs, fat, Terry’s Chocolate Oranges. The key to health, books and websites and dietitians and former presidents reveal, is a process of elimination. It’s going without. It’s getting through the darkest, coldest month of the year without so much as a snifter of antioxidant-rich Cabernet.

The problem with giving things up, though, is that inevitably it creates a void in one’s diet that only Reese’s Pieces and a family-sized wheel of brie can fill. Then there’s the fact that so many abstinence-espousing programs require spending money on things; on Whole30 cookbooks and Weight Watchers memberships and $10 bottles of bone broth. For a process that supposedly involves cutting things out, there seems to be an awful lot to take in.

This, Michael Pollan posits, is the problem with food: It’s gotten extraordinarily complicated. The writer and sustainable-eating advocate has written several books on how the simple business of eating has become a minefield in which earnest Westerners try to tiptoe around gooey, genetically engineered sugar bombs without setting off an explosion of calories, corn sugar, and cancer. In Defense of Food, published in 2008, offers a “manifesto” for eaters (i.e. humans) that’s breathtaking in its seven-word simplicity: Eat Food. Not Too Much. Mostly Plants. This mantra is repeated once more in a documentary based on the book that airs Wednesday night on PBS, and it’s felt in the January issue of Bon Appetit, which is based almost entirely around the concept of “healthy-ish” eating: “delicious, comforting home cooking that just happens to be kinda good for you.”

Healthy-ish, as a concept, isn’t new. In fact, it’s the food industry’s equivalent of your mom telling you to finish your broccoli before you dive into the Twinkies, only dressed up with a sexy hyphenated coverline and some mouthwatering photos of chicken seared in a cast-iron skillet. “Healthy-ish” shouldn’t feel revolutionary. By its very definition it’s something of a big old foodie shrug—an acknowledgment that if we can’t all subsist on steamed fish and vegetables all of the time, we can at least offset the steak dinner by having salad for lunch. It is, as per Pollan at least, a philosophy that everything is best enjoyed in moderation, including moderation.

So why does it feel so subversive?

The reason, as explained by both manifestations of In Defense of Food, is that industries upon industries, even entire religions, have been predicated on the premise that eating (certain things) is bad and will kill you. The documentary draws on years of food-related quackery to illustrate how ingrained the fear of food is.

by Sophie Gilbert, The Atlantic |  Read more:
Image: Kikim Media

The Good Times

Talking Heads



[ed. Possum Legba]
Image via: