Monday, March 20, 2017

How NFL Players Lost Their Leverage

It’s a good time to be an in-demand NFL player, as record spending is making the league’s top free agents richer than ever. As of Tuesday, NFL teams had spent $1.9 billion on unrestricted free agents through the first six days of free agency, with $922 million of that guaranteed. Last spring, teams spent $1 billion guaranteed over six weeks. For valuable players hitting the market at the right moment, big deals are the new normal.

For everyone else, though, settling is. The rising salary cap, which sits at $167 million for the 2017 season — up $12 million from last year and $47 million from 2011, the first season of the league’s current collective bargaining agreement — has altered spending patterns in the NFL. And though earnings are rising for the top free agents, a confluence of events has caused them to shrink for those lower on rosters, eradicating the NFL’s middle class and costing its lower tier much of its leverage.

Larger training camp and practice squad rosters mean more players competing for spots on the active roster, robbing those on the fringes of true bargaining power. The rookie wage scale, introduced in 2011 to theoretically push more money toward veterans, has actually hurt aging nonstars, who wind up negotiating below-market deals based on their low initial salaries. And of course, NFL teams remain self-interested even amid the rising cap, reducing their own financial burden using little-known, widely implemented mechanisms like split contracts and per-game roster bonuses.

It wasn’t supposed to be like this. “The goal of [the 2011 CBA] was to give more money to the middle class,” says Mark Dominik, a former Tampa Bay Buccaneers general manager, who’s now an analyst for ESPN. “Instead, what happened was teams rewarded star players, and it created a cavernous pit between types of contracts. It’s a have-and-have-not league.”

NFL teams have always considered lower-rung players disposable. Now, however, franchises have become expert at stacking the deck against those with the least leverage, further splitting rosters into two clusters with vastly different circumstances.

“We’re in a challenging time,” says Rams general manager Les Snead. “I’ve heard [former Colts executive] Bill Polian talk about the concept of ‘monetary dysfunction,’ where you have problems in the locker room because guys are saying, ‘Hey, why is this guy getting this money?’ … The market used to be outdated annually; now it’s outdated on a player-by-player basis. The paradigm shifts constantly. … There’s going to be a natural jealousy.”

During the 2015 season, Ronnie Hillman led the Super Bowl champion Denver Broncos in rushing yards and touchdowns. At training camp seven months later, the Broncos, flush with running backs, released Hillman from the $2 million contract they’d signed him to earlier in 2016. Instead of resuming his role as Denver’s lead back, Hillman had to face the open market a week before the season began. Absent his leverage, he signed a league-minimum deal with the Minnesota Vikings two weeks into the season worth a prorated $760,000 and featuring what is called an “injury split,” which would cost him hundreds of thousands of dollars if he wound up on injured reserve.

As guaranteed money has risen for the NFL’s haves, split contracts have become increasingly prominent for the have-nots. These deals put the risk almost entirely on the player by not guaranteeing the full amount of money unless he stays healthy all season. For example, Donald Brown’s deal with the Patriots last season called for $965,000 total, but a prorated drop to $453,000 if Brown hit IR. This mechanism used to be reserved for late-round draft picks and veterans with extensive injury histories. But according to multiple NFL team executives, over the past few seasons splits and other similarly pro-team, anti-player contract clauses like per-game bonuses have started to creep into more veteran deals than ever before. Nick Greisen, a former NFL player who now sells injury insurance to NFL players, estimates that players leaguewide lost $28 million in salary due to these injury clauses in 2015, up from $19 million in 2013 and $23 million in 2014.

“[Teams] are going to try to keep their money in their pockets as much as possible,” Hillman says. “The league is cheap, man. And you kind of learn they don’t really take care of you like that.” Hillman, who was placed on waivers by the Vikings and picked up by the Chargers in November, says he signed his split contract because “I knew I wasn’t going to get hurt,” but also says he feels for the growing group of players facing slanted negotiations. Hillman believes that the CBA should include more provisions to protect veterans and laments how quickly players are flushed out of the league if they find themselves off a roster for even a moment. “There are lots of things you’d want to change about the CBA,” Hillman says. “But for me, it’s definitely how they handle players out of the league, trying to get another [team]. I can’t complain about that because I got picked up, but just hearing how other players struggle to get back in, or look to the [National Football League Players Association] for help, it just sucks to see your friends go through it.” (The Vikings declined to comment.)

Since the cap started rising, NFL teams have performed a master class in reducing their own financial risk at the expense of lower-earning players. In addition to identifying the proliferation of injury splits, people inside the NFL — from team executives to agents — point to the growing number of contracts built in part on per-game bonuses, which stem from being on the active roster, meaning that to get their maximum salary each week, a player must be on the 46-man game-day roster, not just the 53-man overall roster. Greisen says his data shows that players lost $20 million leaguewide in 2016 due to per-game bonuses.

by Kevin Clark, The Ringer |  Read more:
Image: Getty Images/The Ringer

Spoon

Step Back for the Bigger Picture

Two weeks ago today, President Trump went on Twitter and leveled a series of accusations against former President Obama, most notably that Obama had wiretapped his phones in Trump Tower. The claim has been roundly criticized ever since. Notably, it came on the heels of a new round of damaging revelations about ties between Trump's entourage and Russia. We've now had formal inquiries from the congressional intelligence committees, statements from the Department of Justice and the FBI, and a follow-on attempt by Trump and Spicer to redefine what the President actually said.

We know this much of the story. But this is a case where the particularity of the story, the minutiae of intelligence officials' denials, and discussions of what authority a president might theoretically have to do such a thing all conspire to confuse rather than illuminate what happened.

The real story here is that the President, by force of his office and audacity, was able to inject into the national conversation a preposterous claim which the country has spent two weeks debating. True, most people may not believe it. But virtually everyone has gone through the motions of probing the question as though it might be true. Intelligence committees have been briefed, statements have been made, a number of news conferences have been dominated by it. Perhaps most notably, members of his party have only been willing to say that there is as yet no evidence to back up the President's claims - not that they are obviously false and represent a major problem in themselves.

I would say that this ability - both the President's pathological lying and our institutions' inability to grapple with it - is the big, big story. The particulars of the accusation basically pale in comparison.

Also note how these lies have spread. The need to perpetuate the lie has made it necessary to escalate it. In an attempt to work around the uniform denials of every US government agency that does 'wire-taps', Press Secretary Sean Spicer was forced to grasp on to the rantings of a Fox News 'legal analyst' who claimed that President Obama had used British intelligence to sidestep US legal strictures. Repeating this claim with the imprimatur of the White House triggered a minor but real diplomatic incident with the United Kingdom, which may not yet be settled.

Continuing to defend the baseless claim required Trump to revisit the story in his press conference with German Chancellor Angela Merkel, both doubling down on the claim and also passing the buck to Fox News and creating the surreal spectacle of suggesting that he, like Merkel, was the victim of the very intelligence services and law enforcement agencies which he in fact now leads.

While most have dismissed the President's claims, it is still the case that he has been allowed to drive public debate for two weeks over an obvious lie. Members of his party will not denounce it as a lie or even as obviously false. That's a big problem. Without being overly dramatic, this is a warning case of people in power deciding what's true and false, which is a harbinger of free government dying.

by Josh Marshall, Talking Points Memo |  Read more:
Image: uncredited

What the Senate Should Ask Judge Gorsuch

[ed. See also: The Case Against Neil Gorsuch and Neil Gorsuch Is No Originalist.]

When Judge Neil Gorsuch faces the Senate Judiciary Committee on Monday, will we see a series of crisp, clear exchanges on the nature of the Constitution, the role of precedent, the limits of presidential power? Or will we see what one legal scholar called “a vapid and hollow charade, in which repetition of platitudes has replaced discussion of viewpoints and personal anecdotes have supplanted legal analysis”?

If the last 30 years are any guide, put your money on the second option.

Ever since Judge Robert Bork offered the Senate an honest account of his judicial philosophy in 1987 and watched it torpedo his chances, nominees have steadfastly refused to engage on controversial legal issues—insisting that they must avoid prejudging cases by remaining silent about any significant issue that might conceivably come before the court. Those nominees include Elena Kagan, the legal scholar who authored that 1995 jab at the process, and who notably lost her enthusiasm for revealing questions and answers when she was the one being questioned as a nominee.

Modern nominees decline as well to offer assessments of virtually any past Supreme Court decision, beyond embracing Brown v. Board of Education—the school desegregation decision of 1954—and taking a swipe at the Dred Scott decision of 1857, which held that African Americans could not be citizens. (Justices Antonin Scalia and William Rehnquist hold the record for such discretion: During their 1986 confirmation hearings, both refused to commit even to Marbury v. Madison, the 1803 decision that established the Court’s power to strike down laws as unconstitutional.)

The result has been a series of elaborate, ritualistic exercises designed chiefly to make political points in front of the TV cameras. (Many of the senators will make eight-minute statements followed by a question mark.) Democrats will ask Gorsuch whether he believes there is a right of privacy in the Constitution. He will say yes. Then they will ask if that includes a woman’s right to terminate a pregnancy. He will say that issue might well come before the court, and will decline to answer. Or, like John Roberts, he might acknowledge that Roe v. Wade established a precedent, but will not say whether and how that precedent might be overruled. They will ask whether the Constitution limits the president’s power, wrapping such questions with denunciations of President Donald Trump’s travel bans, and point to memos Gorsuch wrote while in the Bush administration, embracing a robust view of that power. He, and the Republicans on the panel, will note that he was serving as an advocate back then, and no conclusion can be fairly drawn about how he might rule as a Supreme Court justice.

Democrats will ask Gorsuch why he rules so often in favor of corporate and business interests. He will say his job is to apply the law, not to reach beyond it to make political judgments. Or Gorsuch might be asked which justices he most admires, a backhand way of asking what judicial philosophies he admires; he could well respond by offering diplomatic praise for the two justices he clerked for—Byron White and Anthony Kennedy—and leave it at that.

Gorsuch’s opponents will have combed the record, looking for any writing or statement that could prove troublesome. Back in 2009, Justice Sonia Sotomayor found herself having to explain over and over—to every Republican on the panel—what she meant when she said in 2001: “I would hope that a wise Latina woman with the richness of her experiences would more often than not reach a better conclusion than a white male who hasn’t lived that life.”

And the process is made worse by the uncertain grasp that many of the senators, on both sides of the aisle, have of the subtleties of constitutional law. (I am waiting for the day when an exasperated nominee challenges members of the committee to ask a question without reading from the talking points prepared by their staffs; in many cases, the silence would be deafening.)

So, faced with a nominee likely to shield himself by invoking “the Ginsburg Rules” (named after Justice Ruth Bader Ginsburg’s determination to offer no “hints,” “previews,” or “forecasts”), are there any questions that might draw Gorsuch into offering a genuine glimpse of his thinking? It’s worth a close look, since if a hearing features nothing more than partisan oratory and skillful evasions, it might be better just to call the whole thing off.

Considering the areas likely to dominate the hearings, and in which the public has the greatest interest in knowing the answers, here are some proposed questions that might help cut through the usual charade and give us a chance for a genuine window into Gorsuch's thinking.

by Jeff Greenfield, Politico |  Read more:
Image: Getty

[ed. See also: Trump's Method, Our Madness.]

Image: Jean Dubuffet, “Tissu d’episodes” (c. 1976).

Sunday, March 19, 2017

Inside Facebook’s (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political-Media Machine

[ed. Broken record: get off of Facebook]

Open your Facebook feed. What do you see? A photo of a close friend’s child. An automatically generated slide show commemorating six years of friendship between two acquaintances. An eerily on-target ad for something you’ve been meaning to buy. A funny video. A sad video. A recently live video. Lots of video; more video than you remember from before. A somewhat less-on-target ad. Someone you saw yesterday feeling blessed. Someone you haven’t seen in 10 years feeling worried.

And then: A family member who loves politics asking, “Is this really who we want to be president?” A co-worker, whom you’ve never heard talk about politics, asking the same about a different candidate. A story about Donald Trump that “just can’t be true” in a figurative sense. A story about Donald Trump that “just can’t be true” in a literal sense. A video of Bernie Sanders speaking, overlaid with text, shared from a source you’ve never seen before, viewed 15 million times. An article questioning Hillary Clinton’s honesty; a headline questioning Donald Trump’s sanity. A few shares that go a bit too far: headlines you would never pass along yourself but that you might tap, read and probably not forget.

Maybe you’ve noticed your feed becoming bluer; maybe you’ve felt it becoming redder. Either way, in the last year, it has almost certainly become more intense. You’ve seen a lot of media sources you don’t recognize and a lot of posts bearing no memorable brand at all. You’ve seen politicians and celebrities and corporations weigh in directly; you’ve probably seen posts from the candidates themselves. You’ve seen people you’re close to and people you’re not, with increasing levels of urgency, declare it is now time to speak up, to take a stand, to set aside allegiances or hangups or political correctness or hate.

Facebook, in the years leading up to this election, hasn’t just become nearly ubiquitous among American internet users; it has centralized online news consumption in an unprecedented way. According to the company, its site is used by more than 200 million people in the United States each month, out of a total population of 320 million. A 2016 Pew study found that 44 percent of Americans read or watch news on Facebook. These are approximate exterior dimensions and can tell us only so much. But we can know, based on these facts alone, that Facebook is hosting a huge portion of the political conversation in America.

The Facebook product, to users in 2016, is familiar yet subtly expansive. Its algorithms have their pick of text, photos and video produced and posted by established media organizations large and small, local and national, openly partisan or nominally unbiased. But there’s also a new and distinctive sort of operation that has become hard to miss: political news and advocacy pages made specifically for Facebook, uniquely positioned and cleverly engineered to reach audiences exclusively in the context of the news feed. These are news sources that essentially do not exist outside of Facebook, and you’ve probably never heard of them. They have names like Occupy Democrats; The Angry Patriot; US Chronicle; Addicting Info; RightAlerts; Being Liberal; Opposing Views; Fed-Up Americans; American News; and hundreds more. Some of these pages have millions of followers; many have hundreds of thousands.

Using a tool called CrowdTangle, which tracks engagement for Facebook pages across the network, you can see which pages are most shared, liked and commented on, and which pages dominate the conversation around election topics. Using this data, I was able to speak to a wide array of the activists and entrepreneurs, advocates and opportunists, reporters and hobbyists who together make up 2016’s most disruptive, and least understood, force in media.

Individually, these pages have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created. Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse. (...)

This year, political content has become more popular all across the platform: on homegrown Facebook pages, through media companies with a growing Facebook presence and through the sharing habits of users in general. But truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture. This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary. (...)

In retrospect, Facebook’s takeover of online media looks rather like a slow-motion coup. Before social media, web publishers could draw an audience in one of two ways: through a dedicated readership visiting their home pages or through search engines. By 2009, this had started to change. Facebook had more than 300 million users, primarily accessing the service through desktop browsers, and publishers soon learned that a widely shared link could produce substantial traffic. In 2010, Facebook released widgets that publishers could embed on their sites, reminding readers to share, and these tools were widely deployed. By late 2012, when Facebook passed a billion users, referrals from the social network were sending visitors to publishers’ websites at rates sometimes comparable to Google, the web’s previous de facto distribution hub. Publishers took note of what worked on Facebook and adjusted accordingly.

This was, for most news organizations, a boon. The flood of visitors aligned with two core goals of most media companies: to reach people and to make money. But as Facebook’s growth continued, its influence was intensified by broader trends in internet use, primarily the use of smartphones, on which Facebook became more deeply enmeshed with users’ daily routines. Soon, it became clear that Facebook wasn’t just a source of readership; it was, increasingly, where readers lived.

Facebook, from a publisher’s perspective, had seized the web’s means of distribution by popular demand. A new reality set in, as a social-media network became an intermediary between publishers and their audiences. For media companies, the ability to reach an audience is fundamentally altered, made greater in some ways and in others more challenging. For a dedicated Facebook user, a vast array of sources, spanning multiple media and industries, is now processed through the same interface and sorting mechanism, alongside updates from friends, family, brands and celebrities.

From the start, some publishers cautiously regarded Facebook as a resource to be used only to the extent that it supported their existing businesses, wary of giving away more than they might get back. Others embraced it more fully, entering into formal partnerships for revenue sharing and video production, as The New York Times has done. Some new-media start-ups, most notably BuzzFeed, have pursued a comprehensively Facebook-centric production-and-distribution strategy. All have eventually run up against the same reality: A company that can claim nearly every internet-using adult as a user is less a partner than a context — a self-contained marketplace to which you have been granted access but which functions according to rules and incentives that you cannot control.

by John Herrman, NY Times |  Read more:
Image: Facebook

Saturday, March 18, 2017

Sand's End

[ed. What's amazing is that anyone would think this would go any other way. And this is just the beginning. See also: The Mining 'Mafias' Killing Each Other to Build Cities.]

Past the towers of downtown Miami and over Biscayne Bay sits the city of Miami Beach. Perched on the tip of a narrow barrier island, Miami Beach is a resort community of just under 100,000 people, though its population swells with a steady stream of tourists. Through the wall of hotels that line its shore is the city's central draw: the wide, white stretch of Miami Beach's beach.

The beach is the centerpiece of the city’s promise of escape — escape from cold winters or college classes or family, where you can drink goblets of bright green liquor and cruise down Ocean Drive in a rented tangerine Lamborghini before retiring to the warm sand. To the casual observer, the beach may look like the only natural bit of the city, a fringe of shore reaching out from under the glass and pastel skyline. But this would be false: the beach is every bit as artificial as the towers and turquoise pools. For years the sea has been eating away at the shore, and the city has spent millions of dollars pumping up sand from the seafloor to replace it, only to have it wash away again. Every handful of sand on Miami Beach was placed there by someone.

That sand is washing away ever faster. The sea around Miami is rising a third of an inch a year, and it’s accelerating. The region is far from alone in its predicament, or in its response to an eroding coast: it’s becoming hard to find a populated beach in the United States that doesn’t require regular infusions of sand, says Rob Young, director of the Program for the Study of Developed Shorelines at Western Carolina University. Virginia Beach, North Carolina’s Outer Banks, New York’s Long Island, New Jersey’s Cape May, and countless other coastal cities are trapped in the same cycle, a cycle whose pace will become harder to maintain as the ocean rises.

"There isn’t a natural grain of sand on the beach in Northern New Jersey; there is no Miami Beach unless we build it," Young says. "The real endangered species on the coast of the US isn’t the piping plover or the loggerhead sea turtle. It’s an unengineered beach."

The sea has been slowly cutting a divot into the shore in front of Miami Beach’s iconic Fontainebleau hotel, encroaching nearly to the promenade. Patching it would normally be a small job. But Miami Beach has a problem, one more cities will soon face: it has run out of sand in the ocean nearby.

The beach is the tattered edge of the land. It’s made of debris, which we call sand when it’s too small to think about discretely, though exactly what it consists of varies. It could be pulverized coral, like in the Maldives, or crushed clamshells, like in Shark Bay, Australia, or discarded glass, like around Fort Bragg, California. Often it’s rock that has been crushed by glaciers or eroded off mountains and washed down rivers to the sea. Beaches made from black basalt or purple garnet have a certain novelty value, but the ideal beach, the one you see on ads for airlines and beer, is sugary and white. It’s likely calcium carbonate or quartz.

Coastal engineers talk about "beach behavior," as if dealing with an unruly animal rather than a geologic feature. Waves sort sand grains to a depth where they no longer move them, so some beaches change with the seasons, as winter storms suck sand offshore, leaving only cobblestones, and smaller waves push it back in the summer. One thing all beaches have in common is that they’re always shifting, wave by wave over years, or overnight with a storm.

For much of the 20th century, people tried to hold beaches in place by building groins — lines of rock or wood pylons protruding from the shore. But groins robbed downdrift beaches of sand that would have come their way, creating new erosion problems. (Some came to be called "spite groins.") Seawalls made things worse, further blocking the natural movement of sand and forcing waves back onto the shore, scouring away the beach. By the 1970s, there was very little beach left on Miami Beach or shore at the Jersey Shore. So a new response became popular: add sand.

That job largely fell to the US Army Corps of Engineers. Dredges floated offshore, extending scoops or hoses tipped with cutter heads into the seafloor and piping sand back onto the eroding beach. Nourishment, as the practice is called, maintained the beach, but it was also an admission that there would never be a permanent solution to fixing the shore in place. Once you start nourishing a beach, you can never stop. Its equilibrium state lies elsewhere, and wave after wave will eat away at the shore, and you’ll keep having to find new sand to replace it.

Sand seems like an infinite resource, but it isn’t. You can’t put just any kind of sand on a beach. Forget about the thousands of miles of dunes in the Sahara and Gobi — rounded by wind, those grains are too smooth. Sand made by crushing rock is too jagged. Stone worn down by rivers and waves over millennia is ideal, but even then, it has to be the right type. If the grains are too small, they wash away quickly; too large, and the beach becomes a steep bank. If they’re the wrong density or wrong shape — say, plate-like shards of broken shells — they’ll float in the water, clouding it. If the sand is too dark it will trap heat, and can shift the gender of sea turtles born there. "You want to match the native sand as close as you can," says Kevin Bodge, a coastal engineering consultant. "That sand was there for a reason."

Tremendous amounts of ocean sand get used for land reclamation and construction. Countries use it to extend their borders, like Singapore and China, which has built seven new islands in the South China Sea. Billions of tons of sand get poured into concrete. A United Nations report on sand shortages found that up to 60 billion tons of sand and gravel are mined each year, more than twice the amount moved by all the rivers in the world, which the report notes makes "humankind the largest of the planet’s transforming agents with respect to aggregates."

The United States has lined its coasts with over a billion cubic yards of sand, at a cost of $8.6 billion, according to a database maintained by Andy Coburn at Western Carolina University’s Program for the Study of Developed Shorelines. All that sand inevitably washes back into the sea. Sometimes waves bring it back, but for the most part, it’s lost to us; if it’s sucked out past a certain depth, it’s scattered along the continental shelf, too dispersed to be gathered back.

With sea levels rising, demand for beach sand is only going to grow. About 57 percent of the coast in the lower 48 states is already eroding, according to the USGS. "Every single coastal erosion problem we have right now is only going to get worse, not better," Young says. "It’s only going to erode faster, not slower, require more sand, not less." Gradually now, but soon overwhelmingly, every coastline is going to want to move inland. Young foresees a future of rising costs and conflict over diminishing sand. "If you want to invest, buy a dredge."

No state requires more sand than Florida, which sits in the middle of hurricane alley and has the longest coastline after Alaska. Half of the 825 miles of beaches monitored by the state’s Department of Environmental Protection are designated as critically eroding, from Daytona Beach to the Kennedy Space Center on Cape Canaveral to the shore in front of Mar-a-Lago, the Palm Beach estate of President-elect Donald Trump.

On July 31st, 2015, the Army Corps released a plan for patching eroding sections of Miami Beach. Miami-Dade’s sand resources had been exhausted, the Corps wrote, and some of the best alternatives lay to the north, offshore of Martin and St. Lucie counties. Though the shoals were in federal waters and the northern counties had no greater right to them than anyone else, they viewed the sand as theirs, and with the Corps’ announcement began the latest skirmish in what local officials call "the sand wars."

State Senator Joe Negron, whose district includes parts of Martin and St. Lucie, swore that Miami-Dade "wouldn’t get a single grain." Frannie Hutchinson, a St. Lucie commissioner, demanded the Corps "take its shovels and buckets and go home." She filed 15 public comments on the Corps’ proposal, saying that it failed to address sea level rise and would rob St. Lucie of needed sand. The county erosion chair for 14 years, Hutchinson says that she cringes every time she sweeps dirt out of her house. "Do you know how much sand is in there? You can’t replace sand."

There was a sense, in council meetings and public statements, that Miami Beach was reaping what it sowed, and that with the sea rising, it was every county for itself. "They’ve squandered their sand, they’ve overdeveloped, they’ve depleted their resources and now they want to come and take ours," says Sarah Heard, a Martin County Commissioner. "We need to protect that offshore site, we need to guard it very carefully. We don’t know exactly how sea level rise is going to impact us, but we know it’s accelerating rapidly, we know there’s going to be inundation."

Heard is a Republican, but laments her party’s denial of climate change. (Last year, the Florida Center for Investigative Reporting found that the state's governor, Rick Scott, forbade state officials from using the term in emails or reports.) Jacqui Thurlow-Lippisch, another Martin County commissioner who objected to the Corps plan, is also a Republican, and also clear-eyed about what rising seas will do to her community. Just as there are proverbially no atheists in foxholes, it’s increasingly difficult to be a local politician in coastal Florida and deny the sea is rising.

Yet what to do about it at a local level is a conundrum. Right now, the answer is to keep piling on more sand. Thurlow-Lippisch describes nourishment as a loop her town is trapped in: the most expensive property is on the beach, she says, and letting it fall into the sea would rob her county of 30 percent of its tax base, making it impossible to fund schools, run buses, and provide lunches for children in need. Though she wonders whether she’s doing the right thing, she continues to fight for the sand that her community will eventually have to put on its shore. "We all have to look ourselves in the mirror and ask, is this a sustainable life? What are we doing here? But right now, we’re in it, we’re doing it."

As the northern counties lobbed angry missives at the Corps, one alternative kept coming up: the Bahamas.

The nearest Bahamian islands are just 50 miles east of Miami. The sand grains there aren’t rock, but orbs of calcium carbonate called aragonite, which some scientists believe is formed by bacteria as deep ocean water moves into the warm, shallow banks of the Caribbean. The exact process that produces the sand is poorly understood, says Lisa Robbins, an oceanographer who studies it, and occurs in only a few other places in the world, such as the Arabian Gulf.

One thing is clear: it’s premium stuff. "They’re not only mysterious, they’re gorgeous, and wonderful to step on," Robbins says of the grains, which she likens to "little pearls." The sand is so white that when coastal engineer Kevin Bodge brought in a barge’s worth in the early ‘90s for Fisher Island, a wealthy community willing to pay for it, the customs official looked on incredulously.

"It was 1991," Bodge recalls, "the height of the Miami Vice thing, so we had to clear customs, and it came in on a barge and when the sun hit that thing coming over the horizon in the early morning light, it was the most incredible pile of gleaming white powder I’ve ever seen. The customs agent just looked at me and said, ‘You gotta be kidding me.’"

by Josh Dzieza, The Verge |  Read more:
Image: John Francis Peters

What About the Fathers?

As the author of two books about low-income single mothers, I often give talks or appear on call-in shows. Audiences always want to know about the men single mothers have children with. They ask me, “Why don’t you talk to the dads? What about the fathers?”

I used to brush the question aside. After all, I had spent years living and talking with black, white, and Hispanic single mothers in some of the nation's toughest urban neighborhoods in Philadelphia, Chicago, the deep South, and the West Coast—10 cities in all. I thought I had learned everything there was to know about these men from the moms. Besides, didn't everyone know the guys were irresponsible? That they really didn't care about the kids they conceived? In 2008, even presidential candidate Barack Obama was calling them out, saying they had better stop acting like boys and have the courage to raise a child, not just create one.

Finally, fellow researcher Tim Nelson and I began actually talking to these men—more than 100 low-income noncustodial dads living in poor neighborhoods in the Philadelphia area. As it turns out, “everyone” wasn’t right. We were all dead wrong—me, the country, and even Barack Obama.

After several years of interviewing, observing, and living among these fathers, I’ve learned that not caring about their children is not the problem. Our 2013 book, Doing the Best I Can: Fatherhood in the Inner City, reveals that these men desperately want to be good fathers, and they are often quite intensively involved in the early years of their children’s lives. Yet they usually fail to stay closely connected as their kids grow older.

If lack of caring isn’t the problem, then what is? To answer that question, we have to start with how their relationships form.

Romance in the inner city typically proceeds quickly. Just six or seven months after they first begin “kicking it,” most of these couples “come up pregnant.” Usually neither he nor she explicitly plans to have a baby, but neither of them does much to avoid pregnancy, at least not for long. Inner-city youth often view condoms as a method of disease prevention, not contraception. They believe that ongoing condom use says you don’t trust your partner to be faithful, so as soon as there is a kernel of trust, the condom stays in the drawer—a ritual marking the transition to a more serious relationship.

Pretty soon, the women are skipping doses of the pill or letting the patch or other forms of contraception lapse. Why? In these communities, motherhood often exerts a strong pull on young women’s hearts and minds and weakens their motivation to avoid pregnancy. Being a mom serves as the chief source of meaning and identity in neighborhoods where significant upward mobility is rare. She realizes that her circumstances aren’t ideal, so she doesn’t explicitly “plan” to get pregnant. But she’ll readily admit that it wasn’t exactly an accident either. She’ll say she knew full well where unprotected sex would lead.

For their part, the men typically say they “just weren’t thinking” about the possibility of pregnancy when conception occurs. Yet contrary to the hit-and-run stereotype of the deadbeat dad, 7 times out of 10, men’s reaction to the news of a pregnancy is happiness—even downright joy. In fact, we found they are more likely to be happy than the mothers are! Andre Green, still in high school, told us he shouted “Thank you, Jesus!” when he heard the news, even though he and the would-be mother were no longer together.

What accounts for this strong, latent desire for kids among young people who can ill-afford to support them? Here, context is key. Andre Green and his peers are coming of age in some of the most violent and poverty-stricken neighborhoods in America. Their lives are marked by trauma. Just months before Andre learned that he was about to become a dad, his brother was murdered, and his mother turned to drugs as a salve. Like Andre, many men we spoke with described their lives up to that moment with a single word: “Negativity.”

In this context, a baby—fresh and innocent—is pure potential, a chance to move away from the mistakes of the past and turn to activities that are wholly good. Celebrating those precious first words and first steps. Spending the night soothing a fussy teether. Carefully fixing a little girl’s hair. For middle-class teens coming of age on Philadelphia’s affluent Main Line, early pregnancy ruins lives—a bright future snuffed out, or at least diminished. But if you’re already at the bottom, a baby means something else entirely.

As I’ve said, poor women find meaning in motherhood when sources of meaning are in short supply. But what we often fail to appreciate is how large the rewards of fatherhood also can be for men in extraordinarily challenging circumstances. Seven White, who conceived his first child at 17, told us, “I couldn’t imagine being without them, because when I am spending time with my kids it is like, now that is love! That is unconditional love. … It is like a drug that you got to have. I would never want to be without them.” (...)

In this corner of America, pregnancy is often the impetus for a relationship, not the outgrowth of one. He and she usually become a “couple” only after a baby is on the way. Shotgun relationships have replaced the shotgun marriage. Yet as the time bomb of pregnancy ticks, men rarely flee. Instead, they try mightily to “get it together for the baby”—the “it” being the relationship with the woman who is about to become their child’s mother. In fact, when the baby enters the world, more than 8 in 10 men are still together with the mother. Yet due to their laissez-faire route to conception, they may not really know their kid’s mom very well when the child is born.

Those first, very tough months of being new parents put these fragile relationships under tremendous strain, made worse by a lack of money. With hardly any shared history to draw on, is it any wonder that half of these couples break up before their child’s first birthday? Even the relationships of middle-class married couples are often tested when a baby comes into the picture. Usually though, they can draw on the trust generated by the years they’ve already shared when those hard times hit.

Some readers might wonder, “Why don’t they get married?” The young couples we interviewed certainly aspire to marriage. In fact, they revere it. But they strongly reject the idea that a hasty wedding is a good idea. Isn’t it better, they reason, to wait until they can get their finances in order and be sure the relationship is strong? Why get married if you’re just going to get a divorce? For them, this would merely make a mockery of a sacred institution. For reasons I’ve outlined above, most of these relationships soon fail their own test.

After the breakup, inner-city dads firmly believe that a shattered couple bond should not get in the way of a father’s relationship with his most precious resource: his kid. They’re not just out to claim status with their peers by getting women pregnant. They long to engage in the father role. But the young men we spoke to have tried to redefine fatherhood to fit their circumstances.

All fathers across America, rich and poor alike, have avidly embraced fatherhood’s softer side. Imparting love, maintaining a clear channel of communication, and spending quality time together are seen as the keys to being a good dad. This “new father” model, which spurred middle-class men to begin changing diapers several decades ago, has gained amazing traction with disadvantaged dads in the inner city, perhaps because it’s the kind of fatherhood they can most easily afford. But while middle-class men now combine these new tasks with being breadwinners, low-income fathers who face growing economic adversity are trying to substitute one role for the other.

Here is the problem: Neither society nor their children’s mothers are willing to go along with this trade-off. Love and affection are all fine and good, but who’s going to pay the light bill? What about keeping the heat on? If a child’s father can’t provide money, the attitude goes that he’s more trouble than he’s worth. Why strive to make sure he stays involved with the kids?

But we’re wrong about that too. From the kid’s point of view, it is hard to make up for the loss of a parent. When a single mom in the inner city feels her kid’s father has failed to provide, there is an enormous temptation to “swap daddies,” pushing the child’s dad aside while allowing a new man—perhaps one with a little more going for him economically—to claim the title of father. These moms are often desperate to find a man who can help with the bills so they can keep a roof over their kid’s head. The problem is that these new relationships may be no more stable than the old ones.

When a mom moves from one relationship to another—playing gatekeeper with the biological father while putting her new boyfriend into the dad’s role—she puts her kids on a “father-go-round.” In the end, will any of these men have the long-term commitment it takes to put these kids through college?

Meanwhile, the biological fathers themselves end up on a family-go-round, having kids by other women in a quest to try to get what they long for—the whole father experience. Each new child with a different mom offers another chance—a clean slate. With eagerness, they once again invest every resource they can muster in service of that new fragile family. But while succeeding with a new child, they often leave others behind. So, while they are good dads to some of their children, they end up being bad dads to others.

by Kathryn Edin, The Shriver Report |  Read more:
Image: Amazon

Ignore the Snobs, Drink the Cheap, Delicious Wine

So-called natural wines have recently supplanted kale as the “it” staple of trendy tables — the “latest in holier-than-thou drinking,” according to The Financial Times. Farmed organically and made with minimal intervention, the wine in these special bottles is not to be confused with what one natural wine festival called “industrialized, big-brand, manufactured, nothing-but-alcoholic-grape-juice wines.” In other words, what most of us drink.

The mania for natural wine has puzzled many: How can wine, presumably a simple mix of grapes and yeast, be unnatural? Yet when it comes to sub-$40 wines — the sweet spot for American drinkers, who spend an average of $9.89 per bottle — the winemaking process can be surprisingly high-tech. Like the Swedish Fish Oreos or Dinamita Doritos engineered by flavor experts at snack food companies, many mass-market wines are designed by sensory scientists with the help of data-driven focus groups and dozens of additives that can, say, enhance a wine’s purple hue or add a mocha taste. The goal is to turn wine into an everyday beverage with the broad appeal of beer or soda.

Connoisseurs consider processed wines the enological equivalent of processed foods, if not worse. The natural winemaker Anselme Selosse maintains that chemical futzing “lobotomizes the wine.”

But they are wrong. These maligned bottles have a place. The time has come to learn to love unnatural wines.

As a trained sommelier, I never expected to say that. I spent long days studying the farming practices that distinguish the Grand Crus of Burgundy and learning to savor the delicate aromas of aged Barolos from organic growers in Piedmont. Yellow Tail, that cheap staple of grocery stores and bodegas, was my sworn enemy.

When Treasury Wine Estates, one of the world’s largest wine conglomerates, invited me to California for a rare view into how its inexpensive offerings are — in industry parlance — “created from the consumer backwards,” I was prepared to be appalled. Researchers who’d worked with Treasury spoke of wine “development” as if it were software or face cream. That seemed like a bad sign.

Then I learned Treasury had parted from the tried-and-true method of making wine, in which expert vintners create bottles that satisfy their vision of quality. Instead, amateurs’ tastes were shaping the flavors.

I watched this process unfold in a cramped conference room where Lei Mikawa, the head of Treasury’s sensory insights lab, had assembled nearly a dozen employees from across the company. First, Ms. Mikawa had the tasters calibrate their palates, so they shared a consistent definition of “earthy” or “jammy.” In a few days, they would blind taste 14 red wines and rate the flavors of each. (The samples usually include a mix of existing Treasury offerings, unreleased prototypes and hit wines that the company may hope to emulate.) Next, approximately 100 amateurs from the general public would score the samples they liked best. By comparing the sensory profile of the wines with the ones consumers most enjoyed, Ms. Mikawa could tell Treasury what its target buyers crave.

Maybe they’d want purplish wines with blackberry aromas, or low-alcohol wines in a pink shade. Whatever it was, there was no feature winemakers couldn’t engineer.

Wine too full of astringent, mouth-puckering tannins? Add Ovo-Pure (powdered egg whites), isinglass (fish bladder granulate) or gelatin. Not tannic enough? Replace $1,000 oak barrels with stainless steel and a bag of oak chips (toasted for flavor), tank planks (oak staves), oak dust (what it sounds like) or a few drops of liquid oak tannin (pick between “mocha” and “vanilla”). Cut acidity with calcium carbonate. Crank it up with tartaric acid. When it’s all over, wines still missing that something special can get a dose of Mega Purple, a grape-juice concentrate that has been called a “magic potion” for its ability to deepen color and fruit flavors.

More than 60 additives can legally be added to wine, and aside from the preservative sulfur dioxide, winemakers aren’t required to disclose any of them.

This should have been the ultimate turnoff. Where was the artistry? The mystery? But the more I learned, the more I accepted these unnatural wines as one more way to satisfy drinkers and even create new connoisseurs.

For one thing, winemaking has long fused art with science, even if that’s not the story drinkers are told. Ancient Romans doctored their wines with pig’s blood, marble dust, lead and sulfur dioxide. Bordelaise winemakers have been treating their wines with egg whites for centuries. And though the chemicals dosed into wine can sound alarming, some, like tartaric acid, already occur naturally in grapes. The only difference is that today’s winemakers can manage the process with more precision.

by Bianka Bosker, NY Times |  Read more:
Image: Sébastien Plassard

Friday, March 17, 2017

Bare Necessities

Fun, in Prudhoe Bay, Alaska, is a calendar event. Out here, on the largest and most remote oil field in the United States, thousands of workers rise each morning in endless summer, eternal darkness, mosquitos, and snow, to begin twelve-hour shifts, which on the drilling rigs requires a discipline that is taken seriously: a mistake, however small, could cause this entire place to explode, as it did in West Texas two years ago, or in Texas City twelve years ago. For a change of landscape one can board a bus with elderly tourists to the edge of the Arctic Ocean, point out the artificial palm tree, suggest a dip, and laugh—the water is 28 degrees—but even that route grows dull: the single gravel lane that traces tundra abuts miles of pipeline. For the oil workers, there is little to look forward to before the end of a two-week shift except for scheduled socialization. Each summer, such fun goes by the name Deadhorse Dash, a 5K race traced across nearby Holly Lake.

“Last year, someone dressed up as a dancing polar bear,” Casey Pfeifer, a cafeteria attendant, tells me when I arrive at the Prudhoe Bay Hotel for lunch on the afternoon of the race. Casey is wearing purple eyeliner and a sweatshirt that reads MICHIGAN in looping gold-glitter cursive. Every two months Casey travels between Idaho and Prudhoe Bay—for her, life in Alaska is synonymous with adventure—to work in the service industry at places like the Hotel, which is not actually a hotel at all but a work-camp lodge, with hundreds of tiny rooms housing twin-size cots and lockers. Casey smiles at me from behind her warming tray and I feel cozy, despite the dirt and dust clinging to my skin. The fluorescent lights illuminate her golden hair, which is tucked into a sock bun, and she tongs a sliver of battered cod. “Picture it,” Casey says. She sways her butt to the sound of nothing. “This giant bear, and he is grooving.”

I picture an enormous mascot gyrating to the Backstreet Boys. It is not my idea of fun, but I am an outsider. I had arrived on the North Slope only the day before, seeking a week in the most isolated community in America and what I hoped would be storybook Alaska: purple arching Coho salmon, caribou, moose, air that belongs in a breath-mint commercial. Instead I found square buildings like so many others, and a cafeteria just like that of a high school, with wheels of cheesecake and racks of chips. How normal everything felt. At an empty table, I watch workers lay playing cards out in front of them. Behind them, mounted televisions loop the Steve Harvey Show and Maury, The Price Is Right and Dr. Phil. Workers in heavy coveralls spoon cubes of honeydew onto their plates, consider the merits of the cacciatore, and pile their bowls with limp linguini. They puff their cheeks like chipmunks, gearing up, they joke, for what would no doubt prove a feat of monumental athleticism.

“The calories aren’t expended in the walking,” one worker tells me, reaching into a basket of Little Debbie Swiss Cake Rolls. I watch as his hands, the largest I have ever seen, raise the cakes to his mouth. He consumes them whole, parting his lips dramatically—wet pink petals, upon which the skin blisters, burned by Arctic sunlight. His name, he says, is Jeff Snow, but he goes by Snowman. He earned the nickname in the dead of winter, because up here, he comes alive: a redneck, forklift-driving Frosty the Snowman, made animate by extremes.

“The real work tonight is swatting the mosquitos,” Snowman says. He rolls his eyes, he laughs.

“The Deadhorse Dash is mostly bullshit. But it’s the sort of bullshit you look forward to.”

According to posters fixed to the cafeteria’s white-painted cinderblock walls, participants are to meet at six o’clock by the biggest warehouse in the stretch, owned by Carlisle Transport. The evening would start with a few minutes of mingling, during which men with binoculars would scan the horizon for polar bears. “They rarely come in this far in summer,” Snowman says, “but better alive than dead and sorry.” Once our safety is assured, we would set out across the tundra, tracing a two-mile stretch from one edge of Holly Lake to another across a landscape normally restricted to oil-field employees and suppliers who hold the highest level of security clearance. At the halfway checkpoint, marked by a folding table, we would collect a token, redeemable at the finish line for a burger, a handful of chips, a chocolate-chip cookie wrapped in thin plastic, and our choice of apple or banana.

An Arctic picnic in eternal summer.

“It’s a privilege, really,” Casey says. There is a home-away-from-home feeling here, she explains. But still, one passes most days as if a zombie. You rise, you work, you eat, you go to bed, repeat. Mostly, life on the North Slope is spent waiting to return to life in the Lower 48 and, with it, a return to children’s birthday parties, dinners with the spouse, backyard barbecues, and the simplicities of normal life: a fishing line, unfurled and bobbing red above a riverbank in Idaho.

“Put it to you this way,” Snowman says. “We don’t do anything up here but work and sleep and eat. So shit like this means an awful lot.”

by Amy Butcher, Harper's |  Read more:
Image: Amy Butcher

Thursday, March 16, 2017

The Lessons of Obamacare

What Republicans should have learned, but haven't.

On January 6, President Barack Obama sat down with us for one of his final interviews before leaving the White House. The subject was the Affordable Care Act — the legislation that has come to carry his name and define his legacy.

These were strange circumstances for Obama to find himself in. He was leaving office an unusually popular president, with approval numbers nearing 60 percent. But his most important domestic achievement was imperiled. Republicans had spent years slamming Obamacare for high premiums, high deductibles, high copays, and daunting complexity. Donald Trump had won the White House in part by promising to repeal the ACA and replace it with “something terrific.” Both houses of Congress would be controlled by Republicans who appeared set to carry out his plan.

But over the course of the next 70 minutes, it became clear that Obama didn’t think they would get the job done. If he sounded unexpectedly confident, it’s because he believed the wicked problems of health reform — problems that bedeviled him and his administration for eight years — would turn on the GOP with equal force.

“Now is the time when Republicans have to go ahead and show their cards,” he said. “If in fact they have a program that would genuinely work better, and they want to call it whatever they want — they can call it Trumpcare or McConnellcare or Ryancare — if it actually works, I will be the first one to say, ‘Great; you should have told me that in 2009. I asked.’”

Two months later, the release of House Republicans’ replacement plan — the American Health Care Act — has made Obama look prescient. The bill quickly placed Republicans under siege from both the left, which has found more to like in Obamacare as its survival has become threatened, and the right, which attacked the replacement as unrealistic and ill-considered, and, most damning of all, as “Obamacare 2.0.”

The biggest problem Republicans face, though, isn’t from activists in either party. It’s from the tens of millions of Americans who now depend on Obamacare, and their friends, families, co-workers, and neighbors. They have been promised a replacement that costs less and covers more, and the GOP’s plan does neither.

According to the Congressional Budget Office, the AHCA would throw 24 million people off health insurance over the next 10 years and leave those who remain in plans with higher deductibles, higher copays, and less coverage. The law would let insurers charge older Americans five times as much as younger Americans, and its sparer subsidies wouldn’t adjust to the local cost of insurance coverage, and thus would be insufficient in many areas. This is not the “something terrific” Trump promised, nor the kind of health care that polling shows Americans want.

We are reporters who have covered health care, and the legislative ideas that became the Affordable Care Act, since before Obama’s election. In the course of that reporting, including recent conversations with Obama and dozens of elected officials and staffers responsible for the Affordable Care Act’s design, passage, and implementation, we have unearthed several lessons from the law, which current and future health reformers should heed.

At the moment, Republicans are ignoring most of them.

Lesson 1: Everything in health care is a painful trade-off. Own it.

Obama had a habit, back in meetings during the Affordable Care Act’s drafting, his former advisers recall. He would start twisting an invisible Rubik’s cube in the air, working his hands around to try to make the pieces fit together just right.

This was what health policy felt like: trying to slot together competing priorities in a way that was just as maddening as trying to get the colored sides of a Rubik’s cube to line up.

Any government health coverage expansion involves a series of trade-offs, decisions that will inevitably anger one constituency or another. Provide robust health insurance plans, for example, and you need to spend more money — if you don’t, you must decide to cover fewer people. Provide skimpier coverage, and the price tag of a health insurance expansion goes down, but people get frustrated with their high deductibles and copays.

Change the system so one group pays less, and another group, inevitably, has to pay more.

“Those trade-offs have bedeviled efforts to expand health insurance coverage for decades,” says Doug Elmendorf, who directed the Congressional Budget Office during the health law debate. “It is very hard to maximize health coverage while minimizing the cost to the government and disruptions to current insurance arrangements.”

The most important part of writing health policy isn’t figuring out a way around those trade-offs, although many legislators have tried. It’s making the trade-offs that will lead to the best outcomes, and explaining those clearly to constituents. The Obama administration knew from the start that it wanted to make health insurance more accessible to those who had traditionally struggled to get covered: people who are sicker, older, and poorer — and who did not have access to employer-sponsored coverage. Democrats didn’t just want to get millions covered. They had specific demographics in mind they wanted to benefit.

“If you replace a 60-year-old with a 20-year-old, that doesn’t change the number of people covered, but it changes the value of the coverage and of the program,” says Jonathan Gruber, an MIT economist who helped the White House model the economic effects of Obamacare.

Democrats had to make very clear trade-offs to advantage this older, sicker population.

For example, the law limits the premiums that insurers can charge their oldest consumers to just three times what they bill the youngest enrollees. The Affordable Care Act also mandates that insurers cover 10 “essential health benefit” categories. These include medical care that plans in the individual market have historically left out, such as mental health services and maternity care.

These changes were great for those who were older and required significant medical care. But bringing unhealthy people into the market is difficult, “because it requires the healthy people who had a sweet deal in the past to pay higher rates,” says former Health and Human Services Secretary Kathleen Sebelius. “There is no question that some people’s rates went up, but the old market didn’t work very well for the majority of people who needed coverage.”

Democrats’ trade-off brought consequences, one of them being that the health care law has long struggled to attract as many young people as the White House would like. Back in 2012, administration officials told us they wanted one-third of the marketplace enrollees to be between 18 and 34. The number has never gotten there, hovering around one-quarter for the past four years.

Zeke Emanuel, who worked as one of President Obama’s health care advisers, says the administration tilted the playing field too far in favor of the sick and elderly, making it difficult for young people to sign up. He says the administration should have let insurers charge older people more, perhaps four times as much as the youngest consumers.

“We made the wrong trade-off,” he says. “The consequence is costs for old people are higher because we don’t have enough young people in the pool.”

Veterans of the 2009 health care fight have dozens of stories about different trade-offs they had to make, ones that would anger different constituencies. The administration was constantly trying to balance the desire to expand coverage to as many people as possible against the commitment to keeping the package revenue-neutral. It faced outside pressure from hospitals and insurers, who some thought might turn their backs on the effort if it didn’t bring tens of millions of Americans into the health insurance system.

“Almost every aspect of the bill was inextricably linked,” says Nancy-Ann DeParle, one of Obama’s top health care advisers. “Every time we tweaked the subsidies or the individual mandate penalties, CBO had to re-estimate the bill to see how it affected coverage. If CBO said that coverage decreased, that was a big problem, because the hospitals’ support for the bill was contingent on getting a high percentage of the uninsured covered.”

Trump’s own ideas about health policy do not seem to grapple seriously with these trade-offs. He repeatedly talks about covering more people at a lower cost but has offered no plan to do so.

The American Health Care Act, however, lays these issues bare. It makes different trade-offs than the ones that Democrats made. The bill would change the rules of the individual market to advantage people who are younger, healthier, and higher-income — but disadvantage people who are older, sicker, and poorer.

AHCA, for example, would allow insurers to charge the oldest enrollees five times as much as the youngest enrollees. It would allow insurers to sell less robust health insurance plans that cover a smaller percentage of enrollees’ costs.

The results are particularly grim for older, poorer enrollees — many of whom vote Republican. According to the CBO’s analysis of the plan, a 64-year-old making $26,500 would see his premiums rise by 750 percent under the AHCA. But not only are Republicans refusing to own that trade-off — they’re refusing to own any trade-offs.

“Nobody will be worse off financially,” promised Health and Human Services Secretary Tom Price in a recent Meet the Press appearance. That’s a promise no plan could keep, but Republicans have now made it in public, and it will be played back in ad after ad after ad.

Democrats learned, over months of hard work, that there was no free lunch in health policy. Republicans are now beginning to run into the same difficult truth: Every new winner in health care comes with a new loser.

by Sarah Kliff and Ezra Klein, Vox | Read more:
Image: uncredited

Crocodile Tears: The Logo Battle Between Lacoste, Izod & Crocodile Garments

Image: Lacoste S.A.

The Revolution Will Not Be Curated

“The era of the curator has begun,” declared the prominent art critic Michael Brenson in 1998. The figures who assembled artworks into galleries, he reasoned, were now “as essential” to exhibits as the artists themselves. Curators were a species of universal genius who “must be at once aestheticians, diplomats, economists, critics, historians, politicians, audience developers, and promoters,” Brenson wrote. “They must be able to communicate not only with artists but also with community leaders, business executives, and heads of state.” And what a curator “welcomes or excludes” is what makes all the difference.

Whatever else we might think of this assertion, it was certainly prescient. Today the era of the curator is in full flower. The contemporary literature about the heroic organizer of exhibitions is large and enthusiastic, with adulatory new installments added all the time. In 2006, a prominent art writer saw a generation of bold young curators “armed with a vision of possibility and an image of the curator as a free agent, capable of almost anything.” In 2012, the New York Times marveled at the growing number of “programs in curating studies” and at how certain curators established themselves as “star names” in the art world. Brightest among these stars, without a doubt, is one Hans Ulrich Obrist, a curator at the Serpentine Gallery in London, the author of Ways of Curating and A Brief History of Curating, and the closest thing there is to an art-world superstar these days, his every taste-quirk fawned over by the press. (...)

And, of course, “curating” describes something that websites are supposed to do. It is the new and more benign word for what a short while ago was called “aggregating,” or what a less pretentious person might call “editing” or “sifting.” The web is a vast, chaotic, onrushing thing, the idea goes, and “curators” promise to sort it all out for us, welcoming and excluding as they see fit. That’s why what goes on at Pinterest and Tumblr and Instagram and Digg is often called “curating.” Above all, curating is what takes place at Facebook, where busily sifting “news curators” used to choose stories to be included in the hotly desirable “trending” category. (...)

What is a curator, and why is it the admired cultural position of the moment? Why is this the word that springs to our tongues today when once we would have said “DJ,” or “blogger,” or “expert,” or just “snob”? And why is it persistently associated with liberals?

Consider the most basic aspect of the word as we use it today. A curator is an arbiter, someone who distinguishes between what is good and what is bad. Curators tell us what to welcome and what to exclude, what to keep and what to toss. They make judgments. They define what is legitimate and what is not.

But curators don’t make these judgments subjectively or out of the blue, as would chefs or gourmands or other sorts of fussy people. No, curators are professional arbiters of taste and judgment, handing down their verdicts on news stories or pot roasts from a position of dignity and certified authority.

The word is deeply associated with academic achievement. Gallery curators are often people with advanced degrees, and “curation” and its variants are sometimes used to describe certain kinds of university officials. The highest officers of the University of Missouri, for example, are called curators, and at Bennington College, even prospective students are encouraged to think of themselves as curators—curators, that is, of their applications to associate with this illustrious institution. As Bennington’s magazine puts it, they are invited to “curate their submissions and engage in the admissions process as a learning experience.”

It’s all about social status, in other words, and the eternal desire of Americans to claw their way upward by means of some fancy-sounding euphemism. Back in the 1980s, English professor Paul Fussell set down a list of occupations that had contrived to class themselves up by adopting longer names that sounded more professional. “In many universities,” he wrote,
what used to be the bursar is now the disbursement officer, just the way what used to be an undertaker (already sufficient as a euphemism, one would think) is now a funeral director, an advance of two whole syllables. . . . Selling is raised to retailing or marketing, or even better, to merchandising, an act that exactly doubles its syllables, while sales manager in its turn is doubled by being raised to Vice-President, Merchandising. The person on the telephone who used to provide Information now gives . . . Directory Assistance, which is two syllables grander.
And so experts of every kind have in our time been promoted to curators, which is not just a longer word but one that carries grand professional implications.

Curatolatry also imparts a certain smiling friendliness to expertise. Long ago, a “curator” was a medical worker of some indeterminate sort—someone charged with curing, basically. And even today we can see that curators do what they do not because they are greedy or snobbish but because they want to nurture the public. This is especially important as scandals ripple through profession after profession: accounting, appraising, investment banking, medicine, and so on. A curator would never use monopoly power to gouge users of some prescription medicine, for example. They care about us too much. They are not dictators; they are explainers, to mention another occupation that is much in vogue nowadays. They just want to help you to learn and understand. They are authority figures, yes, but they are lovable and benevolent ones.

And they are infinitely adaptable. Curatorial authority can be counted on to fine-tune your taste in food, your news consumption—even, in all likelihood, your ideological worldview. And with every sphere of American experience so promiscuously aestheticized, the curatorial reflex can be understood as something that a benevolent class of tastemakers and enlightened celebrities is selflessly undertaking for your own good. That’s why, for example, self-appointed celebrity pundits such as Lena Dunham and Alec Baldwin claim improbable perches in the protest culture of liberalism—and why Meryl Streep, who laid into Trump on the press’s behalf at the Golden Globe awards, has acquired the status of a twenty-first-century Edward R. Murrow.

by Thomas Frank, The Baffler |  Read more:
Image: Lindsay Ballant

Malcolm Gladwell Wants to Make the World Safe for Mediocrity

On black identity in the Caribbean and the United States

COWEN: There’s a discussion that Sylvia Wynter, the Jamaican intellectual, offered in the year 2000, and I’d like your opinion on this. She said there’s something special about the United States: that in Jamaica, or in many parts of the Caribbean more broadly, being middle class can in some way counter the fact of blackness socially, and serve as a kind of offset. But she said about the United States, and here I quote, “The US itself is based on the insistent negation of black identity, the obsessive hypervaluation of being white.” Do you think that’s an accurate perspective?

GLADWELL: Well, yeah, there is something . . . well, I hesitate to say under-theorized, but there is something under-theorized about the differences between West Indian and American black culture, the psychological difference between what it means to come from those two places. I think only when you look very closely at that difference do you understand the heavy weight that particular American heritage places on African-Americans. What’s funny about West Indians is, they can always spot another West Indian. And at a certain point you wonder, “How do they always know?” It’s because after a while you get good at spotting the absence of that weight.

And it explains as well the well-known phenomenon of how disproportionately successful West Indians are when they come to the United States because they seem to be better equipped to deal with the particular pathologies attached to race in this country — my mother being a very good example. But of course there are a million examples.

I was just reading for one of my podcasts; I’ve been reading all these oral history transcripts from the civil rights movement. I was reading one today and I’m halfway through. And I had that completely unbidden thing, “Oh, this guy’s a West Indian.” He was an African-American attorney and a civil rights lawyer in Virginia in the ’60s. I got a 30-page transcript. I got to page 15, I’m like, “He’s West Indian.” And then, literally page 16, “My father came from Trinidad and Tobago with my mother and me.”

COWEN: [laughs]

GLADWELL: There is something very, very real there that’s not, I feel, fully appreciated.

COWEN: Another difference that struck me — tell me what you think of this — is that the notion of freedom for much of the Caribbean, it’s in some way more celebratory, and it’s more rooted in history, and it may be because these are mostly majority black societies. History is in a sense controlled; it’s much more commemorative. Does that make sense to you? It’s not a struggle to control the narration of history at a national level.

GLADWELL: Yes. You’re in charge of the narrative —

COWEN: Yes.

GLADWELL: . . . which is huge. I thought of this because I wanted to do — sorry, my podcast is on my mind — I wanted to do and I haven’t managed to figure out how to do it, but there’s a Jamaican poet called Louise Bennett. If you are Jamaican, you know exactly who this person is. She’s probably the most important colloquial poet. I think that’s the wrong word. Popular poet. And she wrote poetry in dialect. So for a generation of Jamaicans, she was an assertion of Jamaican identity and culture. My mother was a scholarship student at a predominantly white boarding school in Jamaica. She and the other black students of the school, as an act of protest, read Louise Bennett poetry at the school function when she was 12 years old.

If you read Louise Bennett’s poetry, much of it is about race. It’s about race where the Jamaican, the black Jamaican often has the upper hand. The black Jamaican is always telling some sly joke at the expense of the white minority. So it’s poetry that doesn’t make the same kind of sense in a society where you’re a relatively powerless minority. It’s the kind of thing that makes sense if you’re not in control of major institutions and such, but you are 95 percent of the population and you feel like you’re going to win pretty soon.

My mother used to read this poem to me as a child where Louise Bennett is . . . the poem is all about sitting in a beauty parlor, getting her hair straightened, sitting next to a white woman who’s getting her hair curled.

[laughter]

GLADWELL: And the joke is that the white woman’s paying a lot more to get her hair curled than Louise Bennett is to get her hair straightened. That’s the point. It’s all this subtle one-upmanship. But that’s very Jamaican.

On the subject of Revisionist History season two

COWEN: Now, to ask about your podcasts. I know some of them in the second season, they’ll be about the civil rights movement — in particular, the 1950s, which are a somewhat neglected time. I’ll throw out just a few possible forces that led America to start to become more integrated in the ’50s, and you tell me which you think are neglected or underrated.

One would be professional sports and Jackie Robinson starting to play baseball in the late ’40s. Another would be entertainers, a move toward having more black leads in movies and also music, say Chuck Berry or even James Brown. Harry Truman integrating the military, or the desire, for purposes of Cold War propaganda, to actually show this country is making some progress on civil rights issues. Which of those or which other factors do you feel are the ones we’re missing in understanding this history?

GLADWELL: If I had to rank those, army one. And I would say that the entertainment and sports . . . I would say that it was either neutral or worse than neutral.

COWEN: Why worse than neutral?

GLADWELL: Because I actually think if we were to take the long view, and we would look at this from a hundred years from now, we would say that . . . it is not unusual for minorities to first make their mark in sports and entertainment. You see it with Jews, you see it with Italians, you see it with Irish. But the thing that’s striking to me about those movements is they move in and out of those worlds pretty quickly. So the Jewish moment in sports is really quite short.

COWEN: Sure.

[laughter]

GLADWELL: Which is in retrospect not that surprising.

COWEN: Boxing especially.

GLADWELL: It’s like that long. The African-American moment in those transitional fields is really long; it continues to this day. And it’s almost to the point where you feel that what happens is, they move into those worlds and get stalled there. And their presence in that world accentuates and aggravates existing prejudice about their community as opposed to serving as a way station to a better place.

So, if your problem is that you’re facing a series of stereotypes about how you are intellectually inferior, how you have a broken culture, how you have . . . I could go on and on and on with all of the stereotypes that exist. Then how does playing brutally violent sports help you? How is an association, almost an overrepresentation in these various kinds of public entertainments advance your cause? I’m for those things when they’re transitional, and I’m against them when they seem like dead ends.

COWEN: How important a factor was the research of Mamie and Kenneth Clark? That’s some work that, had there been a Malcolm Gladwell at the time, would have been written up even more — the notion that when there’s segregation, people may value themselves or their race less. It seems that had a big impact on the Warren Court, on other thinking. What’s your take on their influence?

GLADWELL: Well, the great book on this is Daryl Scott’s Contempt and Pity. He’s a very good black historian at Howard [University], I believe. Yes, he’s the chair of history at Howard. And he has much to say, so I got quite taken when I was doing this season of my podcast with the black critique of Brown v. Board of Education. And the black critique of Brown starts with some of that psychological research because the psychological research is profoundly problematic on many levels.

So what Clark was showing, and what so moved the court in the Warren decision, was this research where you would take the black and the white doll, and you show that to the black kid. And you would say, “Which is the good doll?” And the black kid points to the white doll. “And which doll do you associate with yourself?” And they don’t want to answer the question. And the court said, “This is the damage done by segregation.”

Scott points out that if you actually look at the research that Clark did, the black children who were most likely to have these deeply problematic responses in the doll test were those from the North, who were in integrated schools. The southern kids in segregated schools did not regard the black doll as problematic. They were like, “That’s me. Fine.”

That result, that it was black kids, minority kids from integrated schools, who had the most adverse reactions to their own representation in a doll, is consistent with all of the previous literature on self-hatred, which starts with Jews. That literature begins with, where does Jewish self-hatred come from? Jewish self-hatred does not come from Eastern Europe and the ghettos. It comes from when Jewish immigrants confront and come into close conflict and contact with majority white culture. That’s when self-hatred starts, when you start measuring yourself at close quarters against the other, and the other seems so much more free and glamorous and what have you.

So, in other words, the Warren Court picks the wrong research. There are all kinds of problems caused by segregation. This happens to be not one of them. So why does the Warren Court do that? Because they are trafficking — this is Scott’s argument — they are trafficking in an uncomfortable and unfortunate trope about black Americans, which is that black American culture is psychologically damaged. That the problem with black people is not that they’re denied power, or that doors are closed to them, or that . . . no, it’s because that something at their core, their family life and their psyches, have, in some way, been crushed or distorted or harmed by their history.

It personalizes the struggle. By personalizing the struggle, what the Warren Court is trying to do is to manufacture an argument against segregation that will be acceptable to white people, particularly Southern white people. And so, what they’re saying is, “Look, it’s not you that’s the problem. It’s black people. They’re harmed in their hearts, and we have to usher them into the mainstream.”

They’re not making the correct argument, which was, “You guys have been messing with these people for 200 years! Stop!” They can’t make that argument because Warren desperately wants a majority. He wants a nine-nothing majority on the court. So, instead, they construct this, in retrospect, deeply offensive argument, about how it’s all about black people carrying this . . . and using social science in a way that’s actually quite deeply problematic. It’s not what the social science said.

by Tyler Cowen and Malcolm Gladwell, Medium |  Read more:
Image: Caren Louise Photographs