Friday, November 8, 2019

Away Fans are Taking Over the NFL

The Los Angeles Chargers returned to southern California on Sunday after playing the previous two weeks on the road, but it didn’t make much difference. Home-field advantage doesn’t really apply to the Chargers, not when visiting fans routinely make the team feel like they’re behind enemy lines in their own stadium. That was the case again on Sunday, when the Chargers hosted the Green Bay Packers. The predominant color in the stands was the green of the visitors, and the cheers rang out louder for Aaron Rodgers than for Philip Rivers. The home team won, convincingly at that, but most people left the stadium disappointed.

The stands were a sea of Packers green rather than Chargers blue when the teams met in LA on Sunday.

It has become one of the peculiar features of the NFL calendar since both the Chargers and Rams relocated to Los Angeles in 2017, marking a reunion between America’s second-largest market and its most popular sporting league: more often than not, the teams’ home games look and sound like home games for the opposition. Chargers players were showered with boos when they took the field against the visiting Philadelphia Eagles two years ago. The Rams got the same treatment last season at home against the Packers. Both Rivers, the Chargers quarterback, and Rams quarterback Jared Goff have regularly been forced to use a silent count to combat the noise generated by the away side’s fans, typically an unnecessary measure to take for a team playing at home.

“It’s certainly not ideal,” Rivers said with a hint of resignation after the 2017 game against the Eagles. The home-field hostility hit a fresh apex for both teams on the same Sunday last month. That afternoon, the Rams were overwhelmed on the field and in the stands, which were blanketed by the red of the visiting San Francisco 49ers. “This turned into a home game pretty quickly,” said San Francisco quarterback Jimmy Garoppolo after the game. “I’ve never seen anything like it.”

It hasn’t been quite as enjoyable for the ostensible home teams. A few hours later that day, the Chargers hosted the Pittsburgh Steelers, whose fans roared with approval when the stadium PA system blasted their team’s adopted anthem, Renegade by Styx. It was supposed to be a gag; the song eventually transitioned to Never Gonna Give You Up by Rick Astley, the punctuation to a long-running internet prank. But the joke didn’t land, and Chargers players were miffed.

“It was crazy,” Chargers running back Melvin Gordon said. “They started playing [the Steelers’] theme music. I don’t know what we were doing – that little soundtrack, what they do on their home games. I don’t know why we played that.” Chargers offensive lineman Forrest Lamp was more blunt: “We’re used to not having any fans here. It does suck, though, when they’re playing their music in the fourth quarter. We’re the ones at home. I don’t know who’s in charge of that, but they probably should be fired.”

The go-to line from Rams and Chargers brass is that it will take time to cultivate a true fan base in Los Angeles. Chargers owner Dean Spanos, who engineered the franchise’s move from San Diego after voters there rejected his bid for public funding of a new stadium, told the New York Times earlier this year that it will “take maybe a generation” for the team to find its footing in LA. On Tuesday this week, he was forced to deny rumors the team has discussed relocating to London.

by Tom Kludt, The Guardian | Read more:
Image: Jake Roth/USA Today Sports

Thursday, November 7, 2019

Julian Lage, Scott Colley, Kenny Wollesen



[ed. The most talented jazz guitarist working today (well, most talented young guitarist anyway).]

Why We Wish for Wilderness

Wilderness, the environmental historian Roderick Nash has argued, is not so much a place as an idea. Nash’s essential book Wilderness and the American Mind, which traced the long evolution of attitudes toward wild places from fear and avarice to awe and nostalgia, was submitted as a doctoral thesis at the University of Wisconsin in 1964, the same year that the Wilderness Act enshrined environmental activist Howard Zahniser’s somewhat whimsical legal definition of wilderness as “an area where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain.”

That anthropocentric definition—wilderness is wherever we’re not—has sparked plenty of debate in conservation circles about what types of places deserve protection, from whom, and for whose benefit. “[I]f nature dies because we enter it,” another University of Wisconsin environmental historian, William Cronon, proposed in 1995, “then the only way to save nature is to kill ourselves.” But the idea of wilderness as the absence of humans remains implicit in much of how we think and talk about wild places. Nash traced this conception back to the advent of herding, agriculture, and settlement 10,000 years ago, when lines both literal and metaphorical began to be etched into the land to delineate where human dominion started and stopped. And this still has a particular American resonance thanks to the lingering influence of the historian Frederick Jackson Turner’s Frontier Thesis, first proposed in 1893, which attributed the emergence of a hardy and distinct national character in the 1800s to the century-long process of conquering the wild and untrammeled West.

You can still see this idea play out in popular survival shows like Man vs. Wild. Bear Grylls and his peers parachute into barren hinterlands that are always completely devoid of all traces of human presence, and triumph over the malign forces of nature while relying only on fortitude, MacGyver-like cunning, and occasionally, when dehydration strikes, their own bodily fluids. The same vibe of self-reliance and unflappable omnicompetence runs through Shoalts’s books—like the time, on a trip deep into British Columbia’s Great Bear Rainforest, with no gun and no bear-repellent spray, he returns to his tent to find fresh grizzly claw slashes in the bark of a nearby cedar:
The only tools at my disposal were a hatchet, my knife, and a folding saw, along with some rope and paracord. With these tools I could fashion a sleeping platform between four trees a safe distance off the ground… Working quickly to beat the sunset, I had to shinny up each tree and lash together the strong sticks I had cut to create a platform between four hemlocks. After the frame was finished, I cut sticks that I would bind to the rectangular platform, creating a solid floor to sleep on. For protection against the rain, I made a roof out of my tarp and enclosed the sides with hemlock boughs. To make the floor comfortable, I laid moss and more hemlock boughs over the platform. My shelter finished, it served as a rather cozy abode for the next five nights.
Though undeniably entertaining, this adversarial approach to a human-less nature, implicit even in the title Alone Against the North, carries a lot of baggage. Most obviously, it assumes either that the lands to be explored are uninhabited, or that their inhabitants somehow don’t count. The writer and explorer Kate Harris, writing in The Walrus, mocked Shoalts for claiming to have “‘discovered’ waterfalls in [the Moose Cree First Nation’s] traditional territory (when he accidentally canoed over them, no less).” The Globe and Mail’s review similarly called out his elision of indigenous knowledge and his “misguided reverence for the lumbering spirit of European colonialism.”

My initial reaction to these criticisms was defensive—on both Shoalts’s behalf and my own. After all, Shoalts had clearly addressed these concerns pre-emptively with two distinct arguments. The first was geographical: unlike more densely populated areas farther south, Canada’s subarctic wilderness is both imponderably vast and all but uninhabitable. The Again River is located in the Hudson Bay Lowlands, a swampy, bigger-than-Minnesota wetland most notable for its polar bears and for having the highest concentration of bloodsucking insects in the world. Aboriginal peoples certainly ventured into the area along its major rivers, but, according to Shoalts, they considered it “sterile country,” and there’s little evidence of sustained pre-contact indigenous inhabitation. Given the near-nonexistent settlement, along with the fact that Canada has something like three million lakes—no one has ever succeeded in counting them properly—and innumerable rivers, creeks, and ponds, the math argues overwhelmingly against the notion that humans have visited every single one of these waterways.

Of course, as Shoalts himself acknowledges, there’s simply no way of knowing for sure whether anyone in previous centuries, let alone previous millennia, has ever visited a given place. But his second argument is that this doesn’t matter, because exploration isn’t just about hair-raising adventures or breaking new ground, but about “the generation of new geographical information that adds to humanity’s stock of collective knowledge.” If someone once paddled the Again but didn’t file any report of it, preferably with an august geographical society brimming with cabinets of yellowing files that date back to earlier centuries, then Shoalts has not been pre-empted in performing the task as he defines it.

On that narrow question of whether Shoalts’s trip down the Again represents some sort of geographical first, I believe he’s right. But the broader issues raised by critics left me uneasy. Do our romanticized vision of exploration and perhaps the very concept of wilderness itself require a mental erasure of the native experience—one that echoes their physical removal from the places we now treasure as national parks? Contrary to what William Denevan, another University of Wisconsin scholar, called “the Pristine Myth,” it’s now widely accepted that the pre-contact population of the Americas was vastly greater than once thought, with some estimates exceeding 50 million. The only reason early European settlers found the land seemingly empty was that as much as 95 percent of the indigenous population had already been wiped out by European diseases transmitted at the initial time of contact. Moreover, the “primeval” landscapes and fauna, from the Great Plains to the Amazon rainforest, didn’t exist in some untouched, Edenic original form; they had already been widely and deliberately modified by fire, agriculture, hunting, and other human activity.

All of this undercuts the stories I like to tell myself about why I love wilderness travel—that, beyond the clean air and nice scenery, I’m experiencing the planet as it was when “we” (a pronoun I leave deliberately vague) got here. In fact, when I look back at the series of wilderness travel articles I wrote for The New York Times a decade ago, what jumps out at me is the almost monomaniacal obsession with enacting Denevan’s myth by finding unpopulated places. Camped out in the Australian outback, I boasted that it was “the farthest I’d ever been from other human beings.” Along the “pristine void” of a remote river in the Yukon, I climbed ridges and scanned the horizon: “It was intoxicating,” I wrote, “to pick a point in the distance and wonder: Has any human ever stood there?”

Rereading those and other articles, I now began to reluctantly consider the possibility that my infatuation with the wilderness was, at its core, a poorly cloaked exercise in colonial nostalgia—the urbane Northern equivalent of dressing up as Stonewall Jackson at Civil War reenactments because of an ostensible interest in antique rifles.

My first wilderness trip was a week-long canoe trip, at age fifteen, with two friends in Algonquin Park, a 3,000-square-mile canoeing paradise four hours north of my home in Toronto. I’d done some car camping and learned basic canoe strokes at summer camp, but I had no idea what an actual backcountry trip entailed. None of us did—we didn’t know about such technical innovations as sleeping pads and camp stoves, or even about basic skills like how to cook. Our packs were so heavy with sacks of onions, potatoes, carrots, and other bad choices that even the outfitter who rented us a canoe could barely lift them.

On the very first portage, I was lagging behind when a grouse charged at me. I stood up straight in surprise, and immediately toppled over backward, pinned like an overturned turtle. Since I couldn’t lift the pack on my own, I had to hike to the end of the portage to get my friends to return and hoist the pack onto my back again. The same thing happened on the next portage, except this time I was carrying the canoe, and it was a moose that spooked me. That evening, we discovered that dumping chunks of potato, onion, and carrot into water boiling on the fire, then adding rice, doesn’t produce a very good stew, even after waiting ten whole minutes. We stayed up half the night trying to burn surplus root vegetables to lighten our packs, with very limited success.

The eventual upshot of this bumbling display of ineptitude, strangely enough, was an incredible feeling of accomplishment. As a coddled and privileged kid whose path through life was meticulously well-signposted and more or less strewn with rose petals, I reveled in the opportunity to tackle problems on my own, with a legitimate chance of doing something wrong, in a context where errors could have serious consequences. In the years that followed, through trial and copious error, I got more competent at taking care of myself in the bush—and in consequence, started seeking out ever more remote settings so that the challenges, problems, and potential consequences remained real.

No one, in the course of my admirably progressive education, ever tried to tell me that Christopher Columbus was any kind of hero. Still, as I hiked across mountain passes or paddled down lonely rivers, I’d often daydream about what it would have been like to travel those routes for the first time—or at least, without the benefit of knowledge passed on from previous travelers or inhabitants. I read a lot about early explorers of what is now Canada—Étienne Brûlé, Alexander Mackenzie, John Franklin—but until I started worrying about the Shoalts critiques, I’d never actually read their journals.

Mackenzie, in particular, has a hallowed place in Canadian history, having completed the first overland journey across North America—a dozen years before Lewis and Clark. (I’m just saying.) His journals turned out to be as gripping as I’d expected. But reading with new eyes, I couldn’t help noticing how little of his travel involved actually venturing into the unknown. Instead, his progress amounted to a relay from tribe to tribe, shanghaiing locals into guiding his crew through each leg of the journey. “Thunder and rain prevailed during the night,” he writes at one point, “and, in the course of it, our guide deserted; we therefore compelled another of these people, very much against his will, to supply the place of his fugitive countryman.” This pattern recurs over and over.

That’s not to say Mackenzie’s voyages were easy. In fact, for both explorers and the settlers who followed, the fact that the so-called wilderness was already inhabited did little to tame their perceptions of the land. Brûlé, according to some versions of the story, was killed and eaten by Hurons who thought he had betrayed them—a legend that may owe as much to colonial mythologizing as to historical fact. “Regardless of what we might think about it today,” Roderick Nash wrote, “Indians made the New World a greater, not a lesser, wilderness for the pioneer pastoralists.” In my own intrepid-explorer fantasies, indigenous people mostly didn’t appear at all. On further reflection, I’m not sure which is worse: dismissing people as savages, or ignoring their existence altogether.

Yet I persist in believing there’s something special about paths untrodden. To read accounts of Antarctic exploration, or even early space travel, is to find some of the same fire that animates Shoalts. In these cases, there are no native life forms (that we know of) being brushed aside. Granted, there are heavy doses of nationalism, mercantilism, and ego at work, but there’s also something else there that’s harder to articulate—something that’s not specific to any particular landscape or historiography. The closest I’ve come to putting my finger on it was in a conversation I had a few years ago with a University of Utah professor named Daniel Dustin.

Back in 1981, Dustin and a colleague wrote an article in the Journal of Forestry called “The Right to Risk in Wilderness,” in which they proposed the creation of “no-rescue” wilderness zones in places like Gates of the Arctic National Park in Alaska. The idea has never been adopted, but over the years it has proven to be a provocative spur to discussions about what it is people are seeking in the wilderness. “Publicly you could never say this is a good idea, because it sounds so heartless and cruel,” he admitted to me. “But my point is that in our culture, we glorify a few experts who do this sort of thing, and we’ll put ’em on television, maybe create a reality TV show or whatever, but if every man wants to do it, stretch herself or himself, we somehow suggest that that’s just ridiculous.” Even three decades after the original article, Dustin didn’t have a simple answer for what you’d get from a trip to a no-rescue zone. But the ideas he spitballed—the opportunity to face the unknown, to take personal responsibility, to become self-reliant—somehow reminded me of that first day of my first canoe trip, backing away from a moose and eating undercooked rice.

by Alex Hutchinson, NYRB | Read more:
Image: VW Pics/Universal Images Group via Getty Images

Every Bon Appétit: Gourmet Makes Video, Ranked

Few could have predicted the massive cultural impact of Claire Saffitz, then–Senior Food Editor of Bon Appétit, creating an upgraded version of a Hostess Twinkie. It’s been over two years since the appropriately titled “Pastry Chef Attempts to Make a Gourmet Twinkie” was uploaded, and 6.3 million views later, what began as an 11-minute video has become the Bon Appétit YouTube channel’s signature series, earning an impressive legion of devoted fans and turning Claire and her co-workers into internet stars. Gourmet Makes has tackled sweet, savory, and everything in-between, putting Claire’s culinary expertise (and crafting skills) to the test by asking her to recreate beloved junk food with a gourmet twist.

The results of Claire’s efforts are variable, but over the course of its 28 and counting episodes, Gourmet Makes has become as much about the Bon Appétit test-kitchen personalities as it is about perfecting the texture of Twizzlers or Doritos’ nacho-cheese flavor. While each installment still ends with a how-to guide, at this point Gourmet Makes is less instructional video and more legitimate web series, with all the drama, surprises, and rich character arcs of prestige television. With that in mind, the show feels long overdue for a ranking: not of how close Claire’s food comes to the original, but as episodes of an ensemble series starring Claire, her fellow chefs, and a surprisingly useful dehydrator.

Twinkies


It’s hard to judge “Twinkies” objectively — this is the episode that started it all, the first exposure that many of us had to Gourmet Makes and to Claire. When compared to the complexity and nuance of later episodes, it’s admittedly lacking. And yet, beyond its nostalgic appeal, “Twinkies” is an essential foundational text, laying the groundwork for everything that follows. We have Claire discovering that the endeavor is more challenging than anticipated: “This is harder than I thought it was gonna be,” she says for the first and not the last time. Her relationship with Brad, who alternates between compassionate ally and merciless bully, is already coming into focus. Here, he ends up being helpful, suggesting Claire combine a yellow cake and a chiffon cake to get that unique Twinkie texture. “Twinkies are a Frankenstein, in my opinion,” he offers. The result is not so much a Twinkie as the platonic ideal of a Twinkie, which is pretty much Gourmet Makes’ mission statement. And unlike the vast majority of Claire’s future efforts, this Twinkie is something viewers could actually make at home — no background in food science required.

by Louis Peitzman, Vulture | Read more:
Image: YouTube
[ed. End times.]

Samsara

The man standing outside my front door was carrying a clipboard and wearing a golden robe. “Not interested,” I said, preparing to slam the door in his face.

“Please,” said the acolyte. Before I could say no he’d jammed a wad of $100 bills into my hand. “If this will buy a few moments of your time.”

It did, if only because I stood too flabbergasted to move. Surely they didn’t have enough money to do this for everybody.

“There is no everybody,” said the acolyte, when I expressed my bewilderment. “You’re the last one. The last unenlightened person in the world.”

And it sort of made sense. Twenty years ago, a group of San Francisco hippie/yuppie/techie seekers had pared down the ancient techniques to their bare essentials, then optimized hard. A combination of drugs, meditation, and ecstatic dance that could catapult you to enlightenment in the space of a weekend retreat, 100% success rate. Their cult/movement/startup, the Order Of The Golden Lotus, spread like wildfire through California – a state where wildfires spread even faster than usual – and then on to the rest of the world. Soon investment bankers and soccer moms were showing up to book clubs talking about how they had grasped the peace beyond understanding and vanquished their ego-self.

I’d kind of ignored it. Actually, super ignored it. First a flat refusal to attend Golden Lotus retreats. Then slamming the door in their face whenever their golden-robed pamphleteers came to call. Then quitting my job to live off savings after my coworkers started converting and the team-building exercises turned into meditation sessions. Then unplugging my cable box after the sitcoms started incorporating Golden Lotus themes and the national news started being about how peaceful everybody was all the time. After that I might have kind of become a complete recluse, never leaving the house, ordering meals through UberEats, cut off from noticing any of the changes happening outside except through the gradual disappearance of nonvegetarian restaurants on the app.

I’m not a bigot; people can have whatever religion they choose. But Golden Lotus wasn’t for me. I don’t want to be enlightened. I like being an individual with an ego. Ayn Rand loses me when she starts talking politics, but the stuff about selfishness really speaks to me. Tend to your own garden, that kind of thing. I’m not becoming part of some universal-love-transcendent-joy hive mind, and I’m not interested in what Golden Lotus is selling.

So I just said: “Cool. Do I get a medal?”

“This is actually very serious,” said the acolyte. “Do you know about the Bodhisattva’s Vow?”

“The what now?”

“It’s from ancient China. You say it before embarking on the path of enlightenment. ‘However innumerable sentient beings are, I vow to save them all.’ The idea is that we’re all in this together. We swear that we will not fully forsake this world of suffering and partake of the ultimate mahaparanirvana – complete cosmic bliss – until everyone is as enlightened as we are.”

“Cool story.”

“That means 7.5 billion people are waiting on you.”

“What?”

“We all swore not to sit back and enjoy enlightenment until everyone was enlightened. Now everyone is enlightened except you. You’re the only thing holding us all back from ultimate cosmic bliss.”

“Man. I’m sorry.”

“You are forgiven. We would like to offer you a free three-day course with the Head Lama of Golden Lotus to correct the situation. We’ll pick you up at your home and fly you to the Big Island of Hawaii, where the Head Lama will personally…”

“…yeah, no thanks.”

“What?”

“No thanks.”

“But you have to! Nobody else can reach mahaparanirvana until you get enlightened!”

“Sure they can. Tell them I’m okay, they can head off to mahabharata without me, no need to wait up.”

“They can’t. They swore not to.”

“Well, they shouldn’t have done that.”

“It’s done! It’s irreversible! The vow has been sworn! Each of the seven point five billion acolytes of Golden Lotus has sworn it!”

“Break it.”

“We are enlightened beings! We can’t break our solemn vow!”

“Then I guess you’re going to learn an important lesson about swearing unbreakable vows you don’t want to keep.”

“Sir, this entire planet is heavy with suffering. It groans under its weight. Seven billion people, the entirety of the human race, and for the first time they have the chance to escape together! I understand you’re afraid of enlightenment, I understand that this isn’t what you would have chosen, but for the sake of the world, please, accept what must be!”

“I’m sorry,” I said. “I really am. But the fault here is totally yours. You guys swore an oath conditional on my behavior, but that doesn’t mean I have to change my behavior to prevent your oath from having bad consequences. Imagine if I let that work! You could all swear to kill yourself unless I donated money, and I’d have to donate or have billions of deaths on my hands. That kind of reasoning, you’ve got to nip it in the bud. I’m sorry about your oath and I’m sorry you’re never going to get to Paramaribo but I don’t want to be enlightened and you can’t make me.”

I slammed the door in his face.

by Scott Alexander, Slate Star Codex |  Read more:

Tuesday, November 5, 2019

Rosalyn Drexler, Hold Your Fire (Men and Machines), 1966
via: https://rosalyndrexler.org/selected-paintings/

Eduardo Paolozzi, Wittgenstein in New York (from the series As is When), 1965 (detail)
via: http://www.whitechapelgallery.org/exhibitions/eduardo-paolozzi/

Manufacturing Fear and Loathing, Maximizing Corporate Profits! Why Today’s Media Makes Us Despise One Another

Matt Taibbi’s Hate Inc. is the most insightful and revelatory book about American politics to appear since the publication of Thomas Frank’s Listen, Liberal almost four full years ago, near the beginning of the last presidential election cycle.

While Frank’s topic was the abysmal failure of the Democratic Party to be democratic and Taibbi’s is the abysmal failure of our mainstream news corporations to report news, the prominent villains in both books are drawn from the same, or at least overlapping, elite social circles: from, that is, our virulently anti-populist liberal class, from our intellectually mediocre creative class, from our bubble-dwelling thinking class. In fact, I would strongly recommend that the reader spend some time with Frank’s What’s the Matter with Kansas? (2004) and Listen, Liberal (2016) as he or she takes up Taibbi’s book. And to really do the book the justice it deserves, I would even more vehemently recommend that the reader immerse him- or herself in Taibbi’s favorite book and vade-mecum, Manufacturing Consent (which I found to be a grueling experience: a relentless cataloging of the official lies that hide the brutality of American foreign policy) and, in order to properly appreciate the brilliance of Taibbi’s chapter 7, “How the Media Stole from Pro Wrestling,” visit some locale in Flyover Country and see some pro wrestling in person (which I found to be unexpectedly uplifting — more on this soon enough).

Taibbi tells us that he had originally intended for Hate Inc. to be an updating of Edward Herman and Noam Chomsky’s Manufacturing Consent (1988), which he first read thirty years ago, when he was nineteen. “It blew my mind,” Taibbi writes. “[It] taught me that some level of deception was baked into almost everything I’d ever been taught about modern American life…. Once the authors in the first chapter laid out their famed propaganda model [italics mine], they cut through the deceptions of the American state like a buzz saw” (p. 10). For what seemed to be vigorous democratic debate, Taibbi realized, was instead a soul-crushing simulation of debate. The choices voters were given were distinctions without valid differences, and just as hyped, just as trivial, as the choices between a Whopper and a Big Mac, between Froot Loops and Frosted Mini-Wheats, between Diet Coke and Diet Pepsi, between Marlboro Lites and Camel Filters. It was all profit-making poisonous junk.

“Manufacturing Consent,” Taibbi writes, “explains that the debate you’re watching is choreographed. The range of argument has been artificially narrowed long before you get to hear it” (p. 11). And there’s an indisputable logic at work here, because the reality of hideous American war crimes is and always has been, from the point of view of the big media corporations, a “narrative-ruining” buzz-kill. “The uglier truth [brought to light in Manufacturing Consent], that we committed genocide of a fairly massive scale across Indochina — ultimately killing at least a million innocent civilians by air in three countries — is pre-excluded from the history of the period” (p. 13).

So what has changed in the last thirty years? A lot! As a starting point let’s consider the very useful metaphor found in the title of another great media book of 1988: Mark Crispin Miller’s Boxed In: The Culture of TV. To say that Americans were held captive by the boob tube affords us not only a useful historical image but also suggests the possibility of their having been able to view the television as an antagonist, and therefore of their having been able, at least some of them, to rebel against its dictates. Three decades later, on the other hand, the television has been replaced by iPhones and portable tablets, the workings of which are so precisely intertwined with even the most intimate minute-to-minute aspects of our lives that our relationship to them could hardly ever become antagonistic.

Taibbi summarizes the history of these three decades in terms of three “massive revolutions” in the media plus one actual massive political revolution, all of which, we should note, he discussed with his hero Chomsky (who is now ninety! — Edward Herman passed away in 2017) even as he wrote his book. And so: the media revolutions which Taibbi describes were, first, the coming of FoxNews along with Rush Limbaugh-style talk radio; second, the coming of CNN, i.e., the Cable News Network, along with twenty-four hour infinite-loop news cycles; third, the coming of the Internet along with the mighty social media giants Facebook and Twitter. The massive political revolution was, going all the way back to 1989, the collapse of the Berlin Wall, and then of the Soviet Union itself — and thus of the usefulness of anti-communism as a kind of coercive secular religion (pp. 14-15).

For all that, however, the most salient difference between the news media of 1989 and the news media of 2019 is the disappearance of the single type of calm and decorous and slightly boring cis-het white anchorman (who somehow successfully appealed to a nationwide audience) and his replacement by a seemingly wide variety of demographically-engineered news personæ who all rage and scream combatively in each other’s direction. “In the old days,” Taibbi writes, “the news was a mix of this toothless trivia and cheery dispatches from the frontlines of Pax Americana…. The news [was] once designed to be consumed by the whole house…. But once we started to be organized into demographic silos [italics mine], the networks found another way to seduce these audiences: they sold intramural conflict” (p. 18).

And in this new media environment of constant conflict, how, Taibbi wondered, could public consent, which would seem to be at the opposite end of the spectrum from conflict, still be manufactured? “That wasn’t easy for me to see in my first decades in the business,” Taibbi writes. “For a long time, I thought it was a flaw in the Chomsky/Herman model” (p. 19).

But what Taibbi was at length able to understand, and what he is now able to describe for us with both wit and controlled outrage, is that our corporate media have devised — at least for the time being — highly-profitable marketing processes that manufacture fake dissent in order to smother real dissent (p. 21). And the smothering of real dissent is close enough to public consent to get the goddam job done: The Herman/Chomsky model is, after all these years, still valid.

Or pretty much so. Taibbi is more historically precise. Because of the tweaking of the Herman/Chomsky propaganda model necessitated by the disappearance of the USSR in 1991 (“The Russians escaped while we weren’t watching them, / As Russians do…,” Jackson Browne presciently prophesied on MTV way back in 1983), one might now want to speak of a Propaganda Model 2.0. For, as Taibbi notes, “…the biggest change to Chomsky’s model is the discovery of a far superior ‘common enemy’ in modern media: each other. So long as we remain a bitterly-divided two-party state, we’ll never want for TV villains” (pp. 207-208).

To rub his great insight right into our uncomprehending faces, Taibbi has almost sadistically chosen to have dark, shadowy images of a yelling Sean Hannity (in lurid FoxNews Red!) and a screaming Rachel Maddow (in glaring MSNBC Blue!) juxtaposed on the cover of his book. For Maddow, he notes, is “a depressingly exact mirror of Hannity…. The two characters do exactly the same work. They make their money using exactly the same commercial formula. And though they emphasize different political ideas, the effect they have on audiences is much the same” (pp. 259-260).

And that effect is hate. Impotent hate. For while Rachel’s fan demographic is all wrapped up in hating Far-Right Fascists Like Sean, and while Sean’s is all wrapped up in despising Libtard Lunatics Like Rachel, the bipartisan consensus in Washington for ever-increasing military budgets, for everlasting wars, for ever-expanding surveillance, for ever-growing bailouts of, tax breaks for, and handouts to the most powerful corporations goes forever unchallenged.

Oh my. And it only gets worse and worse, because the media, in order to make sure that their various siloed demographics stay superglued to their Internet devices, must keep ratcheting up levels of hate: the Fascists Like Sean and the Libtards Like Rachel must be continually presented as more and more deranged, and ultimately as demonic. “There is us and them,” Taibbi writes, “and they are Hitler” (p. 64). A vile reductio ad absurdum has come into play: “If all Trump supporters are Hitler, and all liberals are also Hitler,” Taibbi writes, “…[t]he America vs. America show is now Hitler vs. Hitler! Think of the ratings!…” The reader begins to grasp Taibbi’s argument that our mainstream corporate media are as bad as — are worse than — pro wrestling. It’s an ineluctable downward spiral.

Taibbi continues: “The problem is, there’s no natural floor to this behavior. Just as cable TV will eventually become seven hundred separate twenty-four-hour porn channels, news and commentary will eventually escalate to boxing-style, expletive-laden, pre-fight tirades, and the open incitement to violence [italics mine]. If the other side is literally Hitler, … [w]hat began as America vs. America will eventually move to Traitor vs. Traitor, and the show does not work if those contestants are not eventually offended to the point of wanting to kill one another” (pp. 65-69). (...)

On the same day I read this chapter I saw that, on the bulletin board in my gym, a poster had appeared, as if by magic, promoting an upcoming Primal Conflict (!) professional wrestling event. I studied the photos of the wrestlers on the poster carefully, and, as an astute reader of Taibbi, I prided myself on being able to identify which of them seemed to be playing the roles of heels, and which of them the roles of babyfaces.

For Taibbi explains that one of the fundamental dynamics of wrestling involves the invention of crowd-pleasing narratives out of the many permutations and combinations of pitting heels against faces. Donald Trump, a natural heel, brings the goofy dynamics of pro wrestling to American politics with real-life professional expertise. (Taibbi points out that in 2007 Trump actually performed before a huge cheering crowd in a Wrestlemania event billed as the “battle of the billionaires.” Watch it on YouTube! https://youtu.be/5NsrwH9I9vE — unbelievable!!)

The mainstream corporate media, on the other hand, their eyes fixed on ever bigger and bigger profits, have drifted into the metaphorical pro wrestling ring in ignorance, and so, when they face off against Trump, they often end up in the role of inept prudish pearl-clutching faces.

Taibbi condemns the mainstream media’s failure to understand such a massively popular form of American entertainment as “malpractice” (p. 125), so I felt more than obligated to buy a ticket and see the advertised event in person. To properly educate myself, that is.

On the poster in my gym I had paid particular attention to the photo of a character named Logan Easton Laroux, who was wearing a sweater tied around his neck and was extending an index finger upwards as if he were summoning a waiter. Ha! I thought. This Laroux chap must be playing the role of an arrogant preppy heel. The crowd will delight in his humiliation! I imagined the vile homophobic and even Francophobic abuse to which he would likely be subjected.

On the night of the Primal Conflict event, I intentionally showed up a little bit late, because, to be honest, I was fearing a rough crowd. Pro wrestling in West Virginia, don’t you know. But I was politely greeted and presented with the ticket I had PayPal-ed. I looked over to the ring, and, sure enough, there was Logan Easton Laroux being body-slammed to the mat. Ha! Just the ritual humiliation I anticipated! But I had most certainly not anticipated the sudden display of Primal Conflict wit that ensued. Our plucky Laroux dramatically recovered from his fall and adroitly pinned his opponent as the crowd happily cheered for him, cheered in unison, cheered an apparently rehearsed chant again and again: ONE PER CENT! ONE PER CENT!

So no homophobic obscenities?? Au contraire! Here was a twist in the narrative far more nuanced than anything you might read in the New York Times!

Soon enough I realized that this was wholesome family entertainment. The most enthusiastic fans seemed to be the eight- and nine-year-old boys. (A couple of the boys were proudly wearing their Halloween costumes.) There was no smoking, no drinking, no foul language, no sexual innuendo of any sort, and, above all, no racial insults — just the opposite: For both the wrestlers and the spectators were a mix of white and black, and the most popular wrestler was a big black guy in an afro wig who “lost” his bout to a white guy who played a cheating sleazebag heel named Quinn. Also, significantly, there was zero police presence, and zero chance of any kind of actual altercation. When the night was over the promoter stood at the exit and shook the hand of and said good-bye and come-back to each of us departing spectators — sort of like, well, a pastor after church in a small southern town as his congregation disperses.

So here I was in the very midst of — to use Hillary Clinton’s contemptuous terminology — the deplorables. But they weren’t the racist misogynistic homophobes Clinton had condemned. The vibe was that everyone liked all the wrestlers, even the ones they had booed, and that everyone pretty much liked each other. During intermission the promoter called out a birthday greeting to a spectator named John. A middle-aged black guy stood up to a round of applause. He was with his wife and kids.

Where was the hate?

by Yves Smith, Naked Capitalism |  Read more:
Image: OR Books
[ed. See also: Is Politics a War of Ideas or of Us Against Them? (NY Times)]

US National Debt Passed $23 Trillion, Jumped $1.3 Trillion in 12 Months

And these are the good times. What happens in a recession?

The US gross national debt – the sum of all Treasury securities outstanding – passed another illustrious milestone, $23.01 trillion, the US Treasury department disclosed on Friday. And it got there at lightning speed just eight months after having passed the illustrious milestone of $22 trillion on February 11. Over the past 12 months, the US national debt has jumped by $1.33 trillion – and these are the good times, and not a financial crisis when everything goes to heck:


The cute flat spots in the charts are periods when the US government bumped into the “debt ceiling.” The US is unique among countries in that Congress first tells the government how much to spend, what to spend it on, and in whose Congressional district to spend it, and then on the appointed day, Congress tells the government that it cannot borrow the money that it needs in order to spend the money that Congress told it to spend. The charade, carried out regularly for political purposes to arm-twist one or the other side, leaves these flat spots behind as permanent testimony to this idiocy.

Over the 12 months through the third quarter, the US gross national debt rose by 5.6% from the same period a year earlier. But nominal GDP over the same period rose only 3.7%: the debt is growing faster than even what President Trump called, two days ago, the “Greatest Economy in American History!”

If the growth of the federal debt outruns the economy during these fabulously good times, what will the debt do when the recession hits? When government tax receipts plunge and government expenditures for unemployment and the like soar? The federal debt will jump by $2.5 trillion or more in a 12-month period. That’s what it will do.

The growth in the US debt (the growth of Treasury securities outstanding) is the most accurate measure of the true deficit – the actual cash difference between how much cash the government takes in and how much cash the government spends. The government has to borrow the difference between the two – and that’s what the increase in the debt measures.

This increase in the debt shows the negative cash flow of the government. And it’s almost always significantly larger than the “budget deficit,” which is based on government accounting.

For example, in the fiscal year 2019, ended September 30, the “budget deficit” was $984 billion, according to the Treasury Department. This is a huge number, considering that these are the good times. But the government had to borrow an additional $1.2 trillion over the same period. In other words, the actual cash deficit, as represented by the increase in the debt, was $219 billion higher than the government accounting of the deficit.
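The arithmetic here can be checked directly. A minimal sketch using the article’s figures (the $1.2 trillion is rounded in the text; the unrounded figure of roughly $1,203 billion is inferred from the stated $219 billion gap):

```python
# FY2019 figures from the article, in billions of dollars
budget_deficit = 984    # official "budget deficit" per the Treasury
debt_increase = 1_203   # increase in Treasury securities outstanding
                        # (the article rounds this to "$1.2 trillion")

# The gap is cash the government borrowed beyond the reported deficit
gap = debt_increase - budget_deficit
print(f"Actual cash deficit exceeded the budget deficit by ${gap} billion")
# -> Actual cash deficit exceeded the budget deficit by $219 billion
```

The same subtraction, done year by year, is what produces the blue-column-versus-deficit chart described below.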

And this is the case year after year. The chart below shows the increase in the debt for each fiscal year (blue column) and the “deficit” as per government accounting, going back to 2002. Over these 18 years, there were only two years when the deficit was either the same or larger than the increase in the debt. For the remaining 16 years, the increase in the debt was far larger than the deficit. In total, over those 18 years, all added together, the increase in debt has exceeded the “budget deficit” by $5 trillion:


The budget deficit – the much more benign figure, huge as it is – is what is being bandied about. Practically no one in government or the mainstream media bandies about the increase in the debt, though it is the more truthful figure that cannot be played with.

by Wolf Richter, Wolfstreet |  Read more:
Images: US Treasury Dept. and Wolfstreet
[ed. See also: What Will Stocks Do When “Consensual Hallucination” Ends? and, for head-shaking, throw up your hands incredulity, Uber Loses Another $1.2 billion, Stock Dives Again (Wolfstreet).]

Björn Keller
via:

Roger Miller

Geisha Selfies Banned in Kyoto

Authorities in Kyoto have banned photography in parts of the city’s main geisha neighbourhood, amid a flurry of complaints about harassment and bad behaviour by foreign tourists in the quest for the perfect selfie.

The ban, introduced recently on private roads in the city’s Gion district, includes a fine of up to 10,000 yen (£70), as Kyoto and other sightseeing spots in Japan grapple with the downside of a boom in visitors that is expected to last long after next summer’s Tokyo Olympics.

Two geishas.

“Tourism pollution” is a growing problem in Kyoto, where tourists flock to ancient shrines and temples and, in Gion, catch sight of the female entertainers – known locally as geiko – and maiko apprentices dressed in elaborate kimono on their way to evening appointments.

In response to complaints by residents and businesses, the local ward has put up signs near narrow streets leading off Hanamikoji, a public main road, warning visitors not to take snapshots.

The neighbourhood is home to exclusive restaurants where geiko and maiko entertain customers on tatami-mat floors and over multiple course kaiseki dinners. (...)

Existing signs reminding visitors about etiquette appear to have had little effect on tourist behaviour. Residents say the explosion in the number of visitors to Kyoto has led to overcrowded buses, fully booked restaurants and a general din that spoils the city’s miyabi – the refined atmosphere that draws people to the city in the first place.

by Justin McCurry, The Guardian | Read more:
Image: xavierarnau/Getty Images
[ed. Industrial tourism - killing cultures and the environment daily! Here's the kicker: A record 31 million people visited Japan last year – up almost 9% from the previous year – helped by a weaker yen, an easing of visa requirements and the increasing availability of cheap flights. The government has set a target of 40 million overseas visitors by next year, rising to 60 million by 2030. See also: It's Time to Take Down the Mona Lisa (Guardian)]

Purged

How a failed economic theory still rules the digital music marketplace

Unless you spent a lot of time listening to early ’00s techno-utopian babble, the Theory of the Long Tail probably means nothing to you. Yet if you live in the US or Europe and you run a digital music label, you’re living it – or the fallout from it – almost every day.

In 2004, Wired magazine editor Chris Anderson proposed The Long Tail, an economic theory blown up by futurist steroids. It theorized that with the introduction of the internet, blockbusters would matter less and everyone would sell “less of more.” The Long Tail prophesied “How Endless Choice Is Creating Unlimited Demand,” according to the subtitle of Anderson’s later book, which if true would turn the field of economics on its head.

For a practical example of what this all means, compare a brick-and-mortar record store like the old Tower Records vs. an online retailer like Traxsource. Your local Tower Records had to limit its inventory to take finite shelf space into account. Their stock might have consisted of a couple hundred records. And each record didn’t get equal shelf space: your hippie boomer parents were going to buy more copies of Beatles records than all your Belgian techno records, so the store would stock and give more attention to the former. This “artificial” scarcity, in which physical products take up finite shelf space and deny it to other products, had bent consumer behavior out of shape for basically all of history.

With the internet and the creation of intangible digital products, this was supposed to change. Traxsource and other digital retailers are limited not by shelf space but by the size of their server hard drive array. And buying more server space is cheaper than building a new store.

According to Anderson, sales would in the future represent a classic “Pareto” or “power law” demand curve: 20% of sales would be by “star” artists selling millions of copies each in our record store analogy, while 80% would consist of many thousands, tens of thousands or even millions of artists selling relatively few copies of each of their albums as the store’s near-infinite inventory meant people could metaphorically “wander about” and choose from millions of options.

This was the “Long Tail” in a nutshell, represented on a chart stretching to the right into infinity: in the future, music retailers would sell “less” copies from “more” artists. Many more.
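To make that curve concrete, here is a hypothetical sketch (not Anderson’s model and not real sales data): it assigns each track in an assumed catalog sales proportional to 1/rank, the simplest power law, and then measures what share of total sales the head captures.

```python
# Hypothetical Zipf-style catalog: the track at rank r sells ~1/r copies
N = 100_000                                      # assumed catalog size
sales = [1.0 / rank for rank in range(1, N + 1)]

total = sum(sales)
head_share = sum(sales[: N // 100]) / total      # best-selling 1% of tracks
print(f"Top 1% of tracks capture {head_share:.0%} of sales")
```

Even this gentle power law puts roughly 60% of sales into the top 1% of tracks; a steeper exponent concentrates the head further, which is the direction the real-world data turned out to point.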

And then this elegant economic theory ran headlong into the tsunami of shitmusic.

The Marvel-ization of the Music Industry

Nothing turned out the way Anderson predicted.

As early as 2008 – five years after iTunes was founded and we began to get actual data of how this whole thing was working – keen observers began chopping the Long Tail down to size. Economist Will Page, working with Andrew Bud and Gary Eggleton, was able to obtain somewhat anonymized transactions from a “large digital music provider” rumored to be either Rhapsody or iTunes itself. They had so much data, in fact, that an ordinary Excel spreadsheet choked on it.

It was a gigantic sample of… nothing.

80% of the songs had no transaction data: they had sold no copies at all.

There wasn’t any volume in the “Long Tail” and nothing had really changed – except for the worse. The actual sales data showed an even greater concentration of sales in the “Fat Head.” Page later spoke about their findings:

“We found that only 20% of tracks in our sample were ‘active,’ that is to say they sold at least one copy, and hence, 80% of the tracks sold nothing at all. Moreover, approximately 80% of sales revenue came from around 3% of the active tracks. Factor in the dormant tail and you’re looking at an ’80/0.38% rule’ for all the inventory on the digital shelf.

“Finally, only 40 tracks sold more than 100,000 copies, accounting for 8% of the business. Think about that – back in the physical world, forty tracks could be just 4 albums, or the top slice of the best-selling ‘Now That’s What I Call Music, Volume 70’ which bundles up 43 ‘hits’ into one perennially popular customer offering!”
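Chaining Page’s rounded percentages together gives a feel for the concentration. (Multiplying the rounded figures lands near, but not exactly on, his “0.38%”, which was computed from the underlying track counts rather than these rounded shares.)

```python
# Rounded shares quoted from Page's findings
active_share = 0.20    # tracks that sold at least one copy
top_of_active = 0.03   # share of *active* tracks producing ~80% of revenue

# Share of the entire digital shelf behind ~80% of revenue
share_of_catalog = active_share * top_of_active
print(f"~80% of revenue from ~{share_of_catalog:.1%} of all tracks")
# -> ~80% of revenue from ~0.6% of all tracks
```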


When the new owners of Rolling Stone recently announced they would challenge Billboard’s dominance of the pop charts, what was left unsaid is how pointless a “top 100” of ANYTHING has become. As far as big-time music industry relevance, a “top 100” could probably be cut down to a “top 8” or “top 11.” Sales are so heavily concentrated at the top that you’d expect artists to start their own campaign for industry income equality. (Which, in a way, we are.)

Paradoxically, though economists are now skeptical of the Theory of the Long Tail, people – including artists and management – still base their careers on it. It’s one of the guiding, unquestioned principles of doing business in the digital world. Axioms such as “getting on all platforms” and “going where the people are listening” are music industry fortune cookies, urging everyone to fall into place in an economic system that works for almost no one. Apple and Spotify boast of their huge inventories of tracks – millions upon millions that no human could listen to – but the lion’s share of listens and revenue still go to the head.

by Terry Matthew, 5Mag.net |  Read more:
Image: uncredited

Monday, November 4, 2019

Falter: The Human Game

Is the human race approaching its demise? The question itself may sound hyperbolic — or like a throwback to the rapture and apocalypse. Yet there is reason to believe that such fears are no longer so overblown. The threat of climate change is forcing millions around the world to realistically confront a future in which their lives, at a minimum, look radically worse than they are today. At the same time, emerging technologies of genetic engineering and artificial intelligence are giving a small, technocratic elite the power to radically alter homo sapiens to the point where the species no longer resembles itself. Whether through ecological collapse or technological change, human beings are fast approaching a dangerous precipice.

Emissions rise from the Northern Indiana Public Service Co. (NIPSCO) Bailly generating station on the shore of Lake Michigan at dusk in Chesterton, Indiana, U.S., on Wednesday, Oct. 7, 2015. For the second month in a row, natural gas beat coal as the main source of U.S. electricity generation, accounting for 35.2 percent of supplies in August, a government report showed. Photographer: Luke Sharrett/Bloomberg via Getty Images

The threats that we face today are not exaggerated. They are real, visible, and potentially imminent. They are also the subject of a recent book by Bill McKibben, entitled “Falter: Has the Human Game Begun to Play Itself Out?” McKibben is an environmentalist and author, as well as the founder of 350.org, a campaign group working to reduce carbon emissions. His book provides a sober, empirical analysis of the reasons why the human race may be reaching its final stages.

Can you explain what you mean by the “human game”?

I was looking for a phrase to describe the totality of everything that we do as human beings. You could also term it as human civilization, or the human project. But “game” seems like a more appropriate term. Not because it’s trivial, but because, like any other game, it doesn’t really have a goal outside of itself. The only goal is to continue to play, and hopefully play well. Playing the human game well might be described as living with dignity and ensuring that others can live with dignity as well.

There are very serious threats now facing the human game. Basic questions of human survival and identity are being realistically called into question. It’s become clear that climate change is dramatically shrinking the size of the board on which the game is played. At the same time, some emerging technologies threaten the idea that human beings as a species will even be around to play in the future.

Could you briefly run down the implications of climate change for the future of human civilization, as we presently understand it?

Climate change is by far the biggest thing that humans have ever managed to do on this planet. It has altered the chemistry of the atmosphere in fundamental ways, raised the temperature of the planet over 1 degree Celsius, melted half the summer ice in the Arctic, and made the oceans 30 percent more acidic. We are seeing uncontrollable forest fires around the world, along with record levels of drought and flooding. In some places, average daily temperatures are already becoming too hot for human beings to even work during the daylight.

People are making plans to leave major cities and low-lying coastal areas, where their ancestors have lived for thousands of years. Even in rich countries like the United States, critical infrastructure is being strained. We saw this recently with the shutdown of electrical power in much of California due to wildfire risk. This is what we’ve done at merely 1 degree Celsius of warming above pre-industrial levels. It is already becoming difficult to live in large parts of the planet. On our current trajectory, we are headed for 3 or 4 degrees of warming. At that level, we simply won’t have a civilization like we do now.

Since the major culprit in climate change remains the fossil fuel industry, what practical steps can be taken to get their activities under control? And given that they also share a planet with everyone else, what exactly is their plan for a future of climate dystopia?

We have already made efforts at divestment and halting the construction of pipelines, but the next crucial area is finance: focusing on the banks and asset managers that give them the money to do what they do. (...)

The other major threat that you identify is posed by technologies like genetic engineering. Can you explain the threat that they pose to human identity and purpose?

Just as we had long taken for granted the stability of the planet, we have likewise taken for granted the stability of the human species. There are technologies now emerging that call into question very fundamental assumptions about what it means to be a human being. Take, for example, genetic engineering technologies like CRISPR. These are now coming into effect, as we saw recently in China, where a pair of twins were reportedly born after having their genes modified in embryo. I don’t see any problem with using gene editing to help existing people with existing diseases. That is very different, however, from genetically engineering embryos with specialized modifications.

Let’s say for example that an expectant couple decides to engineer their new child to have a certain hormonal balance aimed at improving their mood. That child may reach adolescence one day and find themselves feeling very happy without any particular explanation why. Are they falling in love? Or is it just their genetic engineering specs kicking in? Human beings could soon be designed with a whole range of new specs that modifies their thoughts, feelings, and abilities. I think that such a prospect — not far-fetched at all today — will be a devastating attack on the most vital things about being human. It will call into question basic ideas of who we are and how we think about ourselves.

There is also the implication of accelerating technological change in genetic engineering technology. After modifying their first child, those same parents may come back five years later to the clinic to make changes to their second child. In the meantime, the technology has marched on, and you can now get a whole new series of upgrades and tweaks. What does that mean for the first child? It makes them the iPhone 6: obsolete. That’s a very new idea for human beings. One of the standard features of technology is obsolescence. A situation where you are rapidly making people themselves obsolete seems wrongheaded to me.

by Murtaza Hussain, The Intercept | Read more:
Image: Luke Sharrett/Bloomberg via Getty Images

'The Stakes Are Enormous'

The candidate who lost to Trump is making all the right moves as some fear a primary gone too far left. It’s a tantalising notion, but most observers counsel caution – and a dose of realism

Hillary Clinton speaks at the funeral service for Elijah Cummings, in Baltimore late last month.

A high-profile book tour. Countless TV interviews. Political combat with a Democratic primary candidate and Donald Trump. A year before the US presidential election, it looks like a campaign and it sounds like a campaign but it isn’t a campaign. At least, not as far as anyone knows.

Yet a recent surge of activity by Hillary Clinton, combined with reports and columns suggesting the Democrats have not found the right candidate, have made a 2016 rematch a fun, speculative and potentially intriguing topic of Washington conversation.

‘The stakes are enormous’: is Hillary Clinton set for a White House run? (The Guardian)

[ed. Ack... just kill me now. The fact that anyone's even speculating about this (mostly brain-dead media/Washington political hacks, but still) should be cause for alarm. It's the same political calculus that motivates Biden - if you win the primary, it's home free. Because, Blue... No Matter Who.]

Public Pension Funds Criticized for Profiting From Private Equity’s “Surprise Billing” Abuse

We’ve described how private equity is behind the stunningly widespread abuse known as “surprise billing” or “balance billing”. This occurs when Americans with health insurance get hit with “out of network” charges for ambulances, emergency room services, or even with scheduled surgeries when they did what they could to make sure that only medical professionals in their networks were part of their operating room team.

This scam has become so widespread that a 2019 survey by the Kauffman Foundation found that 40% of American families had been hit with an unexpected medical bill, and half of those were for out of network charges. Other studies found that over one in four emergency room visits resulted in a surprise bill, as did over four in ten ambulance rides. No wonder there have been more and more efforts at the state and local level to prohibit or severely limit this practice.

Eileen Appelbaum, co-head of CEPR, has identified the hidden hand behind this abuse: private equity. In turn, consumer and patient advocates have wised up and are starting to pressure the public pension funds that profit from investing in the private equity funds that have been leading this abuse, KKR and Blackstone. This is the key section of a must-read post by Appelbaum:
The problem of surprise billing has grown substantially in recent years because hospitals have been under financial pressure to reduce overall costs and have turned to outsourcing expensive and critical services to third-party providers as a cost-reduction strategy. Outsourcing is not new, as hospitals began outsourcing non-medical ancillary services such as facilities management and food services in the 1980s… 
Recent outsourcing, however, has expanded to critical care areas – emergency rooms, radiology, anesthesiology, surgical care, and specialized units for burn, trauma, or neo-natal care. Now hospitals contract with specialty physician practices or professional physician staffing firms to provide these services – even if the patient receives treatment at a hospital or at an outpatient center that is in the patients’ insurance network. According to one study, surprise billing is concentrated in those hospitals that have outsourced their emergency rooms.[vii] A recent report found that almost 65 percent of U.S. hospitals now have emergency rooms that are staffed by outside companies.[viii]…
Private equity firms have played a critical role in consolidating physicians’ practices into large national staffing firms with substantial bargaining power vis-à-vis hospitals and insurance companies. They have also bought up other emergency providers, such as ambulance and medical transport services. They grow by buying up many small specialty practices and ‘rolling them up’ into umbrella organizations that serve healthcare systems across the United States. Mergers of large physician staffing firms to create national powerhouses have also occurred. As these companies grow in scale and scope and become the major providers of outsourced services, they have gained greater market power in their negotiations with both hospitals and insurance companies: hospitals with whom they contract to provide services and insurance companies who are responsible for paying the doctors’ bills. 
Hospitals have consolidated in order to gain market share and negotiate higher insurance payments for procedures. Healthcare costs have been driven up further by the dynamics associated with payments for out-of-network services. As physicians’ practices merge or are bought out and rolled up by private equity firms, their ability to raise prices that patients or their insurance companies pay for these doctors’ services increases. The larger the share of the market these physician staffing firms control, the greater their ability to charge high out-of-network fees. The likelihood of surprise medical bills goes up, and this is especially true when insurance companies find few doctors with these specialties in a given region with whom they can negotiate reasonable charges for their services.
The design of the private equity business model is geared to driving up the costs of patient care. Private equity funds rely on the classic leveraged buyout model (LBO) in which they use substantial debt to buy out companies (in this case specialty physician practices as well as ambulance services) because debt multiplies returns if the investment is successful. They target companies that have a steady and high cash flow so they can manage the cash in order to service the debt and make high enough returns to pay their investors ‘outsized returns’ that exceed the stock market.[xi] Emergency medical practices are a perfect buyout target because demand is inelastic, that is, it does not decline when prices go up. Moreover, demand for these services is large – almost 50 percent of medical care comes from emergency room visits, according to a 2017 national study by the University of Maryland School of Medicine, and demand has steadily increased.[xii] PE firms believe they face little or no downside market risk in these buyouts.
Appelbaum has carefully documented the history of the two biggest players in this patient-muscling operation: Envision Healthcare, a rollup of emergency ambulance and specialty physicians’ practices now owned by KKR funds, and TeamHealth, a healthcare staffing company now owned by Blackstone funds that provides hospitals with ER professionals, anesthesiologists, hospitalists, and hospital specialists such as OB/GYN, orthopedics, general surgery, and pediatric services, as well as post-acute care. And she shows that private equity ownership has led to extortionate billing practices: (...)

We also pointed out how the sudden death of a promising and pretty comprehensive California bill to end surprise billing, where no one even deigned to explain what happened, had all the hallmarks of heavyweight donors putting the kibosh on it. The revelation of private equity as the big moving force behind surprise bills makes it all make sense.

by Yves Smith, Naked Capitalism |  Read more:
[ed. See also: Elizabeth Warren’s Plan Is a Massive Win for the Medicare for All Movement (The Intercept).]

This Alaskan Beat the Odds at the Supreme Court - It Cost $1.5 Million

Moose hunter John Sturgeon serves as both inspiration and warning for anyone who has ever gotten worked up over a perceived injustice and vowed to fight it “all the way to the Supreme Court.”

An inspiration because Sturgeon took on the federal government and - not once but twice - beat the odds to get the high court to accept his case and rule in his favor.

Why a warning? Because Sturgeon's 12-year, only-in-Alaska battle to travel on a forbidden hovercraft through national parkland to his favorite hunting spot cost well north of $1.5 million.

John Sturgeon, in front of the Supreme Court, is part of a case involving a hovercraft and moose hunting, on January 17, 2016, in Washington, D.C. (Bill O'Leary / The Washington Post)

“I had no idea how much it was going to cost, but you start down this slide and there’s no stopping it,” Sturgeon said. “Not many people could do what I did, because they don’t have the financial resources, which I don’t either. But I did have a cause that really ignited people.”

Sturgeon agreed to let The Washington Post examine the details of his costs and the donations to his cause to illuminate what it takes to bring a lawsuit before the Supreme Court.

Among his donors: the Alaska Wildlife and Conservation Fund, the National Rifle Association, the Alaska Conservative Trust, national and international hunting groups, hundreds of ordinary Alaskans and one very wealthy one.

Edward Rasmuson read about Sturgeon's case, called him up and found him sincere, and then offered to help pay the legal bill. "I maybe gave $250,000 to $300,000 to $400,000 - hell, I don't know," Rasmuson said in an interview. "But I'm fortunate. I'm wealthy, I can afford it."

The money covered things such as the $20,891.89 bill to print legal documents exactly as the Supreme Court requires. To reimburse the $11,063.25 in hotel costs for Sturgeon's lawyers to hone their strategy at three moot courts in Washington. To pay for 3,691 hours of legal work at the law firms that have represented him since 2011.

At the end of summer, Sturgeon's supporters boarded a stern-wheeler for a trip down the Chena River for one last fundraiser. It was billed as the "Thanks a Million Victory Cruise." There were drinks and dinner, tributes to Sturgeon and a silent auction offering uniquely Alaskan items such as a gold nugget and several fur pelts donated by Willy Keppel of the village of Quinhagak, nearly 600 miles away.

"Hope this helps," wrote Keppel, who said he was strapped for cash but wanted to contribute. "Thank you for taking the fight to the feds!"

Republican Gov. Mike Dunleavy was on board to call Sturgeon a “hero” (and to submit the winning bid for one of the pelts). In a sign of how broad the support for Sturgeon is, also aboard were several prominent Alaskans who have signed a petition to remove Dunleavy from office.

Sturgeon's case resonated because it could bring a long-sought clarification of the Alaska National Interest Lands Conservation Act (ANILCA), with which Congress set aside more than 100 million acres for preservation. Alaskans have argued that Congress did not intend for the land to be regulated like other federal parkland and preserves because the way of life is so different in the Last Frontier.

Sturgeon said he knew only that he didn’t think National Park rangers had the authority to tell him he could not use the hovercraft as he had for years.

"I called the state of Alaska and said, 'Aren't you supposed to manage the state's rivers?' and they said yes," Sturgeon recalled. "That's when I kinda decided I wanted to, maybe, you know, sue the federal government. But I didn't want a frivolous lawsuit."

He went to an Anchorage lawyer named Doug Pope and laid out the situation. Pope did some research and came back with good news and bad news.

"Not only do you have a case, but this could go all the way to the Supreme Court," Pope told Sturgeon.

The bad news: A legal fight like that could take six years or more, and maybe cost about $700,000.

Pope advised instead: "Spend the money on your grandkids. By the time this is done, are you even still going to be moose-hunting, John?"

Turns out he is - he filled the freezer just a couple of weeks ago - and his grandkids will be just fine.

The origins of Sturgeon’s case have been told now in two Supreme Court decisions. In the fall of 2007, he was trying to fix a steering cable on his hovercraft, which was beached on a gravel shoal of the Nation River, within the Yukon-Charley Rivers National Preserve.

Sturgeon for years had used his hovercraft to traverse the shallow rivers to his favorite hunting spot near the Canadian border. But on this day, he was approached by National Park rangers who informed him that the craft was banned in all federal park lands. Not only was he barred from using it to get to the hunting ground, he was told he could not use it to get home.

by Robert Barnes, WaPo via ADN | Read more:
Image: Bill O'Leary/The Washington Post