Thursday, September 19, 2019


Crow Native Americans watching the rodeo at Crow Fair in Montana, 1941
via:

David Byrne


Well the wind so strong, it’s blown us all around
Wind so strong, nobody settle down
Ev'ryday another apocalypse
Had a TV but I don’t know how deep it is


Wednesday, September 18, 2019

California’s Luxury Dining Circuit: Delicious and Dull


Late in the summer, late in the afternoon, I woke from a nap by a glittering pool. Over the last few days in California wine country, I had eaten macaroni and cheese out of a golden egg and broken into a juicy quenelle of caviar over softly set custard. I had drawn slices of aged beef, so tender it barely required chewing, through a sticky, peppery Cognac sauce and rinsed, after dessert, before petits fours, with a glug of Sauternes.

In other words, I had reached my final form and stepped into the old stereotype of the restaurant critic, driving a rental car through wine country, racking up the expenses. And I was feeling sedated by this ideal of luxury: technically flawless, incredibly expensive and, in the end, somewhat predictable.

For decades, the region’s hospitality business has grown alongside its wine industry, and tourists have come here for the small towns and extreme leisure — restaurants, golf courses, spas — and maybe the odd novelty magnet that says “Wine Time” in wiggly letters.

When I woke up and checked my schedule, it was, in fact, wine time. It was always wine time.

Few parts of the country have such a concentration of this nostalgic genre of fine dining: grand destination restaurants with big reputations, extravagant food and deep wine cellars. When Michelin released its 2019 guide to California dining in June, the tire company’s anonymous inspectors awarded three restaurants in the area three stars each, the highest rating, suggesting they were “worth a special journey.”

by Tejal Rao, NY Times | Read more:
Image: Preston Gannaway for The New York Times

The Black Swan Is a Drone

What was "possible" yesterday is now a low-cost proven capability, and the consequences are far from predictable.

Predictably, the mainstream media is serving up heaping portions of reassurances that the drone attacks on Saudi oil facilities are no big deal and full production will resume shortly. The obvious goal is to placate global markets fearful of an energy disruption that could tip a precarious global economy into recession.

The real impact isn't on short-term oil prices, it's on asymmetric warfare: the coordinated drone attack on Saudi oil facilities is a Black Swan event that is reverberating around the world, awakening copycats and exposing the impossibility of defending against low-cost drones of the sort anyone can buy.

(Some published estimates place the total cost of the 10 drones deployed in the strike at $15,000. Highly capable commercially available drones cost around $1,200 each.)

The attack's success should be a wake-up call to everyone tasked with defending highly flammable critical infrastructure: there really isn't any reliable defense against a coordinated drone attack, nor is there any reliable way to distinguish between an Amazon drone delivering a package and a drone delivering a bomb.

Whatever authentication protocol might be required of drones in the future--an ID beacon or equivalent--can be spoofed. For example: bring down an authenticated drone (using nets, etc.), swap out the guidance and payload, and away it goes. Or steal authentication beacons from suppliers, or hack an authenticated drone in flight, land it, swap out the payload--the list of spoofing workarounds is extensive.

This is asymmetric warfare on a new scale: $20,000 of drones can wreak $20 million in damage and financial losses of $200 million--or $2 billion or $20 billion, if global markets are upended.

If it's impossible to defend against coordinated drone attacks, and impossible to differentiate "good" drones from "bad" drones, then the only reliable defense is to ban drones entirely from wide swaths of territory.

So much for the lightly regulated commercialization of drones.

What sort of light bulbs are going off in the minds of copycats? It doesn't take much imagination to see the potential for mayhem--and without sacrificing your own life. I won't elaborate on the possibilities here, but they're obvious to us all.

The range and payload of low-cost drones are limited. The big drones can fly hundreds of miles and carry hundreds of pounds of weaponry, but these can be targeted by radar and conventional ground-to-air missiles. So-called hobby drones skimming over the rooftops (or deserts or forests) are difficult to shoot down, especially if the attack is coordinated to arrive from multiple directions.

Small hobby drones may only carry 3 kg (roughly 6.6 pounds), but how much damage can 3 kg of high explosives cause? The answer is "considerable" if the target is flammable or consists of lightly shielded electronics.

Larger commercially available drones can carry up to 20 kg (roughly 44 pounds)--more than enough explosive capacity to take out any number of targets.

Defense and intelligence agencies have no doubt war-gamed the potential for coordinated drone attacks, and the world's advanced militaries are already exploring the potential for self-organizing "drone hordes" of hundreds or even thousands of drones overwhelming defenders with sheer numbers. The success of the oil facilities attack proves the effectiveness of much smaller scale drone attacks.

Put yourself in the shoes of those tasked with securing hundreds of miles of pipelines carrying oil and natural gas around the world. What’s your defense against drone attacks? A.I.-controlled or remote-operated gun towers every few hundred yards, along thousands of miles of pipelines? Human patrols covering the entire pipeline 24/7? Such defenses would burden the defenders with enormous costs without providing 100% reliable security.

by Charles Hugh Smith, Oftwominds.com |  Read more:
Image: via

Losing The Narrative Battle Over Iran

I’m expected to write something about the Trump administration’s warmongering against Iran over an attack on a Saudi oil refinery, because that’s typically what I do in this ongoing improvisational exercise of mine: I write about the behavior of the US war machine and the propaganda that is used to bolster it. It’s what my readers have come to expect. But honestly I find the whole thing extremely tedious and I’ve been putting off writing about it for two days.

This is because from a propaganda analysis point of view, there’s really not much to write about. The Trump administration has been making bumbling, ham-fisted attempts at manufacturing public support for increasing aggressions against Iran since it initiated withdrawal from the JCPOA a year and a half ago, yet according to a Gallup poll last month Americans still overwhelmingly support diplomatic solutions with Tehran over any kind of military aggression at all. In contrast, most Americans supported a full-scale ground invasion of Iraq according to Gallup polls taken in the lead-up to that 2003 atrocity. With the far less committed Libya intervention, it was 47 percent supportive of US military action versus 37 percent opposed.

That’s the kind of support it takes to get a US war off the ground these days. And it’s going to take a lot more than a busted Saudi oil refinery to get there, even in the completely unproven event that it was indeed Iran which launched the attack.

The reason I’m able to spend so much time writing about war propaganda as part of my job is because war propaganda is happening constantly, and the reason war propaganda is happening constantly is because it’s absolutely necessary for the perpetuation of the US-centralized empire’s slow-motion third world war against unabsorbed governments. In other words, the propaganda apparatus of the empire works constantly to manufacture consent for military aggressions because it absolutely requires that consent.

When I say that the imperial war machine requires public consent before it can initiate overt warfare, I’m not saying that the US government is physically or legally incapable of launching a war that the public disapproves of, I’m saying that it is absolutely essential for the drivers of empire to preserve the illusion of freedom and democracy in America. People need to feel like their government is basically acting in everyone’s best interest, and that it is answerable to the will of the electorate, otherwise the illusion of freedom and democracy is shattered and people lose all trust in their government and media. If people no longer trust the political/media class, they can’t be propagandized. Without the ability to propagandize the masses, the empire collapses.

So out of sheer self-interest, establishment power structures necessarily avoid overt warfare until they have successfully manufactured consent for it. If they didn’t do this and chose instead to take off the nice guy mask, say “Screw you we’re doing what we want,” and start butchering Iranians at many times the cost of Iraq in both money and in American lives lost, people would immediately lose trust in their institutions and the narrative matrix which holds the whole thing together would crack open like an egg. From there revolution would become an inevitability as people are no longer being successfully propagandized by the establishment narrative managers into believing that the system is working fine for everyone.

Think about it: why else would the mass media be churning out propaganda about disobedient governments like Iran, Venezuela, Syria, Russia and China if they didn’t need to? They need the citizenry they’re charged with manipulating to consent to important geostrategic imperialist maneuvers, or they’ll break the hypnotic trance of relentless narrative control. And make no mistake, maintaining narrative control is the single highest priority of establishment power structures, because it’s absolutely foundational to those structures.

This is why the warmongers have been favoring economic warfare over conventional warfare; it’s much easier to manufacture support for civilian-slaughtering starvation sanctions. It’s slower, it’s sloppier, and it’s surely a lot less fun for the psychopaths in charge, but because the public will consent to economic sanctions far more readily than ground invasions or air strikes, it’s been the favored method in bringing disobedient governments to their knees. That’s how important manufacturing consent is.

So a bunch of drama around a Saudi oil refinery isn’t going to do the trick. The US government is not going to leap into an all-out war which would inevitably be many times worse than Iraq based on that, because they can’t manufacture consent for it right now. All they’re trying to do is escalate things a bit further with the goal of eventually getting to a point where Iran either caves to Washington’s demands or launches a deadly attack, at which point the US can play victim and the mass media can spend days tearfully running photos of the slain US troops. If that happens they might gain their consent from the public. If not, we may see them get a little more creative with their “crisis initiation”.

Until then this is a whole lot of noise and very little signal, which is why I find this current circus uninteresting to write about. It seems like every week now the Trump administration is trotting out some new narrative with the help of the mass media explaining why the Iranian government is evil and must be toppled, and nobody buys it because it’s on the other side of the damn planet and it’s always about something silly like oil or broken drones. Their unappealing pestering about this is starting to remind me of a really awkward loser who’s constantly asking out the prettiest girl in the office over and over again; you just want to pull him aside and say dude, stop. She’s just not into you.

by Caitlin Johnstone, Medium |  Read more:

Tuesday, September 17, 2019

Against Against Pseudoaddiction

“Pseudoaddiction” is one of the standard beats every article on the opioid crisis has to hit. Pharma companies (the story goes) invented a concept called “pseudoaddiction”, which looks exactly like addiction, except it means you just need to give the patient more drugs. Bizarrely gullible doctors went along with this and increased prescriptions for their addicted patients. For example, from a letter in the Wall Street Journal:
Parroting Big Pharma’s excuses about FDA oversight and black-box warnings only discounts how companies like Johnson & Johnson engaged in pervasive misinformation campaigns and even promoted a theory of “pseudoaddiction” to encourage doctors to prescribe even more opioids for patients who displayed signs of addiction.
Or from CBS:
But amid skyrocketing addiction rates and overdoses related to OxyContin, Panara claimed the company taught a sales tactic she now considers questionable, saying some patients might only appear to be addicted when in fact they’re just in pain. In training, she was taught a term for this: "pseudoaddiction."
“So the cure for ‘pseudoaddiction,’ you were trained, is more opioids?” Dokoupil asked. 
“A higher dose, yes,” Panara said. 
“Did this concept of pseudoaddiction come with studies backing it up?” 
“We had no studies. We actually — we did not have any studies. That’s the thing that was kind of disturbing, was that we didn’t have studies to present to the doctors,” Panara responded. 
“You know how that sounds?” Dokoupil asked. 
“I know. I was naïve,” Panara said. (...)
Let me confess: I think pseudoaddiction is real. In fact, I think it’s obviously real. I think everyone should realize it’s real as soon as it’s explained properly to them. I think we should be terrified that any of our institutions – media, academia, whatever – think they could possibly get away with claiming pseudoaddiction isn’t real. I think people should be taking to the streets trying to overthrow a medical system that has the slightest doubt about whether pseudoaddiction is real. If you can think of more hyperbolic statements about pseudoaddiction, I probably believe those too.

Neuroscientists define addiction in terms of complicated brain changes, but ordinary doctors just go off behavior. The average doctor treats “addiction” and “drug-seeking behavior” as synonymous. This paper lists signs of drug-seeking behavior that doctors should watch out for, like:

– Aggressively complaining about a need for a drug
– Requesting to have the dose increased
– Asking for specific drugs by name
– Taking a few extra, unauthorised doses on occasion
– Frequently calling the clinic
– Unwilling to consider other drugs or non-drug treatments
– Frequent unauthorised dose escalations after being told that it is inappropriate
– Consistently disruptive behaviour when arriving at the clinic

You might notice that all of these are things people might do if they actually need the drug. Consider this classic case study of pseudoaddiction from Weissman & Haddox, summarized by Greene & Chambers:
The 1989 introduction of pseudoaddiction happened in the form of a single case report of a 17-year-old man with acute leukemia, who was hospitalized with pneumonia and chest wall pain. The patient was initially given 5 mg of intravenous morphine every 4 to 6 h on an as-needed dosing schedule but received additional doses and analgesics over time. After a few days, the patient started engaging in behaviors that are frequently associated with opioid addiction, such as requesting medication prior to scheduled dosing, requesting specific opioids, and engaging in pain behaviors (e.g., moaning, crying, grimacing, and complaining about various aches and pains) to elicit drug delivery. The authors argued that this was not idiopathic opioid addiction but pseudoaddiction, which resulted from medical under-treatment (insufficient opioid dosing, utilization of opioids with inadequate potency, excessive dosing intervals) of the patient’s pain. In describing pseudoaddiction as an “iatrogenic” syndrome, Weissman and Haddox inverted the traditional usage of iatrogenic as harm caused by a medical intervention. In pseudoaddiction, iatrogenic harm was described as being caused by withholding treatment (opioids), not by providing it.
Greene & Chambers present this as some kind of exotic novel hypothesis, but think about this for a second like a normal human being. You have a kid with a very painful form of cancer. His doctor guesses at what the right dose of painkillers should be. After getting this dose of painkillers, the kid continues to "engage in pain behaviors, i.e., moaning, crying, grimacing, and complaining about various aches and pains", and begs for a higher dose of painkillers.

I maintain that the normal human thought process is “Since this kid is screaming in pain, looks like I guessed wrong about the right amount of painkillers for him, I should give him more.”

The official medical-system approved thought process, which Greene & Chambers are defending in this paper, is “Since he is displaying signs of drug-seeking behavior, he must be an addict trying to con you into giving him his next fix.” They never come out and say this. But they define pseudoaddiction as meaning not that, and end up saying “in conclusion, we find no empirical evidence yet exists to justify a clinical ‘diagnosis’ of pseudoaddiction.” More on this later.

The concept of “pseudoaddiction” was invented as a corrective to an all-too-common tendency for doctors to assume that anyone who seems too interested in getting more medications is necessarily an addict. It was invented not by pharma companies, but by doctors working with patients in pain, building upon a hundred-year-long history of other doctors and medical educators trying to explain the same point.

And in case you think this is a weird ivory tower debate that doesn’t influence real clinical practice, I offer you these cases from my own experience. Stories slightly changed or merged together to protect patient privacy:

Case 1: Mary is an elderly woman who undergoes a surgery known to have a painful recovery process. The surgeon prescribes a dose of painkillers once every six hours. The painkillers last four hours. From hours 4-6, Mary is in terrible pain. During one of these periods, she says that she wishes she was dead. The surgeon leaps into action by…calling the on-call psychiatrist and saying “Hey, there’s a suicidal person on my ward, you should do psychiatry to her or something.” I am the on-call psychiatrist. After a brief evaluation, I tell the surgeon that Mary has no psychiatric illness but needs painkillers every four hours. The surgeon lectures me on how There Is An Opioid Crisis, Y’Know, and we can’t negotiate with addicts and drug-seekers. I am a consultant on the case and can’t overrule the surgeon on his own ward, so I just hang out with Mary for a while and talk about things and distract her and listen to her scream during the worst part of the six-hour cycle. After a few days the surgery has healed to the point where Mary is only in excruciating pain rather than actively suicidal, and so we send her home.

Case 2: Juan is a middle-aged man with depression who is using Geodon for antidepressant augmentation. This is kind of a weird choice, and has theoretical potential to interact poorly with some of his other medications, but nothing else has worked for him and he’s done great for ten years. He switches psychiatrists. The new psychiatrist is really worried about the theoretical interaction, so he tells him that he can’t take Geodon anymore and switches him to something else. Juan falls into a deep depression. He asks to have Geodon back and the doctor says no. Juan yells at the psychiatrist and says he is ruining his life. The psychiatrist diagnoses him with a personality disorder and anger management problems, and tells him to attend therapy. Juan actually does this for a while, but eventually wises up and switches doctors to me. I put him back on Geodon and within a month he’s doing great again. Note that Juan displayed every sign of “drug-seeking behavior” even though Geodon is not addictive.

Case 3: This one courtesy of Zvi. Zvi’s friend is diabetic. He runs out of insulin and asks his doctor for more. The doctor wants to wait until his next free appointment in a few weeks before prescribing the insulin. Zvi’s friend points out that he will die unless he gets more insulin now. The doctor gets very angry about this and spends a long phone call haranguing Zvi’s friend about how inconvenient it is that he’s demanding the insulin now rather than at a more convenient time. Zvi’s friend has to threaten the doctor with a lawsuit before the doctor finally relents and gives him the insulin. I like this story because, again, insulin is not addictive, there is no way that the patient could possibly be doing anything wrong, but the patient still gets treated as a drug-seeker. The very act of wanting medication according to the logic of his own disease, rather than at the doctor’s convenience, is enough to make his request suspicious.

Case 4: John is a 70-year-old man who has been on opioids for 30 years due to a mining-related injury. He is doing very well. I am his outpatient psychiatrist but I only see him once every few months to renew meds. He gets some kind of infection, goes to the hospital, and due to normal hospital incompetence he doesn’t get his opioids. He demands his meds, and like many 70-year-old ex-miners in terrible pain, he is not diligently polite the whole time. The hospital doctors are excited: they have caught an opioid addict! They tell his family and outpatient doctors he cannot have opioids from now on, then discharge him. He continues to be in terrible pain. At first he sneaks pills from an extra bottle of opioids he has at home, but eventually he uses all those up. After this, he is still in terrible pain with no reason to expect this to ever change, and so he quite reasonably shoots himself in the chest. This is the first point in this entire process at which anyone attempts to tell me any of this is going on, so I get a “HEY DID YOU KNOW YOUR PATIENT SHOT HIMSELF? DOESN’T SEEM LIKE YOU’RE DOING VERY GOOD PSYCHIATRIST-ING?” call. The patient miraculously survives, eventually finds a new pain doctor, and goes on to live a normal and happy life on the same dose of opioids he was using before.

Let’s look at those warning signs of addiction again:

– Aggressively complaining about a need for a drug
– Requesting to have the dose increased
– Asking for specific drugs by name
– Taking a few extra, unauthorised doses on occasion
– Frequently calling the clinic
– Unwilling to consider other drugs or non-drug treatments
– Frequent unauthorised dose escalations after being told that it is inappropriate
– Consistently disruptive behaviour when arriving at the clinic


In Case 1, Mary requested her dose of painkiller be increased (from once per six hours to once per four hours). In Case 2, Juan asked for a specific drug by name (Geodon), and was unwilling to consider other drugs. In Case 3, Zvi’s friend frequently called the clinic (to get them to refill his insulin). In Case 4, John showed consistently disruptive behavior in the hospital and took extra unauthorized doses. Etc.

All of these are drug-seeking behaviors. But I maintain that none of these patients were addicted. The correct action in all of these cases is to listen to the patient’s reasons for wanting the drug, realize that you (the doctor) screwed up, and give them the drug that they are asking for. Although the point that these behaviors can be signs of addiction is well-taken and important, it’s equally important to remember they can be signs of other things too.

Media portrayals of pseudoaddiction portray it as this bizarre contortion of logic: “A patient is displaying signs of addiction, so you should give them more of the drug! Haha, nice try, pharma companies!” But this is exactly what you should do! The real problem lies with anyone who conceptualizes pseudoaddiction as a novel hypothesis that requires proof, rather than as the obvious possibility you have to check for before accusing patients of addiction. (...)

As far as I can tell, the concept started off well-intentioned. But painkiller companies realized that the debate over when to diagnose addiction vs. pseudoaddiction was relevant to their bottom line, and started funding the pseudoaddiction side of it.

I’m not sure how substantial an effort this was. G&C note that of 224 papers mentioning pseudoaddiction, 22 were sponsored by pharma (but that means 202 weren’t). Of a stricter category of 12 papers that focused on arguing for the concept, 4 were sponsored by pharma (but 8 were not). Taking their numbers at face value, the majority of discussion of pseudoaddiction had no pharma company sponsorship. But the image of an expert getting up in front of a medical conference and telling doctors that the solution to opioid addiction was more opioids – something that certainly did happen, I’m not sure how often – was so lurid that it burned itself into the popular consciousness. The media exaggerated this from “basically good idea gets misused” to “doctors invent vicious lies to addict your loved ones” to get more clicks. Experts didn’t want to be the guy saying “well actually” in the middle of an Opioid Crisis, so they kept their mouths shut. Reporters copied each others’ denunciations of ‘pseudoaddiction’ without checking what the term really meant.

Into all this came the drug warriors. It’s hard for me to be angry at addictionologists, because they have a terrible job and are probably traumatized by it. But they really hate drugs and will say whatever it takes to make you hate drugs too. These are the people who gave us articles on how one hit of marijuana will get you addicted forever and definitely kill you, how one hit of LSD will make you go crazy and get addicted and probably kill you, how there can never be any legitimate medical reason for using cannabis, how e-cigarettes are deadly poison, and other similar classics. Sensing that they had the high ground, they wrote a couple of papers about how pseudoaddiction isn’t “empirically proven”, as if this were a meaningful claim. This gave the media the ammunition they needed to declare that pseudoaddiction was always pseudoscience and has now been debunked and well-refuted.

This is just my story, and it’s kind of bulverist. But if you think it’s plausible, I recommend the following lessons:

First, when the media decides to craft a narrative, and the government decides to hold a moral panic, arguments get treated as soldiers. Anything that might sound like it supports the “wrong” side will be mercilessly debunked, no matter how true it is. Anything that supports the “right” side will be celebrated and accepted as obvious, no matter how bad its arguments. Good scientists feel afraid to speak up and question the story, lest they be seen as “soft on the Opioid Crisis” or “stooges of Big Pharma”. This happens again and again on any issue people care about, and I want to reiterate for the nth time that you should treat reporting on medical, scientific, and social scientific topics as having almost zero credibility.

Second, you should stay cautious about bias arguments. Yes, some people pushed pseudoaddiction because they were shills of the opioid companies. But other people pushed pseudoaddiction because it was true. Just because you can generate the hypothesis “maybe people are just shills of the opioid companies” doesn’t mean you’ve disproven pseudoaddiction. And if you focus too hard on the opioid companies’ obvious financial bias, then you’ll miss less obvious but possibly more important biases like those of the drug warriors. Your best bet would have been to just stop worrying about biases and try to figure out what was actually true.

by Scott Alexander, Slate Star Codex |  Read more:
[ed. For an excellent up-to-the-minute example of the opioid hysteria (and political posturing) making people's lives miserable, see also: US attack on WHO 'hindering morphine drive in poor countries' (The Guardian).]

Moose Run Creek Course, AK
Image: markk

How Medicare for All Looks From Britain

Babies are often expensive for creatures that are so small: they need new clothes, bedding, toys once they’re a little more agile, and the time you spend caring for them isn’t spent working, so your bank balance is run down for every day you aren’t in the office.

By posting a photograph of the bill she received after the birth of her second child, Washington Post columnist Elizabeth Bruenig also underscored that in the United States, the care you and your child receive during the delivery also costs money; even though Bruenig is insured, the hospital billed her nearly $8,000. The responses were mixed: many people understandably found the idea of billing people for bringing new life into being abhorrent, but others were defensive — child-rearing was a choice, their argument went, and having children was bound to cost money, so no one should complain that some companies were profiting from the creation of future generations.

Travelling to the United States several years ago, I spent more than twice as much time searching for insurance as booking flights and accommodation and planning a sightseeing itinerary combined. My friends sorted their insurance quickly and cheaply, but finding a company that would insure me for less than the cost of the return flight was a challenge. Since insurance is essentially gambling with risk, the vast majority of companies were unwilling to take a chance on a traveller with a rare genetic condition that causes multiple tumors to grow in my spinal cord and severe, poorly controlled epilepsy. Finally, I found a reasonable quote, but spent a huge amount of time in fear of seizing in public and being rushed to hospital, racking up an enormous bill.

Mercifully, I was seizure free for a week in New York, but came down with a brutal chest infection, coughing like a medieval peasant with tuberculosis, and raided CVS for anything that might help so as to avoid having to seek medical help. The cough left me unable to sleep for longer than a couple of hours a night and made enemies of the people around me on the flight back. The pain, fever, and shortness of breath made the tail end of my holiday miserable, but the fear of an expensive medical bill affected me far more. On returning to London, I secured an appointment with the National Health Service (NHS) quickly, was diagnosed with pneumonia, and sent home with a free prescription. I didn’t pay a penny for anything.

After a recent seizure left me unconscious for several minutes, I was kept in hospital for a little over a week. I had my own room in a facility directly across the river from the Houses of Parliament. Doctors performed multiple tests, including full-body MRIs, CT scans, and tests that tracked the electrical activity of my heart and brain, and staff gave me three meals a day, many cups of coffee, and medication at regular intervals. Free Wi-Fi throughout the hospital meant that when I felt able to, I could work on my laptop and explore the hospital grounds with friends. As we sat in the well-manicured gardens outside one of the hospital cafes, an American friend marveled that the place was “like a mini-city.”

It’s often, correctly, observed that to people in the United Kingdom the National Health Service is akin to a religion. Ever since its creation after World War II, the mere suggestion, by any party, of a shift away from free health care has provoked horror in the electorate. To British people, the US model of health care looks like a hellscape: the easiest way to go viral in the United Kingdom is to post a scan of a US hospital bill and be met with horror by British people from across the political spectrum. Bruenig’s delivery may be at the lower end of that scale, but the outlook of those Americans tweeting about how health care shouldn’t be free is as alien to UK Conservatives as it is to the Left.

The NHS has changed the psychology of an entire nation, across multiple generations: we know that no matter what happens we will receive care and pay little, if anything, for it. Nowadays there are some costs: prescriptions are charged at a flat rate of £9 per item, but a large number of people are exempt — children, pregnant people, pensioners, low earners, and people like me, with certain health conditions that require daily medication, such as epilepsy, diabetes, thyroid issues, and cancer. Shortly after the introduction of the NHS, the government opted to impose charges for spectacles, wigs, and dentures, a highly controversial move that provoked the resignation of health minister Aneurin Bevan, the father of the NHS system. It was an unpopular choice but one that has stuck. (Again, there are certain exemptions similar to those listed above, and my eyesight is so poor that I am given free eye tests and money off the cost of my lenses.)

Even those in Britain who pay to access private health care, either through work or through their personal wealth, use the NHS: their doctors are trained in the NHS, and specialist care is often available only through the NHS for rare and complex diseases. If you need to visit the emergency room, you’ll be taken to an NHS hospital. People might complain about certain aspects of the NHS, such as waiting times, or personal treatment when they disagree with a doctor, but these are minor gripes, and few would claim that charging people would improve matters. Most problems with the NHS are caused by underfunding at the hands of governments that will happily finance wars while cutting funding for nurses. The United Kingdom spends only about $4,000 per capita on health — the lowest of any G7 country save Italy — compared to more than $10,000 in the United States.

by Dawn Foster, Jacobin |  Read more:
Image: Christopher Furlong / Getty Images
[ed. See also: Does Anyone Really ‘Love’ Their Private Health Insurance? (NY Times).]

The Smug Style in American Liberalism

There is a smug style in American liberalism. It has been growing these past decades. It is a way of conducting politics, predicated on the belief that American life is not divided by moral difference or policy divergence — not really — but by the failure of half the country to know what's good for them.

In 2016, the smug style has found expression in media and in policy, in the attitudes of liberals both visible and private, providing a foundational set of assumptions above which a great number of liberals comport their understanding of the world.

It has led an American ideology hitherto responsible for a great share of the good accomplished over the past century of our political life to a posture of reaction and disrespect: a condescending, defensive sneer toward any person or movement outside of its consensus, dressed up as a monopoly on reason.

The smug style is a psychological reaction to a profound shift in American political demography.

Beginning in the middle of the 20th century, the working class, once the core of the coalition, began abandoning the Democratic Party. In 1948, in the immediate wake of Franklin Roosevelt, 66 percent of manual laborers voted for Democrats, along with 60 percent of farmers. In 1964, it was 55 percent of working-class voters. By 1980, it was 35 percent.

The white working class in particular saw even sharper declines. Despite historic advantages with both poor and middle-class white voters, by 2012 Democrats possessed only a 2-point advantage among poor white voters. Among white voters making between $30,000 and $75,000 per year, the GOP has taken a 17-point lead.

The consequence was a shift in liberalism's intellectual center of gravity. A movement once fleshed out in union halls and little magazines shifted into universities and major press, from the center of the country to its cities and elite enclaves. Minority voters remained, but bereft of the material and social capital required to dominate elite decision-making, they were largely excluded from an agenda driven by the new Democratic core: the educated, the coastal, and the professional.

It is not that these forces captured the party so much as it fell to them. When the laborer left, they remained.

The origins of this shift are overdetermined. Richard Nixon bears a large part of the blame, but so does Bill Clinton. The Southern Strategy, yes, but the destruction of labor unions, too. I have my own sympathies, but I do not propose to adjudicate that question here.

Suffice it to say, by the 1990s the better part of the working class wanted nothing to do with the word liberal. What remained of the American progressive elite was left to puzzle: What happened to our coalition?

Why did they abandon us?

What's the matter with Kansas?

The smug style arose to answer these questions. It provided an answer so simple and so emotionally satisfying that its success was perhaps inevitable: the theory that conservatism, and particularly the kind embraced by those out there in the country, was not a political ideology at all.

The trouble is that stupid hicks don't know what's good for them. They're getting conned by right-wingers and tent revivalists until they believe all the lies that've made them so wrong. They don't know any better. That's why they're voting against their own self-interest.

As anybody who has gone through a particularly nasty breakup knows, disdain cultivated in the aftermath of a divide quickly exceeds the original grievance. You lose somebody. You blame them. Soon, the blame is reason enough to keep them at a distance, the excuse to drive them even further away.

Finding comfort in the notion that their former allies were disdainful, hapless rubes, smug liberals created a culture animated by that contempt. The result is a self-fulfilling prophecy.

Financial incentive compounded this tendency — there is money, after all, in reassuring the bitter. Over 20 years, an industry arose to cater to the smug style. It began in humor, and culminated for a time in The Daily Show, a program that more than any other thing advanced the idea that liberal orthodoxy was a kind of educated savvy and that its opponents were, before anything else, stupid. The smug liberal found relief in ridiculing them.

The internet only made it worse. (...)

Of course, there is a smug style in every political movement: elitism among every ideology believing itself in possession of the solutions to society's ills. But few movements have let the smug tendency so corrupt them, or make so tenuous its case against its enemies.

"Conservatives are always at a bit of a disadvantage in the theater of mass democracy," the conservative editorialist Kevin Williamson wrote in National Review last October, "because people en masse aren't very bright or sophisticated, and they're vulnerable to cheap, hysterical emotional appeals."

The smug style thinks Williamson is wrong, of course, but not in principle. It's only that he's confused about who the hordes of stupid, hysterical people are voting for. The smug style reads Williamson and says, "No! You!"

Elites, real elites, might recognize one another by their superior knowledge. The smug recognize one another by their mutual knowing.

Knowing, for example, that the Founding Fathers were all secular deists. Knowing that you're actually, like, 30 times more likely to shoot yourself than an intruder. Knowing that those fools out in Kansas are voting against their own self-interest and that the trouble is Kansas doesn't know any better. Knowing all the jokes that signal this knowledge.

The studies, about Daily Show viewers and better-sized amygdalae, are knowing. It is the smug style's first premise: a politics defined by a command of the Correct Facts and signaled by an allegiance to the Correct Culture. A politics that is just the politics of smart people in command of Good Facts. A politics that insists it has no ideology at all, only facts. No moral convictions, only charts, the kind that keep them from "imposing their morals" like the bad guys do.

Knowing is the shibboleth into the smug style's culture, a culture that celebrates hip commitments and valorizes hip taste, that loves nothing more than hate-reading anyone who doesn't get them. A culture that has come to replace politics itself.

The knowing know that police reform, that abortion rights, that labor unions are important, but go no further: What is important, after all, is to signal that you know these things. What is important is to launch links and mockery at those who don't. The Good Facts are enough: Anybody who fails to capitulate to them is part of the Problem, is terminally uncool. No persuasion, only retweets. Eye roll, crying emoji, forward to John Oliver for sick burns.

by Emmett Rensin, Vox | Read more:
Image: Brittany Holloway-Brown

Monday, September 16, 2019

How Long Before The Salmon Are Gone?


How Long Before These Salmon Are Gone? ‘Maybe 20 Years’ (NY Times)
Image: Leon Werdinger, via Alamy

The Deep-Pocket Push to Preserve Surprise Medical Billing

As proposals to ban surprise medical bills move through Congress and state legislatures with rare bipartisan support, physician groups have emerged as the loudest opponents.

Often led by doctors with the veneer of noble concern for patients, physician-staffing firms—third-party companies that employ doctors and assign them out to health care facilities—have opposed efforts to limit the practice known as balance billing. They claim such bans would rob doctors of their leverage in negotiating, drive down their payments and push them out of insurance networks.

Opponents have been waging well-financed campaigns. Slick TV ads and congressional lobbyists seek to stop legislation that has widespread support from voters. Nearly 40% of patients said they were “very worried” about surprise medical bills, which generally arise when an insured individual inadvertently receives care from an out-of-network provider.

But as lobbyists purporting to represent doctors and hospitals fight the proposals, it has become increasingly clear that the force behind the multimillion-dollar crusade is not only medical professionals, but also investors in private equity and venture capital firms.

In the past eight years, in such fields as emergency medicine and anesthesia, investors have bought and now operate many large physician-staffing companies. And key to their highly profitable business strategy is to not participate in insurance networks, allowing them to send surprise bills and charge patients a price they set—with few limitations.

“We’ve started to realize it’s not us versus the hospitals or the doctors, it’s us versus the hedge funds,” said James Gelfand, senior vice president of health policy at ERIC, a group that represents large employers. (...)

To understand the power and size of private equity in the U.S. health care system, one must first understand physician-staffing firms.

Increasingly, hospitals have turned to third-party companies to fill their facilities with doctors. Among driving factors: physician shortages, a bigger insured population because of the Affordable Care Act and an aging population, according to research from the investment firm Harris Williams & Co.

In some areas, doctors have few options but to contract with a staffing service, which hires them out and helps with the billing and other administrative headaches that occupy much of a doctor’s time. Staffing companies often have profit-sharing agreements with hospitals, so some of the money from billing patients is passed back to the hospitals.

The two largest staffing firms, EmCare and TeamHealth, together make up about 30% of the physician-staffing market.

That’s where private equity comes in. A private equity firm buys companies and passes on the profits they squeeze out of them to the firm’s investors. Private equity deals in health care have doubled in the past 10 years. TeamHealth is owned by Blackstone, a private equity firm. Envision and EmCare are owned by KKR, another private equity firm.

With affiliates in every state, these privately owned, profit-driven companies staff emergency rooms, own dialysis facilities and operate physician practices. Research from 2017 shows that when EmCare entered a market, out-of-network billing rates went up between 81 and 90 percentage points. When TeamHealth began working with a hospital, its rates increased by 33 percentage points.

by Rachel Bluth and Emmarie Huetteman, Kaiser Health News via Daily Beast |  Read more:
Image: Shutterstock
[ed. Hedge funds. Again. They're an economic virus. See also: Kaiser healthcare workers plan for nation's largest strike since 1997 (Salon)]

Kabul Relieves Traffic Congestion By Creating Car Bomb Lane

KABUL, Afghanistan — Residents of Kabul are enjoying shorter commute times on the Kandahar–Kabul Highway thanks to the recent completion of a designated car bomb toll lane, sources report.

“For over 18 years motorists had to endure expressways choked with vehicle-borne improvised explosive devices (VBIEDs), resulting in driver frustration, spilled coffee, and premature detonations due to excessive delays,” said Minister of Transportation and Civil Aviation Muhammad Hamid Tahmasi.

“Now,” continued Tahmasi, “with the patent-pending FastBlast® app, drivers can prepay their tolls and rest assured that they will reach their destination on-time and on-target.”

In addition to helping jihadists deliver their payloads in record time, the $2 billion project funded by the US Army Corps of Engineers is a surprising new stream of revenue for both the Afghan government and local businesses in the postwar drawdown.

“We are definitely seeing a lot of new foreign investment in the fertilizer and ball bearing industries,” said Minister of Commerce and Industries Anwar ul-Haq Ahady. “Plus, we are providing generous electric car bomb incentives to help aspiring domestic terrorists ‘go green.'”

by Jack S. McQuack, DuffleBlog | Read more:
Image: MichelleWalz CC2.0 license

Overtourism

Saturday, September 14, 2019

Friday, September 13, 2019

The Zollar


The 100 trillion dollar bank note that is nearly worthless (CNN)
Image: uncredited
[ed. The things you learn every day! For example, I knew Zimbabwe suffers from hyperinflation, but didn't know that its currency - even in trillion-dollar denominations - was still nearly worthless. So they invented - the Zollar!]

The 100 Best


The 100 best films of the 21st century (The Guardian)
Image: uncredited
[ed. See also: The 100 best albums of the 21st century (The Guardian).]

Ken Burns’s ‘Country Music’ Traces the Genre’s Victories, and Reveals Its Blind Spots

Tell a lie long enough and it begins to smell like the truth. Tell it even longer and it becomes part of history.

Throughout “Country Music,” the new omnibus genre documentary from Ken Burns and Dayton Duncan, there are moments of tension between the stories Nashville likes to tell about itself — some true, some less so — and the way things actually were.

And while from a distance, this doggedly thorough eight-part, 16-hour series — which begins Sunday on PBS — hews to the genre’s party line, viewed up close it reveals the ruptures laid out in plain sight.

Anxiety about race has been a country music constant for decades, right up through this year’s Lil Nas X kerfuffle. In positioning country music as, essentially, the music of the white rural working class, Nashville streamlined — make that steamrollered — the genre’s roots, and the ways it has always been engaged in wide-ranging cultural dialogue.

But right at the beginning of “Country Music” is an acknowledgment that slave songs formed part of early country’s raw material. And then a reminder that the banjo has its roots in West African stringed gourd instruments. The series covers how A.P. Carter, a founder of the Carter Family, traveled with Lesley Riddle, a black man, to find and write down songs throughout Appalachia. And it explores how Hank Williams’s mentor was Rufus Payne, a black blues musician.

It goes on and on, tracing an inconvenient history for a genre that has generally been inhospitable to black performers, regardless of the successes of Charley Pride, Darius Rucker or DeFord Bailey, the first black performer on the Grand Ole Opry. Over and again, “Country Music” lays bare what is too often overlooked: that country music never evolved in isolation.

Each episode of this documentary tackles a different time period, from the first Fiddlin’ John Carson recordings in the 1920s up through the pop ascent of Garth Brooks in the 1990s. Burns has used this multi-episode approach on other American institutions and turning-point historical events: “The Civil War,” “The Vietnam War” and “Jazz.” These are subjects that merit rigor and also patience — hence the films’ length. But country music, especially, demands an approach that blends reverence and skepticism, because so often its story is one in which those in control try to squelch counternarratives while never breaking a warm smile.

“Country Music” rolls its eyes at the tension between the genre imagining itself as an unvarnished platform for America’s rural storytelling and being an extremely marketable racket where people from all parts of the country, from all class levels, do a bit of cosplay.

Minnie Pearl, from “Hee Haw,” came from a wealthy family and lived in a stately home next to the governor’s mansion. Nudie Cohn, the tailor whose vividly embroidered suits became country superstar must-haves in the 1960s and beyond, was born Nuta Kotlyarenko in Kiev, and worked out of a shop in Hollywood. The number of life insurance advertisements sprinkled throughout the photos in the early episodes serves as a reminder of just how contingent the spread of country music was on its sponsors. One salesman recalled determining which homes were tuning in to the Grand Ole Opry on the weekend, and then going to try to sell them insurance on Monday morning.

The only constant in this film is Nashville’s repeated efforts to fend off new ideas like a body rejecting an organ transplant. Merle Haggard, Willie Nelson, Charley Pride, Hank Williams Jr. — they’re all genre icons who first met resistance because of their desire to make music different from the norm of their day, then ended up establishing new norms.

by Jon Caramanica, NY Times | Read more:
Image: Les Leverett Collections

The Distinctly American Ethos of the Grifter


The Distinctly American Ethos of the Grifter (NY Times)
Image: Stan Douglas, “Two Friends, 1975”

Thursday, September 12, 2019


Unknown, Astronomical Photos, 1863
via:

What Happened to Urban Dictionary?

On January 24, 2017, a user by the name of d0ughb0y uploaded a definition to Urban Dictionary, the popular online lexicon that relies on crowdsourced definitions. Under Donald Trump—who, four days prior, was sworn in as the 45th president of the United States, prompting multiple Women's Marches a day later—he wrote: "The man who got more obese women out to walk on his first day in office than Michelle Obama did in eight years." Since being uploaded, it has received 25,716 upvotes and is considered the top definition for Donald Trump. It is followed by descriptions that include: "He doesn't like China because they actually have a great wall"; "A Cheeto… a legit Cheeto"; and "What all hispanics refer to as 'el diablo.'" In total, there are 582 definitions for Donald Trump—some are hilarious, others are so packed with bias you wonder if the president himself actually wrote them, yet none of them are completely accurate.

The site, now in its 20th year, is a digital repository that contains more than 8 million definitions and famously houses all manner of slang and cultural expressions. Founded by Aaron Peckham in 1999—then a computer science major at Cal Poly—Urban Dictionary became notorious for allowing what sanctioned linguistic gatekeepers, such as the Oxford English Dictionary and Merriam-Webster, would not: a plurality of voice. In interviews, Peckham has said the site began as a joke, as a way to mock Dictionary.com, but it eventually ballooned into a thriving corpus.

Today, it averages around 65 million visitors a month, according to data from SimilarWeb, with almost 100 percent of its traffic originating via organic search. You can find definitions for just about anything or anyone: from popular phrases like Hot Girl Summer ("a term used to define girls being unapologetically themselves, having fun, loving yourself, and doing YOU") and In my bag ("the act of being in your own world; focused; being in the zone; on your grind") to musicians like Pete Wentz ("an emo legend. his eyeliner could literally kill a man"); even my name, Jason, has an insane 337 definitions (my favorite one, which I can attest is 1,000 percent true: "the absolute greatest person alive").

In the beginning, Peckham's project was intended as a corrective. He wanted, in part, to help map the vastness of the human lexicon, in all its slippery, subjective glory (a message on the homepage of the site reads: "Urban Dictionary Is Written By You"). Back then, the most exciting, and sometimes most culture-defining, slang was being coined constantly, in real time. What was needed was an official archive for those evolving styles of communication. "A printed dictionary, which is updated rarely," Peckham said in 2014, "tells you what thoughts are OK to have, what words are OK to say." That sort of one-sided authority did not sit well with him. So he developed a version that ascribed to a less exclusionary tone: local and popular slang, or what linguist Gretchen McCulloch might refer to as "public, informal, unselfconscious language" now had a proper home.

In time, however, the site began to espouse the worst of the internet—Urban Dictionary became something much uglier than perhaps what Peckham set out to create. It transformed into a harbor for hate speech. By allowing anyone to post definitions (users can up or down vote their favorite ones) Peckham opened the door for the most insidious among us. Racism, homophobia, xenophobia, and sexism currently serve as the basis for some of the most popular definitions on the site. One of the site's definitions for sexism details it as "a way of life like welfare for black people. now stop bitching and get back to the kitchen." Under Lady Gaga, one top entry describes her as the embodiment of "a very bad joke played on all of us by Tim Burton." For LeBron James, it reads: "To bail out on your team when times get tough." (...)

Early on, the beauty of the site was its deep insistence on showing how slang is socialized based on a range of factors: community, school, work. How we casually convey meaning is a direct reflection of our geography, our networks, our worldviews. At its best, Urban Dictionary crystallized that proficiency. Slang is often understood as a less serious form of literacy, as deficient or lacking. Urban Dictionary said otherwise. It let the cultivators of the most forward-looking expressions of language speak for themselves. It believed in the splendor of slang that was deemed unceremonious and paltry.

In her new book, Because Internet: Understanding the New Rules of Language, McCulloch puts forward a question: "But what kind of net can you use to capture living language?" She tells the story of German dialectologist Georg Wenker, who mailed postal surveys to teachers and asked them to translate sentences. French linguist Jules Gilliéron later innovated on Wenker's method: He sent a trained worker into the field to oversee the surveys. This practice was known as dialect mapping. The hope was to identify the rich, varied characteristics of a given language: be it speech patterns, specific terminology, or the lifespan of shared vocabulary. For a time, field studies went on like this. Similar to Wikipedia and Genius, Urban Dictionary inverted that approach through crowdsourcing: the people came to it.

"In the early years of Urban Dictionary we tried to keep certain words out," Peckham once said. "But it was impossible—authors would re-upload definitions, or upload definitions with alternate spellings. Today, I don't think it's the right thing to try to remove offensive words." (Peckham didn't respond to emails seeking comment for this story.) One regular defense he lobbed at critics was that the site, and its cornucopia of definitions, was not meant to be taken at face value. Its goodness and its nastiness, instead, were a snapshot of a collective outlook. If anything, Peckham said, Urban Dictionary tapped into the pulse of our thinking.

But if the radiant array of terminology uploaded to the site was initially meant to function as a possibility of human speech, it is now mostly a repository of vile language. In its current form, Urban Dictionary is a cauldron of explanatory excess and raw prejudice. "The problem for Peckham's bottom line is that derogatory content—not the organic evolution of language in the internet era—may be the site's primary appeal," Clio Chang wrote in The New Republic in 2017, as the site was taking on its present identity.

by Jason Parham, Wired |  Read more:
Image: Elena Lacey/Getty