Wednesday, September 18, 2019

Losing The Narrative Battle Over Iran

I’m expected to write something about the Trump administration’s warmongering against Iran over an attack on a Saudi oil refinery, because that’s typically what I do in this ongoing improvisational exercise of mine: I write about the behavior of the US war machine and the propaganda that is used to bolster it. It’s what my readers have come to expect. But honestly I find the whole thing extremely tedious and I’ve been putting off writing about it for two days.

This is because from a propaganda analysis point of view, there’s really not much to write about. The Trump administration has been making bumbling, ham-fisted attempts at manufacturing public support for increasing aggressions against Iran since it initiated withdrawal from the JCPOA a year and a half ago, yet according to a Gallup poll last month Americans still overwhelmingly support diplomatic solutions with Tehran over any kind of military aggression at all. In contrast, most Americans supported a full-scale ground invasion of Iraq according to Gallup polls taken in the lead-up to that 2003 atrocity. With the far less committed Libya intervention, it was 47 percent supportive of US military action versus 37 percent opposed.

That’s the kind of support it takes to get a US war off the ground these days. And it’s going to take a lot more than a busted Saudi oil refinery to get there, even in the completely unproven event that it was indeed Iran which launched the attack.

The reason I’m able to spend so much time writing about war propaganda as part of my job is because war propaganda is happening constantly, and the reason war propaganda is happening constantly is because it’s absolutely necessary for the perpetuation of the US-centralized empire’s slow-motion third world war against unabsorbed governments. In other words, the propaganda apparatus of the empire works constantly to manufacture consent for military aggressions because it absolutely requires that consent.

When I say that the imperial war machine requires public consent before it can initiate overt warfare, I’m not saying that the US government is physically or legally incapable of launching a war that the public disapproves of, I’m saying that it is absolutely essential for the drivers of empire to preserve the illusion of freedom and democracy in America. People need to feel like their government is basically acting in everyone’s best interest, and that it is answerable to the will of the electorate, otherwise the illusion of freedom and democracy is shattered and people lose all trust in their government and media. If people no longer trust the political/media class, they can’t be propagandized. Without the ability to propagandize the masses, the empire collapses.

So out of sheer self-interest, establishment power structures necessarily avoid overt warfare until they have successfully manufactured consent for it. If they didn’t do this and chose instead to take off the nice guy mask, say “Screw you we’re doing what we want,” and start butchering Iranians at many times the cost of Iraq in both money and in American lives lost, people would immediately lose trust in their institutions and the narrative matrix which holds the whole thing together would crack open like an egg. From there revolution would become an inevitability as people are no longer being successfully propagandized by the establishment narrative managers into believing that the system is working fine for everyone.

Think about it: why else would the mass media be churning out propaganda about disobedient governments like Iran, Venezuela, Syria, Russia and China if they didn’t need to? They need the citizenry they’re charged with manipulating to consent to important geostrategic imperialist maneuvers, or they’ll break the hypnotic trance of relentless narrative control. And make no mistake, maintaining narrative control is the single highest priority of establishment power structures, because it’s absolutely foundational to those structures.

This is why the warmongers have been favoring economic warfare over conventional warfare; it’s much easier to manufacture support for civilian-slaughtering starvation sanctions. It’s slower, it’s sloppier, and it’s surely a lot less fun for the psychopaths in charge, but because the public will consent to economic sanctions far more readily than ground invasions or air strikes, it’s been the favored method in bringing disobedient governments to their knees. That’s how important manufacturing consent is.

So a bunch of drama around a Saudi oil refinery isn’t going to do the trick. The US government is not going to leap into an all-out war which would inevitably be many times worse than Iraq based on that, because they can’t manufacture consent for it right now. All they’re trying to do is escalate things a bit further with the goal of eventually getting to a point where Iran either caves to Washington’s demands or launches a deadly attack, at which point the US can play victim and the mass media can spend days tearfully running photos of the slain US troops. If that happens they might gain their consent from the public. If not, we may see them get a little more creative with their “crisis initiation”.

Until then this is a whole lot of noise and very little signal, which is why I find this current circus uninteresting to write about. It seems like every week now the Trump administration is trotting out some new narrative with the help of the mass media explaining why the Iranian government is evil and must be toppled, and nobody buys it because it’s on the other side of the damn planet and it’s always about something silly like oil or broken drones. Their unappealing pestering about this is starting to remind me of a really awkward loser who’s constantly asking out the prettiest girl in the office over and over again; you just want to pull him aside and say dude, stop. She’s just not into you.

by Caitlin Johnstone, Medium |  Read more:

Tuesday, September 17, 2019

Against Against Pseudoaddiction

“Pseudoaddiction” is one of the standard beats every article on the opioid crisis has to hit. Pharma companies (the story goes) invented a concept called “pseudoaddiction”, which looks exactly like addiction, except it means you just need to give the patient more drugs. Bizarrely gullible doctors went along with this and increased prescriptions for their addicted patients. For example, from a letter in the Wall Street Journal:
Parroting Big Pharma’s excuses about FDA oversight and black-box warnings only discounts how companies like Johnson & Johnson engaged in pervasive misinformation campaigns and even promoted a theory of “pseudoaddiction” to encourage doctors to prescribe even more opioids for patients who displayed signs of addiction.
Or from CBS:
But amid skyrocketing addiction rates and overdoses related to OxyContin, Panara claimed the company taught a sales tactic she now considers questionable, saying some patients might only appear to be addicted when in fact they’re just in pain. In training, she was taught a term for this: “pseudoaddiction.” 
“So the cure for ‘pseudoaddiction,’ you were trained, is more opioids?” Dokoupil asked. 
“A higher dose, yes,” Panara said. 
“Did this concept of pseudoaddiction come with studies backing it up?” 
“We had no studies. We actually — we did not have any studies. That’s the thing that was kind of disturbing, was that we didn’t have studies to present to the doctors,” Panara responded. 
“You know how that sounds?” Dokoupil asked. 
“I know. I was naïve,” Panara said. (...)
Let me confess: I think pseudoaddiction is real. In fact, I think it’s obviously real. I think everyone should realize it’s real as soon as it’s explained properly to them. I think we should be terrified that any of our institutions – media, academia, whatever – think they could possibly get away with claiming pseudoaddiction isn’t real. I think people should be taking to the streets trying to overthrow a medical system that has the slightest doubt about whether pseudoaddiction is real. If you can think of more hyperbolic statements about pseudoaddiction, I probably believe those too.

Neuroscientists define addiction in terms of complicated brain changes, but ordinary doctors just go off behavior. The average doctor treats “addiction” and “drug-seeking behavior” as synonymous. This paper lists signs of drug-seeking behavior that doctors should watch out for, like:

– Aggressively complaining about a need for a drug
– Requesting to have the dose increased
– Asking for specific drugs by name
– Taking a few extra, unauthorised doses on occasion
– Frequently calling the clinic
– Unwilling to consider other drugs or non-drug treatments
– Frequent unauthorised dose escalations after being told that it is inappropriate
– Consistently disruptive behaviour when arriving at the clinic

You might notice that all of these are things people might do if they actually need the drug. Consider this classic case study of pseudoaddiction from Weissman & Haddox, summarized by Greene & Chambers:
The 1989 introduction of pseudoaddiction happened in the form of a single case report of a 17-year-old man with acute leukemia, who was hospitalized with pneumonia and chest wall pain. The patient was initially given 5 mg of intravenous morphine every 4 to 6 h on an as-needed dosing schedule but received additional doses and analgesics over time. After a few days, the patient started engaging in behaviors that are frequently associated with opioid addiction, such as requesting medication prior to scheduled dosing, requesting specific opioids, and engaging in pain behaviors (e.g., moaning, crying, grimacing, and complaining about various aches and pains) to elicit drug delivery. The authors argued that this was not idiopathic opioid addiction but pseudoaddiction, which resulted from medical under-treatment (insufficient opioid dosing, utilization of opioids with inadequate potency, excessive dosing intervals) of the patient’s pain. In describing pseudoaddiction as an “iatrogenic” syndrome, Weissman and Haddox inverted the traditional usage of iatrogenic as harm caused by a medical intervention. In pseudoaddiction, iatrogenic harm was described as being caused by withholding treatment (opioids), not by providing it.
Greene & Chambers present this as some kind of exotic novel hypothesis, but think about this for a second like a normal human being. You have a kid with a very painful form of cancer. His doctor guesses at what the right dose of painkillers should be. After getting this dose of painkillers, the kid continues to “engage in pain behaviors ie moaning, crying, grimacing, and complaining about various aches and pains”, and begs for a higher dose of painkillers.

I maintain that the normal human thought process is “Since this kid is screaming in pain, looks like I guessed wrong about the right amount of painkillers for him, I should give him more.”

The official medical-system approved thought process, which Greene & Chambers are defending in this paper, is “Since he is displaying signs of drug-seeking behavior, he must be an addict trying to con you into giving him his next fix.” They never come out and say this. But they define pseudoaddiction as meaning not that, and end up saying “in conclusion, we find no empirical evidence yet exists to justify a clinical ‘diagnosis’ of pseudoaddiction.” More on this later.

The concept of “pseudoaddiction” was invented as a corrective to an all-too-common tendency for doctors to assume that anyone who seems too interested in getting more medications is necessarily an addict. It was invented not by pharma companies, but by doctors working with patients in pain, building upon a hundred-year-long history of other doctors and medical educators trying to explain the same point.

And in case you think this is a weird ivory tower debate that doesn’t influence real clinical practice, I offer you these cases from my own experience. Stories slightly changed or merged together to protect patient privacy:

Case 1: Mary is an elderly woman who undergoes a surgery known to have a painful recovery process. The surgeon prescribes a dose of painkillers once every six hours. The painkillers last four hours. From hours 4-6, Mary is in terrible pain. During one of these periods, she says that she wishes she was dead. The surgeon leaps into action by…calling the on-call psychiatrist and saying “Hey, there’s a suicidal person on my ward, you should do psychiatry to her or something.” I am the on call psychiatrist. After a brief evaluation, I tell the surgeon that Mary has no psychiatric illness but needs painkillers every four hours. The surgeon lectures me on how There Is An Opioid Crisis, Y’Know, and we can’t negotiate with addicts and drug-seekers. I am a consultant on the case and can’t overrule the surgeon on his own ward, so I just hang out with Mary for a while and talk about things and distract her and listen to her scream during the worst part of the six-hour cycle. After a few days the surgery has healed to the point where Mary is only in excruciating pain rather than actively suicidal, and so we send her home.

Case 2: Juan is a middle-aged man with depression who is using Geodon for antidepressant augmentation. This is kind of a weird choice, and has theoretical potential to interact poorly with some of his other medications, but nothing else has worked for him and he’s done great for ten years. He switches psychiatrists. The new psychiatrist is really worried about the theoretical interaction, so he tells him that he can’t take Geodon anymore and switches him to something else. Juan falls into a deep depression. He asks to have Geodon back and the doctor says no. Juan yells at the psychiatrist and says he is ruining his life. The psychiatrist diagnoses him with a personality disorder and anger management problems, and tells him to attend therapy. Juan actually does this for a while, but eventually wises up and switches doctors to me. I put him back on Geodon and within a month he’s doing great again. Note that Juan displayed every sign of “drug-seeking behavior” even though Geodon is not addictive.

Case 3: This one courtesy of Zvi. Zvi’s friend is diabetic. He runs out of insulin and asks his doctor for more. The doctor wants to wait until his next free appointment in a few weeks before prescribing the insulin. Zvi’s friend points out that he will die unless he gets more insulin now. The doctor gets very angry about this and spends a long phone call haranguing Zvi’s friend about how inconvenient it is that he’s demanding the insulin now rather than at a more convenient time. Zvi’s friend has to threaten the doctor with a lawsuit before the doctor finally relents and gives him the insulin. I like this story because, again, insulin is not addictive, there is no way that the patient could possibly be doing anything wrong, but the patient still gets treated as a drug-seeker. The very act of wanting medication according to the logic of his own disease, rather than at the doctor’s convenience, is enough to make his request suspicious.

Case 4: John is a 70 year old man on opioids for 30 years due to a mining-related injury. He is doing very well. I am his outpatient psychiatrist but I only see him once every few months to renew meds. He gets some kind of infection, goes to the hospital, and due to normal hospital incompetence he doesn’t get his opioids. He demands his meds, and like many 70 year old ex-miners in terrible pain, he is not diligently polite the whole time. The hospital doctors are excited: they have caught an opioid addict! They tell his family and outpatient doctors he cannot have opioids from now on, then discharge him. He continues to be in terrible pain. At first he sneaks pills from an extra bottle of opioids he has at home, but eventually he uses all those up. After this, he is still in terrible pain with no reason to expect this to ever change, and so he quite reasonably shoots himself in the chest. This is the first point in this entire process at which anyone attempts to tell me any of this is going on, so I get a “HEY DID YOU KNOW YOUR PATIENT SHOT HIMSELF? DOESN’T SEEM LIKE YOU’RE DOING VERY GOOD PSYCHIATRIST-ING?” call. The patient miraculously survives, eventually finds a new pain doctor, and goes on to live a normal and happy life on the same dose of opioids he was using before.

Let’s look at those warning signs of addiction again:

– Aggressively complaining about a need for a drug
– Requesting to have the dose increased
– Asking for specific drugs by name
– Taking a few extra, unauthorised doses on occasion
– Frequently calling the clinic
– Unwilling to consider other drugs or non-drug treatments
– Frequent unauthorised dose escalations after being told that it is inappropriate
– Consistently disruptive behaviour when arriving at the clinic


In Case 1, Mary requested her dose of painkiller be increased (from once per six hours to once per four hours). In Case 2, Juan asked for a specific drug by name (Geodon), and was unwilling to consider other drugs. In Case 3, Zvi’s friend frequently called the clinic (to get them to refill his insulin). In Case 4, John showed consistently disruptive behavior in the hospital and took extra unauthorized doses. Etc.

All of these are drug-seeking behaviors. But I maintain that none of these patients were addicted. The correct action in all of these cases is to listen to the patient’s reasons for wanting the drug, realize that you (the doctor) screwed up, and give them the drug that they are asking for. Although the point that these behaviors can be signs of addiction is well-taken and important, it’s equally important to remember they can be signs of other things too.

Media portrayals of pseudoaddiction portray it as this bizarre contortion of logic: “A patient is displaying signs of addiction, so you should give them more of the drug! Haha, nice try, pharma companies!” But this is exactly what you should do! The real problem lies with anyone who conceptualizes pseudoaddiction as a novel hypothesis that requires proof, rather than as the obvious possibility you have to check for before accusing patients of addiction. (...)

As far as I can tell, the concept started off well-intentioned. But painkiller companies realized that the debate over when to diagnose addiction vs. pseudoaddiction was relevant to their bottom line, and started funding the pseudoaddiction side of it.

I’m not sure how substantial an effort this was. G&C note that of 224 papers mentioning pseudoaddiction, 22 were sponsored by pharma (but that means 202 weren’t). Of a stricter category of 12 papers that focused on arguing for the concept, 4 were sponsored by pharma (but 8 were not). Taking their numbers at face value, the majority of discussion of pseudoaddiction had no pharma company sponsorship. But the image of an expert getting up in front of a medical conference and telling doctors that the solution to opioid addiction was more opioids – something that certainly did happen, I’m not sure how often – was so lurid that it burned itself into the popular consciousness. The media exaggerated this from “basically good idea gets misused” to “doctors invent vicious lies to addict your loved ones” to get more clicks. Experts didn’t want to be the guy saying “well actually” in the middle of an Opioid Crisis, so they kept their mouths shut. Reporters copied each others’ denunciations of ‘pseudoaddiction’ without checking what the term really meant.

Into all this came the drug warriors. It’s hard for me to be angry at addictionologists, because they have a terrible job and are probably traumatized by it. But they really hate drugs and will say whatever it takes to make you hate drugs too. These are the people who gave us articles on how one hit of marijuana will get you addicted forever and definitely kill you, how one hit of LSD will make you go crazy and get addicted and probably kill you, how there can never be any legitimate medical reason for using cannabis, how e-cigarettes are deadly poison, and other similar classics. Sensing that they had the high ground, they wrote a couple of papers about how pseudoaddiction isn’t “empirically proven”, as if this were a meaningful claim. This gave the media the ammunition they needed to declare that pseudoaddiction was always pseudoscience and has now been debunked and well-refuted.

This is just my story, and it’s kind of bulverist. But if you think it’s plausible, I recommend the following lessons:

First, when the media decides to craft a narrative, and the government decides to hold a moral panic, arguments get treated as soldiers. Anything that might sound like it supports the “wrong” side will be mercilessly debunked, no matter how true it is. Anything that supports the “right” side will be celebrated and accepted as obvious, no matter how bad its arguments. Good scientists feel afraid to speak up and question the story, lest they be seen as “soft on the Opioid Crisis” or “stooges of Big Pharma”. This happens again and again on any issue people care about, and I want to reiterate for the nth time that you should treat reporting on medical, scientific, and social scientific topics as having almost zero credibility.

Second, you should stay cautious about bias arguments. Yes, some people pushed pseudoaddiction because they were shills of the opioid companies. But other people pushed pseudoaddiction because it was true. Just because you can generate the hypothesis “maybe people are just shills of the opioid companies” doesn’t mean you’ve disproven pseudoaddiction. And if you focus too hard on the opioid companies’ obvious financial bias, then you’ll miss less obvious but possibly more important biases like those of the drug warriors. Your best bet would have been to just stop worrying about biases and try to figure out what was actually true.

by Scott Alexander, Slate Star Codex |  Read more:
[ed. For an excellent up-to-the-minute example of the opioid hysteria (and political posturing) making people's lives miserable, see also: US attack on WHO 'hindering morphine drive in poor countries' (The Guardian).]

Moose Run Creek Course, AK
Image: markk

How Medicare for All Looks From Britain

Babies are often expensive for creatures that are so small: they need new clothes, bedding, toys once they’re a little more agile, and the time you spend caring for them isn’t spent working, so your bank balance is run down for every day you aren’t in the office.

By posting a photograph of the bill she received after the birth of her second child, Washington Post columnist Elizabeth Bruenig also underscored that in the United States, the care you and your child receive during the delivery also costs money; even though Bruenig is insured, the hospital billed her nearly $8,000. The responses were mixed: many people understandably found the idea of billing people for bringing new life into being abhorrent, but others were defensive — child-rearing was a choice, their argument went, and having children was bound to cost money, so no one should complain that some companies were profiting from the creation of future generations.

Travelling to the United States several years ago, I spent more than twice as much time searching for insurance as I did booking flights, accommodation, and planning a sightseeing itinerary combined. My friends sorted their insurance quickly and cheaply but finding a company that would insure me for less than the cost of the return flight was a challenge. Since insurance is essentially gambling with risk, the vast majority of companies were unwilling to take a chance on a traveller with a rare genetic condition that causes multiple tumors to grow in my spinal cord and severe, poorly controlled epilepsy. Finally, I found a reasonable quote, but spent a huge amount of time in fear of seizing in public and being rushed to hospital, racking up an enormous bill.

Mercifully, I was seizure free for a week in New York, but came down with a brutal chest infection, coughing like a medieval peasant with tuberculosis, and raided CVS for anything that might help so as to avoid having to seek medical help. The cough left me unable to sleep for longer than a couple of hours a night and made enemies of the people around me on the flight back. The pain, fever, and shortness of breath made the tail end of my holiday miserable, but the fear of an expensive medical bill affected me far more. On returning to London, I secured an appointment with the National Health Service (NHS) quickly, was diagnosed with pneumonia, and sent home with a free prescription. I didn’t pay a penny for anything.

After a recent seizure left me unconscious for several minutes, I was kept in hospital for a little over a week. I had my own room in a facility directly over the river from the Houses of Parliament. Doctors performed multiple tests, including full body MRIs, CT scans, tests that tracked the electrical activity of my heart and brain, and staff gave me three meals a day, many cups of coffee, and medication at regular intervals. Free Wi-Fi throughout the hospital meant that when I felt able to, I could work on my laptop and explore the hospital grounds with friends. As we sat in the well-manicured gardens outside one of the hospital cafes, an American friend marveled that the place was “like a mini-city.”

It’s often, correctly, observed that to people in the United Kingdom the National Health Service is akin to a religion. Since its creation after World War II, the mere suggestion, by any party, of a shift away from free health care provokes horror in the electorate. To British people, the US model of health care appears like a hellscape: the easiest way to go viral in the United Kingdom is to post a scan of a US hospital bill and be met with horror by British people from across the political spectrum. Bruenig’s delivery may be at the lower end of that scale, but the outlook of those Americans tweeting about how health care shouldn’t be free is as alien to UK Conservatives as it is to the Left.

The NHS has changed the psychology of an entire nation, across multiple generations: we know that no matter what happens we will receive care and pay little, if anything, for it. Nowadays there are some costs: prescriptions are charged at a flat rate of £9 per item, but a large number of people are exempt — children, pregnant people, pensioners, low earners, and people like me, with certain health conditions that require daily medication, such as epilepsy, diabetes, thyroid issues, and cancer. Shortly after the introduction of the NHS, the government opted to impose charges for spectacles, wigs, and dentures, a highly controversial move that provoked the resignation of health minister Aneurin Bevan, the father of the NHS system. It was an unpopular choice but one that has stuck. (Again, there are certain exemptions similar to those listed above, and my eyesight is so poor that I am given free eye tests and money off the cost of my lenses.)

Even those in Britain who pay to access private health care, either through work or through their personal wealth, use the NHS: their doctors are trained in the NHS, and specialist care is often available only through the NHS for rare and complex diseases. If you need to visit the emergency room, you’ll be taken to an NHS hospital. People might complain about certain aspects of the NHS, such as waiting times, or personal treatment when they disagree with a doctor, but these are minor gripes, and few would claim that charging people would improve matters. Most problems with the NHS are caused by underfunding at the hands of governments that will happily finance wars while cutting funding for nurses. The United Kingdom spends only about $4,000 per capita on health — the lowest of any G7 country save Italy — compared to more than $10,000 in the United States.

by Dawn Foster, Jacobin |  Read more:
Image: Christopher Furlong / Getty Images
[ed. See also: Does Anyone Really ‘Love’ Their Private Health Insurance? (NY Times).]

The Smug Style in American Liberalism

There is a smug style in American liberalism. It has been growing these past decades. It is a way of conducting politics, predicated on the belief that American life is not divided by moral difference or policy divergence — not really — but by the failure of half the country to know what's good for them.

In 2016, the smug style has found expression in media and in policy, in the attitudes of liberals both visible and private, providing a foundational set of assumptions above which a great number of liberals comport their understanding of the world.

It has led an American ideology hitherto responsible for a great share of the good accomplished over the past century of our political life to a posture of reaction and disrespect: a condescending, defensive sneer toward any person or movement outside of its consensus, dressed up as a monopoly on reason.

The smug style is a psychological reaction to a profound shift in American political demography.

Beginning in the middle of the 20th century, the working class, once the core of the coalition, began abandoning the Democratic Party. In 1948, in the immediate wake of Franklin Roosevelt, 66 percent of manual laborers voted for Democrats, along with 60 percent of farmers. In 1964, it was 55 percent of working-class voters. By 1980, it was 35 percent.

The white working class in particular saw even sharper declines. Despite historic advantages with both poor and middle-class white voters, by 2012 Democrats possessed only a 2-point advantage among poor white voters. Among white voters making between $30,000 and $75,000 per year, the GOP has taken a 17-point lead.

The consequence was a shift in liberalism's intellectual center of gravity. A movement once fleshed out in union halls and little magazines shifted into universities and major press, from the center of the country to its cities and elite enclaves. Minority voters remained, but bereft of the material and social capital required to dominate elite decision-making, they were largely excluded from an agenda driven by the new Democratic core: the educated, the coastal, and the professional.

It is not that these forces captured the party so much as it fell to them. When the laborer left, they remained.

The origins of this shift are overdetermined. Richard Nixon bears a large part of the blame, but so does Bill Clinton. The Southern Strategy, yes, but the destruction of labor unions, too. I have my own sympathies, but I do not propose to adjudicate that question here.

Suffice it to say, by the 1990s the better part of the working class wanted nothing to do with the word liberal. What remained of the American progressive elite was left to puzzle: What happened to our coalition?

Why did they abandon us?

What's the matter with Kansas?

The smug style arose to answer these questions. It provided an answer so simple and so emotionally satisfying that its success was perhaps inevitable: the theory that conservatism, and particularly the kind embraced by those out there in the country, was not a political ideology at all.

The trouble is that stupid hicks don't know what's good for them. They're getting conned by right-wingers and tent revivalists until they believe all the lies that've made them so wrong. They don't know any better. That's why they're voting against their own self-interest.

As anybody who has gone through a particularly nasty breakup knows, disdain cultivated in the aftermath of a divide quickly exceeds the original grievance. You lose somebody. You blame them. Soon, the blame is reason enough to keep them at a distance, the excuse to drive them even further away.

Finding comfort in the notion that their former allies were disdainful, hapless rubes, smug liberals created a culture animated by that contempt. The result is a self-fulfilling prophecy.

Financial incentive compounded this tendency — there is money, after all, in reassuring the bitter. Over 20 years, an industry arose to cater to the smug style. It began in humor, and culminated for a time in The Daily Show, a program that more than any other thing advanced the idea that liberal orthodoxy was a kind of educated savvy and that its opponents were, before anything else, stupid. The smug liberal found relief in ridiculing them.

The internet only made it worse. (...)

Of course, there is a smug style in every political movement: elitism among every ideology believing itself in possession of the solutions to society's ills. But few movements have let the smug tendency so corrupt them, or make so tenuous its case against its enemies.

"Conservatives are always at a bit of a disadvantage in the theater of mass democracy," the conservative editorialist Kevin Williamson wrote in National Review last October, "because people en masse aren't very bright or sophisticated, and they're vulnerable to cheap, hysterical emotional appeals."

The smug style thinks Williamson is wrong, of course, but not in principle. It's only that he's confused about who the hordes of stupid, hysterical people are voting for. The smug style reads Williamson and says, "No! You!"

Elites, real elites, might recognize one another by their superior knowledge. The smug recognize one another by their mutual knowing.

Knowing, for example, that the Founding Fathers were all secular deists. Knowing that you're actually, like, 30 times more likely to shoot yourself than an intruder. Knowing that those fools out in Kansas are voting against their own self-interest and that the trouble is Kansas doesn't know any better. Knowing all the jokes that signal this knowledge.

The studies, about Daily Show viewers and better-sized amygdalae, are knowing. It is the smug style's first premise: a politics defined by a command of the Correct Facts and signaled by an allegiance to the Correct Culture. A politics that is just the politics of smart people in command of Good Facts. A politics that insists it has no ideology at all, only facts. No moral convictions, only charts, the kind that keep them from "imposing their morals" like the bad guys do.

Knowing is the shibboleth into the smug style’s culture, a culture that celebrates hip commitments and valorizes hip taste, that loves nothing more than hate-reading anyone who doesn’t get them. A culture that has come to replace politics itself.

The knowing know that police reform, that abortion rights, that labor unions are important, but go no further: What is important, after all, is to signal that you know these things. What is important is to launch links and mockery at those who don't. The Good Facts are enough: Anybody who fails to capitulate to them is part of the Problem, is terminally uncool. No persuasion, only retweets. Eye roll, crying emoji, forward to John Oliver for sick burns.

by Emmett Rensin, Vox | Read more:
Image: Brittany Holloway-Brown

Monday, September 16, 2019

How Long Before The Salmon Are Gone?


How Long Before These Salmon Are Gone? ‘Maybe 20 Years’ (NY Times)
Image: Leon Werdinger, via Alamy

The Deep-Pocket Push to Preserve Surprise Medical Billing

As proposals to ban surprise medical bills move through Congress and state legislatures with rare bipartisan support, physician groups have emerged as the loudest opponents.

Often led by doctors with the veneer of noble concern for patients, physician-staffing firms—third-party companies that employ doctors and assign them out to health care facilities—have opposed efforts to limit the practice known as balance billing. They claim such bans would rob doctors of their leverage in negotiating, drive down their payments and push them out of insurance networks.

Opponents have been waging well-financed campaigns. Slick TV ads and congressional lobbyists seek to stop legislation that has widespread support from voters. Nearly 40% of patients said they were “very worried” about surprise medical bills, which generally arise when an insured individual inadvertently receives care from an out-of-network provider.

But as lobbyists purporting to represent doctors and hospitals fight the proposals, it has become increasingly clear that the force behind the multimillion-dollar crusade is not only medical professionals, but also investors in private equity and venture capital firms.

In the past eight years, in such fields as emergency medicine and anesthesia, investors have bought and now operate many large physician-staffing companies. And key to their highly profitable business strategy is to not participate in insurance networks, allowing them to send surprise bills and charge patients a price they set—with few limitations.

“We’ve started to realize it’s not us versus the hospitals or the doctors, it’s us versus the hedge funds,” said James Gelfand, senior vice president of health policy at ERIC, a group that represents large employers. (...)

To understand the power and size of private equity in the U.S. health care system, one must first understand physician-staffing firms.

Increasingly, hospitals have turned to third-party companies to fill their facilities with doctors. Among driving factors: physician shortages, a bigger insured population because of the Affordable Care Act and an aging population, according to research from the investment firm Harris Williams & Co.

In some areas, doctors have few options but to contract with a staffing service, which hires them out and helps with the billing and other administrative headaches that occupy much of a doctor’s time. Staffing companies often have profit-sharing agreements with hospitals, so some of the money from billing patients is passed back to the hospitals.

The two largest staffing firms, EmCare and TeamHealth, together make up about 30% of the physician-staffing market.

That’s where private equity comes in. A private equity firm buys companies and passes on the profits it squeezes out of them to the firm’s investors. Private equity deals in health care have doubled in the past 10 years. TeamHealth is owned by Blackstone, a private equity firm. Envision and EmCare are owned by KKR, another private equity firm.

With affiliates in every state, these privately owned, profit-driven companies staff emergency rooms, own dialysis facilities and operate physician practices. Research from 2017 shows that when EmCare entered a market, out-of-network billing rates went up between 81 and 90 percentage points. When TeamHealth began working with a hospital, its rates increased by 33 percentage points.

by Rachel Bluth and Emmarie Huetteman, Kaiser Health News via Daily Beast |  Read more:
Image: Shutterstock
[ed. Hedge funds. Again. They're an economic virus. See also: Kaiser healthcare workers plan for nation's largest strike since 1997 (Salon)]

Kabul Relieves Traffic Congestion By Creating Car Bomb Lane

KABUL, Afghanistan — Residents of Kabul are enjoying shorter commute times on the Kandahar–Kabul Highway thanks to the recent completion of a designated car bomb toll lane, sources report.

“For over 18 years motorists had to endure expressways choked with vehicle-borne improvised explosive devices (VBIEDs), resulting in driver frustration, spilled coffee, and premature detonations due to excessive delays,” said Minister of Transportation and Civil Aviation Muhammad Hamid Tahmasi.

“Now,” continued Tahmasi, “with the patent-pending FastBlast® app, drivers can prepay their tolls and rest assured that they will reach their destination on-time and on-target.”

In addition to helping jihadists deliver their payloads in record time, the $2 billion project funded by the US Army Corps of Engineers is a surprising new stream of revenue for both the Afghan government and local businesses in the postwar draw down.

“We are definitely seeing a lot of new foreign investment in the fertilizer and ball bearing industries,” said Minister of Commerce and Industries Anwar ul-Haq Ahady. “Plus, we are providing generous electric car bomb incentives to help aspiring domestic terrorists ‘go green.'”

by Jack S. McQuack, DuffleBlog | Read more:
Image: MichelleWalz CC2.0 license

Overtourism

Saturday, September 14, 2019

Friday, September 13, 2019

The Zollar


The 100 trillion dollar bank note that is nearly worthless (CNN)
Image: uncredited
[ed. The things you learn every day! For example, I knew Zimbabwe suffers from hyper-inflation, but didn't know that its currency - even in trillion dollar denominations - was still insufficient. So they invented - the Zollar!]

The 100 Best


The 100 best films of the 21st century (The Guardian)
Image: uncredited
[ed. See also: The 100 best albums of the 21st century (The Guardian).]

Ken Burns’s ‘Country Music’ Traces the Genre’s Victories, and Reveals Its Blind Spots

Tell a lie long enough and it begins to smell like the truth. Tell it even longer and it becomes part of history.

Throughout “Country Music,” the new omnibus genre documentary from Ken Burns and Dayton Duncan, there are moments of tension between the stories Nashville likes to tell about itself — some true, some less so — and the way things actually were.

And while from a distance, this doggedly thorough eight-part, 16-hour series — which begins Sunday on PBS — hews to the genre’s party line, viewed up close it reveals the ruptures laid out in plain sight.

Anxiety about race has been a country music constant for decades, right up through this year’s Lil Nas X kerfuffle. In positioning country music as, essentially, the music of the white rural working class, Nashville streamlined — make that steamrollered — the genre’s roots, and the ways it has always been engaged in wide-ranging cultural dialogue.

But right at the beginning of “Country Music” is an acknowledgment that slave songs formed part of early country’s raw material. And then a reminder that the banjo has its roots in West African stringed gourd instruments. The series covers how A.P. Carter, a founder of the Carter Family, traveled with Lesley Riddle, a black man, to find and write down songs throughout Appalachia. And it explores how Hank Williams’s mentor was Rufus Payne, a black blues musician.

It goes on and on, tracing an inconvenient history for a genre that has generally been inhospitable to black performers, regardless of the successes of Charley Pride, Darius Rucker or DeFord Bailey, the first black performer on the Grand Ole Opry. Over and again, “Country Music” lays bare what is too often overlooked: that country music never evolved in isolation.

Each episode of this documentary tackles a different time period, from the first Fiddlin’ John Carson recordings in the 1920s up through the pop ascent of Garth Brooks in the 1990s. Burns has used this multi-episode approach on other American institutions and turning-point historical events: “The Civil War,” “The Vietnam War” and “Jazz.” These are subjects that merit rigor and also patience — hence the films’ length. But country music, especially, demands an approach that blends reverence and skepticism, because so often its story is one in which those in control try to squelch counternarratives while never breaking a warm smile.

“Country Music” rolls its eyes at the tension between the genre imagining itself as an unvarnished platform for America’s rural storytelling and being an extremely marketable racket where people from all parts of the country, from all class levels, do a bit of cosplay.

Minnie Pearl, from “Hee Haw,” came from a wealthy family and lived in a stately home next to the governor’s mansion. Nudie Cohn, the tailor whose vividly embroidered suits became country superstar must-haves in the 1960s and beyond, was born Nuta Kotlyarenko in Kiev, and worked out of a shop in Hollywood. The number of life insurance advertisements sprinkled throughout the photos in the early episodes serves as a reminder of just how contingent the spread of country music was on its sponsors. One salesman recalled determining which homes were tuning in to the Grand Ole Opry on the weekend, and then going to try to sell them insurance on Monday morning.

The only constant in this film is Nashville’s repeated efforts to fend off new ideas like a body rejecting an organ transplant. Merle Haggard, Willie Nelson, Charley Pride, Hank Williams Jr. — they’re all genre icons who first met resistance because of their desire to make music different from the norm of their day, then ended up establishing new norms.

by Jon Caramanica, NY Times | Read more:
Image: Les Leverett Collections

The Distinctly American Ethos of the Grifter


The Distinctly American Ethos of the Grifter (NY Times)
Image: Stan Douglas, “Two Friends, 1975”

Thursday, September 12, 2019


Unknown, Astronomical Photos, 1863
via:

What Happened to Urban Dictionary?

On January 24, 2017, a user by the name of d0ughb0y uploaded a definition to Urban Dictionary, the popular online lexicon that relies on crowdsourced definitions. Under Donald Trump—who, four days prior, was sworn in as the 45th president of the United States, prompting multiple Women's Marches a day later—he wrote: "The man who got more obese women out to walk on his first day in office than Michelle Obama did in eight years." Since being uploaded, it has received 25,716 upvotes and is considered the top definition for Donald Trump. It is followed by descriptions that include: "He doesn't like China because they actually have a great wall"; "A Cheeto… a legit Cheeto"; and "What all hispanics refer to as 'el diablo.'" In total, there are 582 definitions for Donald Trump—some are hilarious, others are so packed with bias you wonder if the president himself actually wrote them, yet none of them are completely accurate.

The site, now in its 20th year, is a digital repository that contains more than 8 million definitions and famously houses all manner of slang and cultural expressions. Founded by Aaron Peckham in 1999—then a computer science major at Cal Poly—Urban Dictionary became notorious for allowing what sanctioned linguistic gatekeepers, such as the Oxford English Dictionary and Merriam-Webster, would not: a plurality of voice. In interviews, Peckham has said the site began as a joke, as a way to mock Dictionary.com, but it eventually ballooned into a thriving corpus.

Today, it averages around 65 million visitors a month, according to data from SimilarWeb, with almost 100 percent of its traffic originating via organic search. You can find definitions for just about anything or anyone: from popular phrases like Hot Girl Summer ("a term used to define girls being unapologetically themselves, having fun, loving yourself, and doing YOU") and In my bag ("the act of being in your own world; focused; being in the zone; on your grind") to musicians like Pete Wentz ("an emo legend. his eyeliner could literally kill a man"); even my name, Jason, has an insane 337 definitions (my favorite one, which I can attest is 1,000 percent true: "the absolute greatest person alive").

In the beginning, Peckham's project was intended as a corrective. He wanted, in part, to help map the vastness of the human lexicon, in all its slippery, subjective glory (a message on the homepage of the site reads: "Urban Dictionary Is Written By You"). Back then, the most exciting, and sometimes most culture-defining, slang was being coined constantly, in real time. What was needed was an official archive for those evolving styles of communication. "A printed dictionary, which is updated rarely," Peckham said in 2014, "tells you what thoughts are OK to have, what words are OK to say." That sort of one-sided authority did not sit well with him. So he developed a version that ascribed to a less exclusionary tone: local and popular slang, or what linguist Gretchen McCulloch might refer to as "public, informal, unselfconscious language" now had a proper home.

In time, however, the site began to espouse the worst of the internet—Urban Dictionary became something much uglier than perhaps what Peckham set out to create. It transformed into a harbor for hate speech. By allowing anyone to post definitions (users can up or down vote their favorite ones) Peckham opened the door for the most insidious among us. Racism, homophobia, xenophobia, and sexism currently serve as the basis for some of the most popular definitions on the site. One of the site's definitions for sexism details it as "a way of life like welfare for black people. now stop bitching and get back to the kitchen." Under Lady Gaga, one top entry describes her as the embodiment of "a very bad joke played on all of us by Tim Burton." For LeBron James, it reads: "To bail out on your team when times get tough." (...)

Early on, the beauty of the site was its deep insistence on showing how slang is socialized based on a range of factors: community, school, work. How we casually convey meaning is a direct reflection of our geography, our networks, our worldviews. At its best, Urban Dictionary crystallized that proficiency. Slang is often understood as a less serious form of literacy, as deficient or lacking. Urban Dictionary said otherwise. It let the cultivators of the most forward-looking expressions of language speak for themselves. It believed in the splendor of slang that was deemed unceremonious and paltry.

In her new book, Because Internet: Understanding the New Rules of Language, McCulloch puts forward a question: "But what kind of net can you use to capture living language?" She tells the story of German dialectologist Georg Wenker, who mailed postal surveys to teachers and asked them to translate sentences. French linguist Jules Gilliéron later innovated on Wenker's method: He sent a trained worker into the field to oversee the surveys. This practice was known as dialect mapping. The hope was to identify the rich, varied characteristics of a given language: be it speech patterns, specific terminology, or the lifespan of shared vocabulary. For a time, field studies went on like this. Similar to Wikipedia and Genius, Urban Dictionary inverted that approach through crowdsourcing: the people came to it.

"In the early years of Urban Dictionary we tried to keep certain words out," Peckham once said. "But it was impossible—authors would re-upload definitions, or upload definitions with alternate spellings. Today, I don't think it's the right thing to try to remove offensive words." (Peckham didn't respond to emails seeking comment for this story.) One regular defense he lobbed at critics was that the site, and its cornucopia of definitions, was not meant to be taken at face value. Its goodness and its nastiness, instead, were a snapshot of a collective outlook. If anything, Peckham said, Urban Dictionary tapped into the pulse of our thinking.

But if the radiant array of terminology uploaded to the site was initially meant to function as a possibility of human speech, it is now mostly a repository of vile language. In its current form, Urban Dictionary is a cauldron of explanatory excess and raw prejudice. "The problem for Peckham's bottom line is that derogatory content—not the organic evolution of language in the internet era—may be the site's primary appeal," Clio Chang wrote in The New Republic in 2017, as the site was taking on its present identity.

by Jason Parham, Wired |  Read more:
Image: Elena Lacey/Getty

Homeless

The word is that John Bolton is not going quietly after President Trump’s ostentatious slam-dunking of him on Twitter. Maybe he won’t. But there’s a part of this equation I doubt we’ll see discussed much in the press coverage of this story. Bolton isn’t really a foreign policy guy and hasn’t been for more than a decade. Yes, he still discusses foreign policy and for the last year or so he had what is basically the top foreign policy job in the U.S. government. But since the end of the Bush years Bolton has really been a public politics guy and a consummate player in the GOP buck-raking industrial complex.

Bolton had a $500,000 a year gig with Fox News. But he also had a slew of PACs and fundraising entities dedicated to sounding the alarm about bad acting regimes and sending money into John Bolton’s pocket. He became one of the GOP’s many professional yakkers and scaremongers who make big dollars raising money off the folks who watch Fox News.

Just for kicks, here are some of the fundraising emails I pulled up in my inbox, each with links to give money by this or that deadline.


Just one sample of the sort of stuff you’d find in those emails:


But here’s the thing. Donald Trump owns the Republican party. Just ask Justin Amash and Mark Sanford and Bob Corker and a number of others. Trump is the first, second and third rail of Republican politics. You can’t be anti-Trump and be anywhere in the GOP/Fox News funding system, let alone in elected office. If you want to stay in, you have to do what Sen. Ben Sasse did and give Trump full custody of your dignity with maybe the hope of occasional visitation rights.

I have no doubt that Bolton wants to roast Trump alive. Partly it’s just payback for canning and humiliating him. But Bolton must also be horrified by what Trump appears to want to do in Afghanistan, Iran, North Korea and various other places. But if Bolton goes full Trump critic, it’s very hard to see how he’s ever going to make the massive paydays he was before Trump picked him.

Not that that’s the end of the world. I’m sure he’s a wealthy man and he could find other ways to make money. But that’s a Fox News world. And if he goes anti-Trump, that world will be closed to John Bolton. And that big money is going to be really hard to forego.

by Josh Marshall, TPM |  Read more:
Images: TPM
[ed. See also: Trump Finally Fired John Bolton, but Does It Really Matter? (New Yorker).]

Wednesday, September 11, 2019

Seeing What the Fighting Is All About on Alaska’s Coastal Plain

Mud Maker: The Man Behind MLB’s Essential Secret Sauce

Jim Bintliff’s collection of lies is small and sharply curated, each one loose enough to be plausible and mundane enough to limit interest in verifying it. They work like this: Bintliff will be out on the banks of a tributary of the Delaware River, in his personal uniform of denim cutoffs and disintegrating sneakers, using a shovel to harvest buckets of mud. Someone will come along and ask what he’s doing. Bintliff sizes up the questioner, usually a boater or swimmer or fisherman, then picks from his collection. I’ve been sent by the Environmental Protection Agency, and I’m surveying the soil. Or: I’m helping the Port Authority, looking into pollution. Or, if it’s a group of young folks who look like they’ve only come out on the water for a good time: I take this mud, and I put it on my pot plants. They grow like trees.

This always does the trick. It prevents anyone from exploring what he’s actually doing, which is what he’s done for decades, what his father did before him, and his grandfather before him: Bintliff is collecting the mud that is used to treat every single regulation major league baseball, roughly 240,000 per season.

Mud is a family business; it has been for more than half a century. For decades, baseball’s official rule book has required that every ball be rubbed before being used in a game. Bintliff’s mud is the only substance allowed. Originally marketed as “magic,” it’s just a little thicker than chocolate pudding—a tiny dab is enough to remove the factory gloss from a new ball without mucking up the seams or getting the cover too filthy. Equipment managers rub it on before every game, allowing pitchers to get a dependable grip. The mud is found only along a short stretch of that tributary of the Delaware, with the precise location kept secret from everyone, including MLB.

The business is small and fundamentally unglamorous. Bintliff harvests the mud himself, using only a shovel and a few buckets, as he has for his entire adult life. The 62-year-old has recently begun bringing a trusted assistant to help him carry the load, but other than that, the process is the same as it has always been. After he collects the mud, he hauls it back to his yard in southern New Jersey, where it sits until he’s ready to pack it up in his garage and ship it out to teams. His wife, Joanne, takes orders and does invoicing. That’s it. There’s no one and nothing else to the operation. It’s increasingly out of place in a hyper-controlled, ultra-competitive, high-tech league, where every detail is calibrated for peak efficiency.

So it shouldn’t be surprising that MLB has recently tried to eliminate Bintliff, teaming with Rawlings to develop a ball that doesn’t need to be enhanced by mud. But baseball is realizing that it isn’t so easy to replace him, and, in fact, it might not be possible at all.

by Emma Baccellieri, Sports Illustrated |  Read more:
Image: LEBRECHTMEDIA

Face Recognition, Bad People and Bad Data

  • We worry about face recognition just as we worried about databases - we worry what happens if they contain bad data and we worry what bad people might do with them
  • It’s easy to point at China, but there are large grey areas where we don't yet have a clear consensus on what ‘bad’ would actually mean, and on how far we worry because this is genuinely different rather than simply because it’s new and unfamiliar
  • Like much of machine learning, face recognition is quickly becoming a commodity tech that many people can and will use to build all sorts of things. ‘AI Ethics’ boards can go a certain way but can’t be a complete solution, and regulation (which will take many forms) will go further. But Chinese companies have their own ethics boards and are already exporting their products.
Way back in the 1970s and early 1980s, the tech industry created a transformative new technology that gave governments and corporations an unprecedented ability to track, analyse and understand all of us. Relational databases meant that for the first time things that had always been theoretically possible on a small scale became practically possible on a massive scale. People worried about this, a lot, and wrote books about it, a lot.


Specifically, we worried about two kinds of problem:
  • We worried that these databases would contain bad data or bad assumptions, and in particular that they might inadvertently and unconsciously encode the existing prejudices and biases of our societies and fix them into machinery. We worried people would screw up.
  • And, we worried about people deliberately building and using these systems to do bad things
That is, we worried what would happen if these systems didn’t work and we worried what would happen if they did work.

We’re now having much the same conversation about AI in general (or more properly machine learning) and especially about face recognition, which has only become practical because of machine learning. And, we’re worrying about the same things - we worry what happens if it doesn’t work and we worry what happens if it does work. We’re also, I think, trying to work out how much of this is a new problem, and how much of it we’re worried about, and why we’re worried.

First, ‘when people screw up’.

When good people use bad data

People make mistakes with databases. We’ve probably all heard some variant of the old joke that the tax office has misspelled your name and it’s easier to change your name than to get the mistake fixed. There’s also the not-at-all-a-joke problem that you have the same name as a wanted criminal and the police keep stopping you, or indeed that you have the same name as a suspected terrorist and find yourself on a no-fly list or worse. Meanwhile, this spring a security researcher claimed that he’d registered ‘NULL’ as his custom licence plate and now gets hundreds of random misdirected parking tickets.
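
As a purely illustrative aside (none of this is from the article - the names and numbers are invented), here is a toy sketch of how the ‘NULL’ collision happens: a back-office system that records an unreadable plate as the literal string “NULL” will route every such ticket to whoever actually registered NULL.

```python
# A toy, entirely invented sketch of the 'NULL' plate collision: tickets whose
# plate could not be read are stored with the sentinel string "NULL", which
# then matches a real vanity-plate registration.
tickets = [
    {"plate": "NULL", "fine": 35},    # camera couldn't read the plate
    {"plate": "ABC123", "fine": 60},
    {"plate": "NULL", "fine": 35},    # another unreadable plate
]

owners = {"ABC123": "some other driver", "NULL": "the security researcher"}

for ticket in tickets:
    print(owners.get(ticket["plate"], "unknown"), "owes", ticket["fine"])
# The researcher 'owes' two fines that have nothing to do with him, because a
# sentinel value and a real registration collided.
```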

These kinds of stories capture three distinct issues:
  • The system might have bad data (the name is misspelled)…
  • Or have a bug or bad assumption in how it processes data (it can’t handle ‘Null’ as a name, or ‘Scunthorpe’ triggers an obscenity filter)
  • And, the system is being used by people who don’t have the training, processes, institutional structure or individual empowerment to recognise such a mistake and react appropriately.
Of course, all bureaucratic processes are subject to this set of problems, going back a few thousand years before anyone made the first punch card. Databases gave us a new way to express it on a different scale, and so now does machine learning. But ML brings different kinds of ways to screw up, and these are inherent in how it works.

So: imagine you want a software system that can recognise photos of cats. The old way to do this would be to build logical steps - you’d make something that could detect edges, something that could detect pointed ears, an eye detector, a leg counter and so on… and you’d end up with several hundred steps all bolted together and it would never quite work. Really, this was like trying to make a mechanical horse - perfectly possible in theory, but in practice the complexity was too great. There’s a whole class of computer science problems like this - things that are easy for us to do but hard or impossible for us to explain how we do them. Machine learning changes these from logic problems to statistics problems. Instead of writing down how you recognise a photo of X, you take a hundred thousand examples of X and a hundred thousand examples of not-X and use a statistical engine to generate (‘train’) a model that can tell the difference to a given degree of certainty. Then you give it a photo and it tells you whether it matched X or not-X and by what degree. Instead of telling the computer the rules, the computer works out the rules based on the data and the answers (‘this is X, that is not-X’) that you give it. (...)
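
To make the shift from rules to examples concrete, here is a minimal, hypothetical sketch - synthetic numbers standing in for image features, scikit-learn standing in for the ‘statistical engine’ - of training a model from labelled examples rather than hand-written steps.

```python
# A minimal sketch with made-up data, not a real image pipeline: instead of
# hand-writing rules for "X vs not-X", we hand the model labelled examples
# and let it fit the rule itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each photo has already been reduced to 16 numeric features.
n = 2000
features = rng.normal(size=(n, 16))
# An invented hidden rule the model has to rediscover from examples alone.
labels = (features[:, 0] + 0.5 * features[:, 3] > 0).astype(int)  # 1 = "X"

train_x, test_x, train_y, test_y = train_test_split(
    features, labels, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(train_x, train_y)  # 'training'
print("held-out accuracy:", model.score(test_x, test_y))

# The answer for a new example is a probability, not a hard yes/no.
print("P(X) for one new example:", model.predict_proba(test_x[:1])[0, 1])
```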

This works fantastically well for a whole class of problem, including face recognition, but it introduces two areas for error.

First, what exactly is in the training data - in your examples of X and Not-X? Are you sure? What ELSE is in those example sets?

My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale - so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.

The structural thing to understand here is that the system has no understanding of what it’s looking at - it has no concept of skin or cancer or colour or gender or people or even images. It doesn’t know what these things are any more than a washing machine knows what clothes are. It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? Are all your ‘healthy’ photos taken under incandescent light and all your ‘unhealthy’ pictures taken under LED light? You might not be able to tell, but the computer will be using that as a signal.
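
An invented toy version of the ruler problem makes the point: if a feature that merely co-occurs with the label in the training set is easier to pick up than the real signal, the model leans on it, and the apparent accuracy evaporates once that co-occurrence goes away. Everything below is synthetic, purely for illustration.

```python
# Hypothetical illustration of a training-data confound ("the ruler problem").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

true_signal = rng.normal(size=n)                       # the real, weak signal
label = (true_signal + rng.normal(scale=2.0, size=n) > 0).astype(int)

# In the confounded training set, the "ruler" is present almost exactly
# when the label is positive.
ruler_train = (label + (rng.random(n) < 0.02)) % 2

x_train = np.column_stack([true_signal, ruler_train])
model = LogisticRegression(max_iter=1000).fit(x_train, label)

# At deployment time the ruler is unrelated to the label, and the apparent
# accuracy collapses, because the model mostly learned "ruler present".
ruler_deploy = rng.integers(0, 2, size=n)
x_deploy = np.column_stack([true_signal, ruler_deploy])
print("accuracy with the confound:   ", model.score(x_train, label))
print("accuracy without the confound:", model.score(x_deploy, label))
```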

Second, a subtler point - what does ‘match’ mean? The computers and databases that we’re all familiar with generally give ‘yes/no’ answers. Is this licence plate reported stolen? Is this credit card valid? Does it have available balance? Is this flight booking confirmed? How many orders are there for this customer number? But machine learning doesn’t give yes/no answers. It gives ‘maybe’, ‘maybe not’ and ‘probably’ answers. It gives probabilities. So, if your user interface presents a ‘probably’ as a ‘yes’, this can create problems.
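
A tiny made-up sketch of that last point: the score is the model’s honest answer, and the threshold that turns it into the word on the screen is a product decision layered on top of it.

```python
# The model returns a probability; the word the user sees is a design choice.
def present_result(similarity: float, threshold: float) -> str:
    """Turn a similarity score in [0, 1] into the label shown on screen."""
    return "MATCH" if similarity >= threshold else "no match"

score = 0.62  # the model's honest answer: "maybe"

print(present_result(score, threshold=0.9))  # -> no match
print(present_result(score, threshold=0.5))  # -> MATCH
# Same model, same photo, same score: a different threshold turns a 62%
# 'probably' into a confident-looking MATCH.
```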

You can see both of these issues coming together in a couple of recent publicity stunts: train a face recognition system on mugshots of criminals (and only criminals), and then take a photo of an honest and decent person (normally a politician) and ask if there are any matches, taking care to use a fairly low confidence level, and the system says YES! - and this politician is ‘matched’ against a bank robber.

To a computer scientist, this can look like sabotage - you deliberately use a skewed data set, deliberately set the confidence threshold too low for the use case and then (mis)represent a probabilistic result as YES WE HAVE A MATCH. You could have run the same exercise with photos of kittens instead of criminals, or indeed photos of cabbages - if you tell the computer ‘find the closest match for this photo of a face amongst these photos of cabbages’, it will say ‘well, this cabbage is the closest.’ You’ve set the system up to fail - like driving a car into a wall and then saying ‘Look! It crashed!’ as though you’ve proved something.
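
The cabbage version is easy to make concrete. In this invented sketch the ‘embeddings’ are just random vectors, but the structural point holds: a closest-match query returns something no matter what the gallery contains, and only a sensible cut-off stops that something from being reported as a match.

```python
# Invented sketch: nearest-match search always returns *something*.
import numpy as np

rng = np.random.default_rng(2)

probe = rng.normal(size=128)            # "embedding" of the politician's photo
gallery = rng.normal(size=(1000, 128))  # "embeddings" of a thousand cabbages

distances = np.linalg.norm(gallery - probe, axis=1)
best = int(np.argmin(distances))

print(f"closest 'match': cabbage #{best}, distance {distances[best]:.2f}")
# Without a sensible distance cut-off this reports a 'match' every single
# time - which is the stunt, not a property of the politician's face.
```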

But of course, you have proved something - you’ve proved that cars can be crashed. And these kinds of exercises have value because people hear ‘artificial intelligence’ and think that it’s, well, intelligence - that it’s ‘AI’ and ‘maths’ and a computer and ‘maths can’t be biased’. The maths can’t be biased but the data can be. There’s a lot of value to demonstrating that actually, this technology can be screwed up, just as databases can be screwed up, and they will be. People will build face recognition systems in exactly this way and not understand why they won’t produce reliable results, and then sell those products to small police departments and say ‘it’s AI - it can never be wrong’.

These issues are fundamental to machine learning, and it’s important to repeat that they have nothing specifically to do with data about people. You could build a system that recognises imminent failure in gas turbines and not realise that your sample data has biased it against telemetry from Siemens sensors. Equally, machine learning is hugely powerful - it really can recognise things that computers could never recognise before, with a huge range of extremely valuable use cases. But, just as we had to understand that databases are very useful but can be ‘wrong’, we also have to understand how this works, both to try to avoid screwing up and to make sure that people understand that the computer could still be wrong. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but we wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.

by Benedict Evans |  Read more:
Image: uncredited