Monday, October 30, 2017

Hillary Clinton Releases Thousands of Pythons in Florida to Win the 2016 Election

How much more evidence do we need?!

When today’s news broke, I was dumbfounded and horrified. Not because I didn’t expect it, but because the mainstream media has once again totally botched the biggest story of the 2016 election. Of course, I am referring to the new allegations that, during the presidential election, Hillary Clinton personally oversaw an effort to set giant pythons loose in Florida to eat all of the Trump voters.

Also, don’t listen to the Mueller stuff. Just stay away from that, okay?

The facts of today’s news are incontrovertible. In 2016, Hillary Clinton visited Florida more than any other state except for Ohio and Pennsylvania. Fact. Also, Florida currently has an infestation of Burmese pythons that are causing chaos in the Everglades. Fact. Finally, there could be a recording out there of Hillary Clinton saying, and I quote, “The Clinton Foundation is a front for raising thousands of snakes that I train to consume people who are likely to vote for my political opponents.” Fact.

Do you know what’s not a fact? The new indictments in the Russia investigation. You can be indicted for anything these days — it doesn’t necessarily mean that you compromised American democracy in cahoots with a foreign power. That being said, Hillary Clinton should be indicted.

We only need one reason to see why: every time she visited Florida during the 2016 campaign, she was completely out of the public eye for literally minutes at a time. During each of those episodes, she had every chance to quietly slip away, creep to a warm nest of mangrove roots or marsh grass, and empty a bag full of baby pythons carefully bred to devour Republican voters in swing districts.

It’s obvious that the entire Mueller investigation is a total charade. This is the real story: Hillary Clinton may very well have personally deposited thousands of pythons throughout Florida with the express intent of murdering thousands of Americans and replacing them with, I assume, liberal robots.

These claims should be given even more weight due to the eerily suspicious timing of the whole so-called “official inquiry.” Do you really think it’s a coincidence that a federal grand jury approved the charges in the special investigation just days before the news broke about the Clinton campaign’s Python Strategy? Seems to me that we’ve got a 2016 presidential candidate spewing out random nonsense in an effort to distract Americans, cast doubt on perfectly legitimate investigations, and slither to safety. That candidate’s name is Hillary Clinton, and do you know what else slithers? Giant man-eating pythons. I am deliberate in my word choice.

If you still doubt this real story that has been reported by numerous real sources, then try proving to yourself that Hillary Clinton didn’t spend the past four years working to build a self-sustaining reptile colony that has been genetically engineered to find Trump supporters tasty. You can’t, because that would require proving a negative, which is impossible, but also because she did it. Maybe she even did it with Russia. Maybe they’re Russian Burmese pythons.

by Matthew Disler, McSweeney's |  Read more:
Image: via

Sunday, October 29, 2017


Tom Gauld
via:

Who Killed Reality? And Can We Bring It Back?

It has taken me nearly forever to notice a stupid and obvious fact about Donald Trump. He rose to fame as a reality TV star, and the one thing everyone understands about reality TV — people who love it and people who hate it — is that reality TV is not reality. It’s something else: the undermining of reality, the pirated version of reality, the perverted simulation of reality. If reality is Hawkins, Indiana, then reality TV is the Upside Down.

So I’m not sure how many of the people who voted for Trump actually thought they were getting a real president of a real country in the real world. (I feel badly for those people, though not as badly as they should feel about themselves.) That whole "real world" thing has sort of worn out its appeal. They wanted a devious goblin-troll from another dimension who would make the libtards howl and pee their panties, and so far they have had no reason to be disappointed.

Zero legislative accomplishments, an utterly incoherent foreign policy, a wink-and-nod acquaintanceship with neo-Nazis and white supremacists and an ever-lengthening list of broken promises and blatant falsehoods? Whatever, Poindexter: Fake news! Anyway, it’s all worth it to watch people in suits with Ivy League educations turn red on TV and start talking about history and the Constitution and all that other crap.

A year ago last August, in what felt like a noxious political environment but now looks like a different nation, a different historical era and perhaps a different reality, I wrote a mid-campaign Zeitgeist report that contained a strong premonition of what was coming. It wasn’t the only premonition I had while covering the 2016 presidential campaign. But I’m honestly not congratulating myself here, because like many other people who write about politics, I covered up my moments of dark insight with heaping doses of smug and wrong. (...)

Since then, I have come to the conclusion that the real innovators and disruptors in this dynamic were not the Bannon-Hannity Trump enablers in the media but the Trump demographic itself, which was more substantial and more complicated than we understood at the time. Trump’s supporters are mostly either studied with anthropological condescension or mocked as a pack of delusional racists hopped up on OxyContin and Wendy’s drive-through, who have halfway convinced themselves that their stagnant family incomes and sense of spiritual aimlessness are somehow the fault of black people and Muslims and people with PhDs. But in some ways they were ahead of the rest of us.

Don’t get me wrong: A lot of them are delusional racists who believe all sorts of untrue and unsavory things. But MAGAmericans have also imbibed a situational or ontological relativism that would impress the philosophy faculty at those coastal universities their grandkids will not be attending. They have grasped something important about the nature of reality in the 21st century — which is that reality isn’t important anymore. (...)

When Trump exults on Twitter over the perceived defeat of his enemies, Republican or Democrat or whomever, it often appears ludicrous and self-destructive to those of us out here in the realm of reality. But he’s making the same point over and over, and I think his followers get it: I’m down here in the labyrinth gnawing on the bones, and you haven’t even figured out how to fight me! To get back to the “Stranger Things” references, there must be a rule in Dungeons & Dragons that covers this scenario: There’s no point in attacking an imaginary creature with a real sword. (...)

Repeatedly hitting people over the head with a rolled-up newspaper, as if they were disobedient doggies, while telling them that Donald Trump is a liar and a fraud is pretty much the apex state of liberal self-parody. They know that. That’s why they like him.

Trump is a prominent symbol of the degradation or destruction of reality, but he didn’t cause it. He would not conceivably be president today — an eventuality that will keep on seeming fictional, as long as it lasts — if all of us, not just Republicans or the proverbial white working class, hadn’t traveled pretty far down the road into the realm of the not-real. Reality just wasn’t working out that well. God is dead, or at least he moved really far away with no phone and no internet, and a lot of reassuring old-time notions of reality loaded in his van. The alternative for many Americans is dead-end service jobs, prescription painkillers and blatantly false promises that someday soon technology and entrepreneurship will make everything better.

by Andrew O'Hehir, Salon |  Read more:
Image: AP/Getty/Shutterstock/Salon

How Big Medicine Can Ruin Medicare for All

In 2013, Senator Bernie Sanders, a self-described “democratic socialist,” couldn’t find a single co-sponsor for his healthcare plan, which would replace private insurance with Medicare-like coverage for all Americans regardless of age or income.

Today, the roll call of supporters for his latest version includes the leading lights of the Democratic party, including many with plausible presidential aspirations. It’s enough to make an exasperated Dana Milbank publish a column in the Washington Post under the headline ‘The Democrats have become socialists’.

But have they? Actually, no.

Real socialized medicine might work brilliantly, as it has in some other countries. In the United Kingdom, the socialist government of Labour’s Clement Attlee nationalized the healthcare sector after the second world war, and today the British government still owns and operates most hospitals and directly employs most healthcare professionals.

The UK’s National Health Service has its problems, but it produces much more health per dollar than America’s – largely because it doesn’t overpay specialists or waste money on therapies and technologies of dubious clinical value. Though they smoke and drink more, Britons live longer than Americans while paying 40% less per capita for healthcare.

But what advocates of “single payer” healthcare in this country are talking about, often without seeming to realize it, is something altogether different. What they are calling for, instead, is vastly expanding eligibility for the existing Medicare program, or for a new program much like it.

So, what does Medicare do? It doesn’t produce healthcare. Rather, it pays bills submitted by private healthcare providers.

What would happen if such a system replaced private healthcare insurance in the United States by becoming the “single payer” of all healthcare bills? If adopted here a generation ago, it could have led to substantial healthcare savings.

After Canada adopted such a system in the early 1970s, each of its provincial governments became the sole purchaser of healthcare within its own borders. These governments then used their concentrated purchasing power to negotiate fee schedules with doctors and fixed budgets with hospitals and medical suppliers that left Canadians with a far thriftier, more efficient system than the United States even as they gained access to more doctors’ visits per capita, better health, and longer lives.

The same might well have happened in the United States if we had adopted a single-payer system then. But we didn’t. Instead, we created something more akin to a “single-provider” system by allowing vast numbers of hospital mergers and other corporate combinations that have left most healthcare markets in the United States highly monopolized. And what happens when a single payer meets a single provider? It’s not pretty.

Healthcare delivery in the United States a generation ago was still in many ways a cottage industry, but not any more. Not only have 60 drug companies combined into 10, but hospitals, outpatient facilities, physician practices, labs, and other healthcare providers have been merging into giant, integrated, corporate healthcare platforms that increasingly dominate the supply side of medicine in most of the country.

According to a study headed by Harvard economist David M Cutler, between 2003 and 2013 the share of hospitals controlled by large holding companies increased from 7% to 60%. A full 40% of all hospital stays now occur in healthcare markets where a single entity controls all hospitals.

If you want a hint of what a single-payer healthcare system would look like today if grafted on to our currently highly monopolized system, think about how well our “single-payer” Pentagon procurement system does when it comes to bargaining with sole-source defense contractors.

In theory, the government could just set the price it’s willing to pay for the next generation of fighter jets or aircraft carriers and refuse to budge. But in practice, a highly consolidated military-industrial complex has enough economic and political muscle to ensure not only that it is paid well, but also that Congress appropriates money for weapons systems the Pentagon doesn’t even want.

The dynamic would be much the same if a single-payer system started negotiating with the monopolies that control America’s healthcare delivery systems.

by Phillip Longman, The Guardian | Read more:
Image: Michael Reynolds/EPA
[ed. A longer and more detailed version of this article can be found in the Washington Monthly.] 

Saturday, October 28, 2017

Smile and Say, "Money"

Julie Andrews is one of the most iconic and remarkable performers of our time. She’s played some of our favorite roles and has become part of our childhoods. Who among us hasn’t sat in front of Mary Poppins hundreds of times or watched The Princess Diaries over and over?

Not only is she a talented performer and accomplished actress, she is also a delightful and beautiful person. Julie Andrews appeared on The Late Show with Stephen Colbert and offered some very handy advice for the host. In the age of constant selfies, this is advice we can all use.


Andrews gifts Stephen Colbert with a bit of advice about how to appear natural and effortless in photos. Much like she always does. The trick is to say “money” instead of cheese, and it works every time. Andrews says it’s foolproof.

“There’s something about it,” she adds. “It drops the jaw a bit and makes you smile nicely.”(...)

You must count down from three and then say the word “money” out loud. For it to really work, however, you should probably think about money too. And just for good measure, you could think about all the new shoes and tacos money could buy you. Because, like Colbert says, “it puts you in a good mood.”

by Sundi Rose, Hello Giggles |  Read more:
Image: YouTube

Cryptonomicon

Neal Stephenson came along a little late in the game to be considered one of the Web's genuine prophets. His first novel, a spiky academic satire called ''The Big U,'' was published in 1984 -- the same year William Gibson, in ''Neuromancer,'' coined the term ''cyberspace.'' Stephenson didn't plant both feet in science fiction until 1992, when his novel ''Snow Crash'' -- a swashbuckling fantasy that largely takes place in a virtual world called the Metaverse -- became an instant, and deserved, cyberpunk classic.

''Snow Crash'' remains the freshest and most fully realized exploration of the hacker mythology. Set in a futuristic southern California, the novel rolls out a society in ruins. Citizens live in heavily fortified ''Burbclaves,'' pizza delivery services are run with military precision by a high-tech Cosa Nostra, and the Library of Congress (now comprising digital information uploaded by swarms of freelance Matt Drudge types) has merged with the C.I.A. and ''kicked out a big fat stock offering.'' In reality, Stephenson's hero delivers pizza. In the Metaverse, he's a warrior on a mission to squelch a deadly computer virus.

Sounds perfectly preposterous -- and it is. But Stephenson, a former code writer, has such a crackling high style and a feel for how computers actually function that he yanks you right along. Despite all the high-tech frippery, there's something old-fashioned about Stephenson's work. He cares as much about telling good stories as he does about farming out cool ideas. There's a strong whiff of moralism in his books, too. The bad guys in his fiction -- that is, anyone who stands in a well-intentioned hacker's way -- meet bad ends. In ''Snow Crash,'' one nasty character tries to rape a young woman, only to find out she's installed an intrauterine device called a ''dentata.''

Stephenson's antiquated commitment to narrative, his Dickensian brio, is part of what makes his gargantuan new novel, ''Cryptonomicon,'' distinct from the other outsize slabs of post-modern fiction we've seen recently -- David Foster Wallace's ''Infinite Jest,'' Don DeLillo's ''Underworld,'' Thomas Pynchon's ''Mason & Dixon.'' For all the pleasures scattered throughout those books, they're dry, somewhat forbidding epics that beckon industrious graduate students while checking the riffraff at the door. ''Cryptonomicon,'' on the other hand, is a wet epic -- as eager to please as a young-adult novel, it wants to blow your mind while keeping you well fed and happy. For the most part, it succeeds. It's brain candy for bitheads.

''Cryptonomicon'' could have easily been titled ''Incoming Mail.'' It's a sprawling, picaresque novel about code making and code breaking, set both during World War II, when the Allies were struggling to break the Nazis' fabled Enigma code, and during the present day, when a pair of entrepreneurial hackers are trying to create a ''data haven'' in the Philippines -- a place where encrypted data (and an encrypted electronic currency) can be kept from the prying eyes of Big Brother. It is, at heart, a book about people who want to read one another's mail.

''Cryptonomicon'' is so crammed with incident -- there are dozens of major characters, multiple plots and subplots, at least three borderline-sentimental love stories and discursive ruminations on everything from Bach's organ music and Internet start-ups to the best way to eat Cap'n Crunch cereal -- that it defies tidy summary. Suffice it to say that some early scenes are set at Princeton University in the 1940's, where an awkward young mathematical genius named Lawrence Pritchard Waterhouse befriends the computer pioneer Alan Turing. (Turing's interest in Waterhouse goes beyond their bicycle rides and theoretical discussions; he makes ''an outlandish proposal to Lawrence involving penises. It required a great deal of methodical explanation, which Alan delivered with lots of blushing and stuttering.'') When war breaks out, Turing is dispatched to Bletchley Park in Britain, where he helps break Enigma. Waterhouse is ultimately assigned to a top-secret outfit called Detachment 2702, led by a gung-ho marine named Bobby Shaftoe, whose mission it is to prevent the Nazis from discovering that their code has been cracked wide open.

Stephenson cheerfully stretches historical plausibility -- there are some absurdly heroic (if electrifying) battle scenes, hilarious cameos by Ronald Reagan and Douglas MacArthur, and shadowy conspiracies involving U-boat captains and fallen priests -- but plays the mathematics of code breaking straight. We witness, in detail, Turing's early attempts to create a bare-bones computer that will help decode Nazi messages, and are plunged into the organized chaos at Bletchley Park, where ''demure girls, obediently shuffling reams of gibberish through their machines, shift after shift, day after day, have killed more men than Napoleon.'' (...)

Stephenson intercuts these wartime scenes with chapters about Waterhouse's grandson, Randy, who works for a start-up company called Epiphyte that plans not only to create a data haven but also to use a cache of gold buried by the Japanese Army during World War II to back an electronic currency protected by state-of-the-art encryption. These chapters are the book's shaggiest and most winsome, if only because Stephenson is so plugged into Randy's hacker sensibilities. As he skims along, Stephenson riffs on everything from Wired magazine -- here called Turing, with the motto ''So Hip, We're Stupid!'' -- and Microsoft's legal team's ''state-of-the-art hellhounds'' to pretentious cultural-theory academics and Silicon Valley's interest in cryogenics. (Some hackers here wear bracelets offering a $100,000 reward to medics who freeze their dead bodies.) That ''Cryptonomicon'' contains the greatest hacking scene ever put to paper, performed by Randy while under surveillance in a Philippine prison, should further endear this novel to computer freaks on both coasts. I expect to see, for the next decade or so, dogeared copies of this novel rattling around on the floorboards of the Toyotas (or, increasingly, Range Rovers) that fill Silicon Valley parking lots.

Should anyone else bother with it? My answer is a guarded yes. Stephenson could have easily cut this novel by a third, and it's terrifying that he imagines this 900-plus-page monster to be the ''first volume'' in an even longer saga. Worse, he strains too hard at reconciling the book's multiple plot strands. We can understand the subtle links between World War II code breaking and today's politicized encryption battles -- and the spiritual links between cryptographers in the 1940's and hackers in the 1990's -- without the nonsense about secret gold deposits and coded messages that filter down (improbably) through generations. Stephenson, I suspect, simply can't help himself; he's having too good a time to ever consider applying the brakes.

by Dwight Garner, NY Times |  Read more:
Image: via:
[ed. I stumbled across Snow Crash and Neal Stephenson just a few months ago and am now deeply into Cryptonomicon. It's a wonderful (and wonderfully dense) novel, full of intrigue and history. So delighted to find someone of this literary talent that I'd somehow overlooked all these years.]

Friday, October 27, 2017

The End of an Error?

In 1998 the Lancet, one of Elsevier’s most prestigious journals, published a paper by Andrew Wakefield and twelve colleagues that suggested a link between the MMR vaccine and autism. Further studies were quickly carried out, which failed to confirm such a link. In 2004, ten of the twelve co-authors publicly dissociated themselves from the claims in the paper, but it was not until 2010 that the paper was formally retracted by the Lancet, soon after which Wakefield was struck off the UK medical register.

A few years after Wakefield’s article was published, the Russian mathematician Grigori Perelman claimed to have proved Thurston’s geometrization conjecture, a result that gives a complete description of mathematical objects known as 3-manifolds, and in the process proves a conjecture due to Poincaré that was considered so important that the Clay Mathematics Institute had offered a million dollars for a solution. Perelman did not submit a paper to a journal; instead, in 2002 and 2003 he simply posted three preprints to the arXiv, a preprint server used by many theoretical physicists, mathematicians and computer scientists. It was difficult to understand what he had written, but such was his reputation, and such was the importance of his work if it was to be proved right, that a small team of experts worked heroically to come to grips with it, correcting minor errors, filling in parts of the argument where Perelman had been somewhat sketchy, and tidying up the presentation until it finally became possible to say with complete confidence that a solution had been found. For this work Perelman was offered a Fields Medal and the million dollars, both of which he declined.

A couple of months ago, Norbert Blum, a theoretical computer scientist from Bonn, posted to the arXiv a preprint claiming to have answered another of the Clay Mathematics Institute’s million-dollar questions. Like Perelman, Blum was an established and respected researcher. The preprint was well written, and Blum made clear that he was aware of many of the known pitfalls that await anybody who tries to solve the problem, giving careful explanations of how he had avoided them. So the preprint could not simply be dismissed as the work of a crank. After a few days, however, by which time several people had pored over the paper, a serious problem came to light: one of the key statements on which Blum’s argument depended directly contradicted a known (but not at all obvious) result. Soon after that, a clear understanding was reached of exactly where he had gone wrong, and a week or two later he retracted his claim.

These three stories are worth bearing in mind when people talk about how heavily we rely on the peer review system. It is not easy to have a paper published in the Lancet, so Wakefield’s paper presumably underwent a stringent process of peer review. As a result, it received a very strong endorsement from the scientific community. This gave a huge impetus to anti-vaccination campaigners and may well have led to hundreds of preventable deaths. By contrast, the two mathematics preprints were not peer reviewed, but that did not stop the correctness or otherwise of their claims being satisfactorily established.

An obvious objection to that last sentence is that the mathematics preprints were in fact peer-reviewed. They may not have been sent to referees by the editor of a journal, but they certainly were carefully scrutinized by peers of the authors. So to avoid any confusion, let me use the phrase “formal peer review” for the kind that is organized by a journal and “informal peer review” for the less official scrutiny that is carried out whenever an academic reads an article and comes to some sort of judgement on it. My aim here is to question whether we need formal peer review. It goes without saying that peer review in some form is essential, but it is much less obvious that it needs to be organized in the way it usually is today, or even that it needs to be organized at all.

What would the world be like without formal peer review? One can get some idea by looking at what the world is already like for many mathematicians. These days, the arXiv is how we disseminate our work, and the arXiv is how we establish priority. A typical pattern is to post a preprint to the arXiv, wait for feedback from other mathematicians who might be interested, post a revised version of the preprint, and send the revised version to a journal. The time between submitting a paper to a journal and its appearing is often a year or two, so by the time it appears in print, it has already been thoroughly assimilated. Furthermore, looking a paper up on the arXiv is much simpler than grappling with most journal websites, so even after publication it is often the arXiv preprint that is read and not the journal’s formatted version. Thus, in mathematics at least, journals have become almost irrelevant: their main purpose is to provide a stamp of approval, and even then one that gives only an imprecise and unreliable indication of how good a paper actually is. (...)

Defences of formal peer review tend to focus on three functions it serves. The first is that it is supposed to ensure reliability: if you read something in the peer-reviewed literature, you can have some confidence that it is correct. This confidence may fall short of certainty, but at least you know that experts have looked at the paper and not found it ­obviously flawed.

The second is a bit like the function of film reviews. We do not want to endure a large number of bad films in order to catch the occasional good one, so we leave that to film critics, who save us time by identifying the good ones for us. Similarly, a vast amount of academic literature is being produced all the time, most of it not deserving of our attention, and the peer-review system saves us time by selecting the most important articles. It also enables us to make quick judgements about the work of other academics: instead of actually reading the work, we can simply look at where it has been published.

The third function is providing feedback. If you submit a serious paper to a serious journal, then whether or not it is accepted, it has at least been read, and if you are lucky you receive valuable advice about how to improve it. (...)

It is not hard to think of other systems that would provide feedback, but it is less clear how they could become widely adopted. For example, one common proposal is to add (suitably moderated) comment pages to preprint servers. This would allow readers of articles to correct mistakes, make relevant points that are missing from the articles, and so on. Authors would be allowed to reply to these comments, and also to update their preprints in response to them. However, attempts to introduce systems like this have not, so far, been very successful, because most articles receive no comments. This may be partly because only a small minority of preprints are actually worth commenting on, but another important reason is that there is no moral pressure to do so. Throwing away the current system risks throwing away all the social capital associated with it and leaving us impoverished as a result. (...)

Why does any of this matter? Defenders of formal peer review usually admit that it is flawed, but go on to say, as though it were obvious, that any other system would be worse. But it is not obvious at all. If academics put their writings directly online and systems were developed for commenting on them, one immediate advantage would be a huge amount of money saved. Another would be that we would actually get to find out what other people thought about a paper, rather than merely knowing that somebody had judged it to be above a certain not very precise threshold (or not knowing anything at all if it had been rejected). We would be pooling our efforts in useful ways: for instance, if a paper had an error that could be corrected, this would not have to be rediscovered by every single reader.

by Timothy Gowers, TLS |  Read more:
Image: “Perelman-Poincaré” by Roberto Bobrow, 2010
[ed. Crowdsourcing peer reviews. Why not? Possibly because the current dysfunctional scientific journal business and its outsized influence on what gets published and therefore deemed important might be threatened?]

Thursday, October 26, 2017

Dallas Killers Club

There were three horrible public executions in 1963. The first came in February, when the prime minister of Iraq, Abdul Karim Qassem, was shot by members of the Ba’ath party, to which the United States had furnished money and training. A film clip of Qassem’s corpse, held up by the hair, was shown on Iraqi television. “We came to power on a CIA train,” said one of the Ba’athist revolutionaries; the CIA’s Near East division chief later boasted, “We really had the Ts crossed on what was happening.”

The second execution came in early November 1963: the president of Vietnam, Ngo Dinh Diem, was shot in the back of the head and stabbed with a bayonet, in a coup that was encouraged and monitored by the United States. President Kennedy was shocked at the news of Diem’s gruesome murder. “I feel we must bear a good deal of responsibility for it,” he said. “I should never have given my consent to it.” But Kennedy sent a congratulatory cable to Henry Cabot Lodge Jr., the ambassador to South Vietnam, who had been in the thick of the action. “With renewed appreciation for a fine job,” he wrote.

The third execution came, of course, later that month, on November 22. I was six when it happened. I wasn’t in school because we were moving to a new house with an ivy-covered tree in front. My mother told me that somebody had tried to kill the president, who was at the hospital. I asked how, and she said that a bullet had hit the president’s head, probably injuring his brain. She used the word “brain.” I asked why, and she said she didn’t know. I sat on a patch of carpeting in an empty room, believing that the president would still get better, because doctors are good and wounds heal. A little while later I learned that no, the president was dead.

Since that day, till very recently, I’ve avoided thinking about this third assassination. Any time I saw the words “Lee Harvey Oswald” or “grassy knoll” or “Jack Ruby,” my mind quickly skipped away to other things. I didn’t go to see Oliver Stone’s JFK when it came out, and I didn’t read DeLillo’s Libra, or Gaeton Fonzi’s The Last Investigation, or Posner’s Case Closed, or any of the dozens of mass-market paperbacks—many of them with lurid black covers and red titles—that I saw reviewed, blamed, praised.

But eventually you have to face up to it somehow: a famous, smiling, waving New Englander, wearing a striped, monogrammed shirt, sitting in a long blue Lincoln Continental next to his smiling, waving wife, has his head blown open during a Texas parade. How could it happen? He was a good-looking person, with an attractive family and an incredible plume of hair, and although he wasn’t a very effective or even, at times, a very well-intentioned president—he increased the number of thermonuclear warheads, more than doubled the budget for chemical and biological weapons, tripled the draft, nearly got us into an end-time war with Russia, and sent troops, napalm, and crop defoliants into Vietnam—some of his speeches were, even so, noble and true and ringingly delivered and permanently inspiring. He was a star; they loved him in Europe. And then suddenly he was just a dead, naked man in a hospital, staring fixedly upward, with a dark hole in his neck. Autopsy doctors were poking their fingers in his wounds and taking pictures and measuring, and burning their notes afterward and changing their stories. “I was trying to hold his hair on,” Jacqueline Kennedy told the Warren Commission when they asked her to describe her experience in the limousine. She saw, she said, a wedge-shaped piece of his skull: “I remember it was flesh colored with little ridges at the top.” The president, the motorcade he rode in, the whole country, had been, to use a postmortem word, “avulsed”—blasted inside out.

Who or what brought this appalling crime into being? Was it a mentally unstable ex-Marine and lapsed Russophile named Oswald, aiming down at the back of Kennedy’s head through leafy foliage from the book depository, all by himself, with no help? Many bystanders and eyewitnesses—including Jean Hill, whose interview was broadcast on NBC about a half an hour after the shooting, and Kennedy advisers Kenny O’Donnell and Dave Powers, who rode in the presidential motorcade—didn’t think so: hearing the cluster of shots, they looked first toward a little slope on the north side of Dealey Plaza, and not back at the alleged sniper’s window.

A young surgeon at Parkland Memorial Hospital, Charles Crenshaw, who watched Kennedy’s blood and brains drip into a kick bucket in Trauma Room 1, also knew immediately that the president had been fatally wounded from a location toward the front of the limousine, not from behind it. “I know trauma, especially to the head,” Crenshaw writes in JFK Has Been Shot, published in 1992, republished with updates in 2013. “Had I been allowed to testify, I would have told them”—that is, the members of the Warren Commission—“that there is absolutely no doubt in my mind that the bullet that killed President Kennedy was shot from the grassy knoll area.”

No, the convergent gunfire leads one to conclude that the shooting had to have been a group effort of some kind, a preplanned, coordinated crossfire: a conspiracy. But if it was a group effort, what affiliation united the participants? Did the CIA and its hypermilitaristic confederates—Cold Warrior bitter-enders—engineer it? That’s what Mark Lane, James DiEugenio, Gerald McKnight, and many other sincere, brave, long-time students of the assassination believe. “Kennedy was removed from office by powerful and irrational forces who opposed his revisionist Cuba policy,” writes McKnight in Breach of Trust, a closely researched book about the blind spots and truth-twistings of the Warren Commission. James Douglass argues that Kennedy was killed by “the Unspeakable”—a term from Thomas Merton that Douglass uses to describe a loose confederacy of nefarious plotters who opposed Kennedy’s “turn” towards reconciliatory back-channel negotiation. “Because JFK chose peace on earth at the height of the Cold War, he was executed,” Douglass writes.

This is the message, also, of Oliver Stone’s artful, fictionalized epic JFK: Kennedy shied away from the invasion of Cuba, he wanted us out of Vietnam, he wouldn’t bow to the military-industrial combine, and none of that was acceptable to the hard-liners who surrounded him—so they had him killed. “The war is the biggest business in America, worth $80 billion a year,” Kevin Costner says, in JFK’s big closing speech. “President Kennedy was murdered by a conspiracy that was planned in advance at the highest levels of our government, and it was carried out by fanatical and disciplined cold warriors in the Pentagon and CIA’s covert-operation apparatus.”

Well, there’s no question that the CIA was and is an invasive weed, an eyes-only historical horror show that has, through plausibly deniable covert action, brought generations of instability and carnage into the world. There is no question, either, that under presidents Truman, Eisenhower, and Kennedy, the CIA’s string of pre-Dallas coups d’état—in Africa, in the Middle East, in Southeast Asia, in Latin America—contributed to an international climate of political upheaval and bad karma that made Kennedy’s own violent death a more conceivable outcome. There’s also no question that the CIA enlisted mobsters to kill Castro—Richard Bissell, who did the enlisting, later conceded that it was “a great mistake to involve the Mafia in an assassination attempt”—and no question that the CIA’s leading lights have, for fifty years, distorted and limited the available public record of the Kennedy assassination, doing whatever they could to distance the agency from its demonstrable interest in the accused killer, Oswald. It’s also true, I think, that there were some CIA extremists, fans of “executive action,” including William Harvey and, perhaps, James Jesus Angleton, that orchid-growing Anubis of spookitude, who were secretly relieved that Kennedy was shot, and may even have known in advance that he was probably going to die down south. (“I don’t want to sober up today,” Harvey reportedly told a colleague in Rome. “This is the day the goddamned president is gonna get himself killed!” Harvey also was heard to say: “This was bound to happen, and it’s probably good that it did.”) We are in debt to the CIA-blamers for their five decades of work, often in the face of choreographed media smears. They have brought us closer to the truth. But, having now read less than one-tenth of one percent of the available books on the subject, I believe, with full consciousness that I’m only a newcomer, that they’re barking up the wrong conspiracy. I think it was basically a Mafia hit: Kennedy’s death wouldn’t have happened without Carlos Marcello.

The best, saddest, fairest assassination book I’ve read, David Talbot’s Brothers, provides an important beginning clue. Robert Kennedy, who was closer to his brother and knew more about his many enraged detractors than anyone else, told a friend that the Mafia was principally responsible for what happened November 22. In public, for the five years that remained of his life, Bobby Kennedy made no criticisms of the nine-hundred-page Warren Report, which pinned the murder on a solo killer, a “nut” (per Hoover) and “general misanthropic fella” (per Warren Committee member Richard Russell) who had dreams of eternal fame. Attorney general Kennedy said, when reporters asked, that he had no intention of reading the report, but he endorsed it in writing and stood by it. Yet on the very night of the assassination, as Bobby began his descent into a near-catatonic depression, he called one of his organized-crime experts in Chicago and asked him to find out whether the Mafia was involved. And once, when friend and speechwriter Richard Goodwin (who had worked closely with JFK) asked Bobby what he really thought, Bobby replied, “If anyone was involved it was organized crime.”

To Arthur Schlesinger, Bobby was (according to biographer Jack Newfield) even more specific, ascribing the murder to “that guy in New Orleans”—meaning Carlos Marcello, the squat, tough, smart, wealthy mobster and tomato salesman who controlled slot machines, jukebox concessions, narcotics shipments, strip clubs, bookie networks, and other miscellaneous underworldy activities in Louisiana, in Mississippi, and, through his Texas emissary Joe Civello, in Dallas. In the early sixties, the syndicate run by Marcello and his brothers made more money than General Motors; the Marcellos owned judges, police departments, and FBI bureau chiefs. And when somebody failed to honor a debt, they killed him, or they killed someone close to him.

According to an FBI informant, Carlos Marcello confessed to the assassination. Some years before he died in 1993, Marcello said—as revealed by Lamar Waldron in three confusingly thorough books, the latest and best of which is The Hidden History of the JFK Assassination—“Yeah, I had the little son of a bitch killed,” meaning President Kennedy. “I’m sorry I couldn’t have done it myself.” As for Jack Ruby, the irascible strip-club proprietor and minor Marcello operative who silenced Lee Harvey Oswald in the Dallas police station, Bobby Kennedy exclaimed, on looking over the record of Ruby’s pre-assassination phone calls, “The list was almost a duplicate of the people I called before the Rackets Committee.” And then in 1968, Bobby Kennedy himself, having just won the California primary, was shot to death in a hotel kitchen in Los Angeles by an anti-Zionist cipher with gambling debts who had been employed as a groom at the Santa Anita racetrack. The racetrack was controlled by Carlos Marcello’s friend Mickey Cohen. The mob’s palmprints were, it seems, all over the war on the Kennedy brothers. Senator John Kennedy, during the labor-racketeering hearings in 1959, said, “If they’re crooks, we don’t wound them, we kill them.” Ronald Goldfarb, who worked for Bobby Kennedy’s justice department, wrote in 1995, “There is a haunting credibility to the theory that our organized crime drive prompted a plan to strike back at the Kennedy brothers.”

Lamar Waldron’s Hidden History is a primary source for a soon-to-be-produced movie, with Robert De Niro reportedly signed to play Marcello and Leonardo DiCaprio in the part of jailhouse informant Jack Van Laningham. Other new books that offer the Mafia-did-it view are Mark Shaw’s The Poison Patriarch—which contains an interesting theory about Ruby’s celebrity lawyer, Melvin Belli, and fingers “Marcello in collusion with Trafficante, while Hoffa cheered from the sidelines”—and Stefano Vaccara’s Carlos Marcello: The Man Behind the JFK Assassination, which has just been translated. “Dallas was a political assassination because it was a Mafia murder,” writes Vaccara, an authority on the Sicilian Mafia. “The Mafia went ahead with the hit once it understood that the power structure or the ‘establishment’ would not be displeased by the possibility.” Burton Hersh, in his astute and effortlessly well-written Bobby and J. Edgar, a revised version of which appeared in 2013, calls the Warren Commission Report a “sloppily executed magic trick, a government-sponsored attempt to stuff a giant wardrobe of incongruous information into a pitifully small valise.” Carlos Marcello, Hersh is convinced, was “the organizing personality behind the murder of John Kennedy.”

by Nicholson Baker, The Baffler |  Read more:
Image: Michael Duffy
[ed. Sorry for all The Baffler articles lately... they've been putting out some really great stuff (lately and pastly). See also: Stephen King's 11/22/63, one of his best.]

Suboxone

I am the last person with a right to complain about Internet articles being too long. But if I did have that right, I think I would exercise it on Dying To Be Free, the Huffington Post’s 20,000-word article on the current state of heroin addiction treatment. I feel like it could have been about a quarter the size without losing much.

It’s too bad that most people will probably shy away from reading it, because it gets a lot of stuff really right.

The article’s thesis is also its subtitle: “There’s a treatment for heroin addiction that actually works; why aren’t we using it?” To save you the obligatory introductory human interest story: that treatment is suboxone. Its active ingredient is the drug buprenorphine, which is kind of like a safer version of methadone. Suboxone is slow-acting, gentle, doesn’t really get people high, and is pretty safe as long as you don’t go mixing it with weird stuff. People on suboxone don’t experience opiate withdrawal and have greatly decreased cravings for heroin. I work at a hospital that’s an area leader in suboxone prescription, I’ve gotten to see it in action, and it’s literally a life-saver.

Conventional heroin treatment is abysmal. Rehab centers aren’t licensed or regulated and most have little interest in being evidence-based. Many are associated with churches or weird quasi-religious groups like Alcoholics Anonymous. They don’t necessarily have doctors or psychologists, and some actively mistrust them. All of this I knew. What I didn’t know until reading the article was that – well, it’s not just that some of them try to brainwash addicts. It’s more that some of them try to cargo-cult brainwashing: they do the sorts of things that sound like brainwashing to them, without really knowing how brainwashing works, assuming it’s even a coherent goal to aspire to. Their concept of brainwashing is mostly just creating a really unpleasant environment, yelling at people a lot, enforcing intentionally over-strict rules, and in some cases even holding struggle-session-type things where everyone in the group sits in a circle, screams at the other patients, and tells them they’re terrible and disgusting. There’s a strong culture of accusing anyone who questions or balks at any of it of just being an addict, or of “not really wanting to quit”.

I have no problem with “tough love” when it works, but in this case it doesn’t. Rehab programs make every effort to obfuscate their effectiveness statistics – I blogged about this before in Part II here – but the best guesses by outside observers are that for a lot of them about 80% to 90% of graduates relapse within a couple of years. Even this paints too rosy a picture, because it excludes the people who gave up halfway through.

Suboxone treatment isn’t perfect, and relapse is still a big problem, but it’s a heck of a lot better than most rehabs. Suboxone gives people their dose of opiate and mostly removes the biological half of addiction. There’s still the psychological half of addiction – whatever it was that made people want to get high in the first place – but people have a much easier time dealing with that after the biological imperative to get a new dose is gone. Almost all clinical trials have found treatment with methadone or suboxone to be more effective than traditional rehab. Even Cochrane Review, which is notorious for never giving a straight answer to anything besides “more evidence is needed”, agrees that methadone and suboxone are effective treatments.

Some people stay on suboxone forever and do just fine – it has few side effects and doesn’t interfere with functioning. Other people stay on it until they reach a point in their lives when they feel ready to come off, then taper down slowly under medical supervision, often with good success. It’s a good medication, and the growing suspicion it might help treat depression is just icing on the cake.

There are two big roadblocks to wider use of suboxone, and both are enraging.

The first roadblock is the #@$%ing government. They are worried that suboxone, being an opiate, might be addictive, and so doctors might turn into drug pushers. So suboxone is possibly the most highly regulated drug in the United States. If I want to give out OxyContin like candy, I have no limits but the number of pages on my prescription pad. If I want to prescribe you Walter-White-level quantities of methamphetamine for weight loss, nothing is stopping me but common sense. But if I want to give even a single suboxone prescription to a single patient, I have to take a special course on suboxone prescribing, and even then I am limited to only being able to give it to thirty patients a year (eventually rising to one hundred patients when I get more experience with it). The (generally safe) treatment for addiction is more highly regulated than the (very dangerous) addictive drugs it is supposed to replace. Only 3% of doctors bother to jump through all the regulatory hoops, and their hundred-patient limits get saturated almost immediately. As per the laws of supply and demand, this makes suboxone prescriptions very expensive, and guess what social class most heroin addicts come from? Also, heroin addicts often don’t have access to good transportation, which means that if the nearest suboxone provider is thirty miles from their house they’re out of luck. The List Of Reasons To End The Patient Limits On Buprenorphine expands upon and clarifies some of these points.

(in case you think maybe the government just honestly believes the drug is dangerous – nope. You’re allowed to prescribe without restriction for any reason except opiate addiction)

The second roadblock is the @#$%ing rehab industry. They hear that suboxone is an opiate, and their religious or quasi-religious fanaticism goes into high gear. “What these people need is Jesus and/or their Nondenominational Higher Power, not more drugs! You’re just pushing a new addiction on them! Once an addict, always an addict until they complete their spiritual struggle and come clean!” And so a lot of programs bar suboxone users from participating.

This doesn’t sound so bad given the quality of a lot of the programs. Problem is, a lot of these are closely integrated with the social services and legal system. So suppose somebody’s doing well on suboxone treatment, and gets in trouble for a drug offense. Could be that they relapsed on heroin one time, could be that they’re using something entirely different like cocaine. Judge says go to a treatment program or go to jail. Treatment program says they can’t use suboxone. So maybe they go in to deal with their cocaine problem, and by the time they come out they have a cocaine problem and a heroin problem.

And…okay, time for a personal story. One of my patients is a homeless man who used to have a heroin problem. He was put on suboxone and it went pretty well. He came back with an alcohol problem, and we wanted to deal with that and his homelessness at the same time. There are these organizations called three-quarters houses – think “halfway houses” after inflation – that take people with drug problems and give them an insurance-sponsored place to live. But the catch is you can’t be using drugs. And they consider suboxone to be a drug. So of about half a dozen three-quarters houses in the local area, none of them would accept this guy. I called up the one he wanted to go to, said that he really needed a place to stay, said that without this care he was in danger of relapsing into his alcoholism, begged them to accept. They said no drugs. I said I was a doctor, and he had my permission to be on suboxone. They said no drugs. I said that seriously, they were telling me that my DRUG ADDICTED patient who was ADDICTED TO DRUGS couldn’t go to their DRUG ADDICTION center because he was on a medication for treating DRUG ADDICTION? They said that was correct. I hung up in disgust.

So I agree with the pessimistic picture painted by the article. I think we’re ignoring our best treatment option for heroin addiction and I don’t see much sign that this is going to change in the future.

But the health care system not being very good at using medications effectively isn’t news. I also thought this article was interesting because it touches on some of the issues we discuss here a lot:

by Scott Alexander, Slate Star Codex |  Read more:
[ed. See also: Against Rat Park.]

Wednesday, October 25, 2017


Joan Rabascall, IBM 360, 1967.
via:

I Interviewed at Five Top Companies in Silicon Valley, and Got Five Job Offers

In the five days from July 24th to 28th 2017, I interviewed at LinkedIn, Salesforce Einstein, Google, Airbnb, and Facebook, and got all five job offers.

It was a great experience, and I feel fortunate that my efforts paid off, so I decided to write something about it. I will discuss how I prepared, review the interview process, and share my impressions about the five companies.

How it started

I had been at Groupon for almost three years. It was my first job, and I had been working with an amazing team on awesome projects. We’d been building cool stuff, making an impact within the company, publishing papers and all that. But I felt my learning rate was being annealed (read: slowing down) while my mind was craving more. Also, as a software engineer in Chicago, I found myself drawn to the many great companies in the Bay Area.

Life is short, and professional life shorter still. After talking with my wife and gaining her full support, I decided to take action and make my first-ever career change.

Preparation

Although I’m interested in machine learning positions, the roles at the five companies differ slightly in title and interview process. Three are machine learning engineer roles (LinkedIn, Google, Facebook), one is a data engineer role (Salesforce), and one is a general software engineer role (Airbnb). Therefore I needed to prepare for three different areas: coding, machine learning, and system design.

Since I also had a full-time job, it took me 2–3 months in total to prepare. Here is how I prepared for the three areas.

Coding

While I agree that coding interviews might not be the best way to assess all your skills as a developer, there is arguably no better way to tell if you are a good engineer in a short period of time. IMO it is the necessary evil to get you that job.

I mainly used Leetcode and Geeksforgeeks for practicing, but Hackerrank and Lintcode are also good places. I spent several weeks going over common data structures and algorithms, then focused on areas I wasn’t too familiar with, and finally did some frequently seen problems. Due to my time constraints I usually did two problems per day.

Here are some thoughts:
  1. Practice, a lot. There is no way around it.
  2. But rather than doing all 600 problems on Leetcode, cover all types and spend time understanding each problem thoroughly. I did around 70 problems in total and felt that was enough for me. My thought is that if 70 problems isn’t helpful then you may not be doing it right and 700 won’t be helpful either.
  3. Go for the hardest ones. After those the rest all become easy.
  4. If stuck on one problem for over two hours, check out the solutions. More time might not be worth it.
  5. After solving one problem, check out the solutions. I was often surprised by how smart and elegant some solutions are, especially the Python one-liners (one example follows this list).
  6. Use a language that you are most familiar with and that is common enough to be easily explained to your interviewer.
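
As an illustration of the kind of elegant one-liner item 5 refers to, here is a small example of my own (not one drawn from the article): the classic “Single Number” problem, where every element of a list appears twice except one, reduces to a single XOR fold because paired values cancel out.

    from functools import reduce
    from operator import xor

    def single_number(nums):
        # Every value that appears twice cancels itself out under XOR,
        # leaving only the value that appears exactly once.
        return reduce(xor, nums)

    assert single_number([4, 1, 2, 1, 2]) == 4

The point is less the trick itself than the habit of studying solutions until this style of reasoning feels natural.
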
System design

This area is more closely related to actual working experience. Many questions can be asked during system design interviews, including but not limited to system architecture, object-oriented design, database schema design, distributed system design, scalability, etc.

There are many resources online that can help you with the preparation. For the most part I read articles on system design interviews, architectures of large-scale systems, and case studies.

Here are some tips that I found really helpful:
  1. Understand the requirements first, then lay out the high-level design, and finally drill down to the implementation details. Don’t jump to the details right away without figuring out what the requirements are.
  2. There are no perfect system designs. Make the right trade-off for what is needed.
With all that said, the best way to practice for system design interviews is to actually sit down and design a system, i.e. your day-to-day work. Instead of doing the minimal work, go deeper into the tools, frameworks, and libraries you use. For example, if you use HBase, rather than simply using the client to run some DDL and do some fetches, try to understand its overall architecture, such as the read/write flow, how HBase ensures strong consistency, what minor/major compactions do, and where LRU cache and Bloom Filter are used in the system. You can even compare HBase with Cassandra and see the similarities and differences in their design. Then when you are asked to design a distributed key-value store, you won’t feel ambushed.
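
To make the Bloom filter reference concrete, here is a minimal sketch of the idea in Python (my own illustration, not HBase’s actual implementation): a fixed bit array plus a few salted hash functions lets a store answer “this key is definitely absent” without touching disk, at the cost of occasional false positives.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = [False] * size_bits

        def _positions(self, item):
            # Derive num_hashes pseudo-independent positions by salting one hash.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            # False means definitely absent; True means "probably present".
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("row-key-42")
    assert bf.might_contain("row-key-42")      # no false negatives
    print(bf.might_contain("row-key-999"))     # usually False; rare false positive

Being able to produce and reason about something this small in an interview is usually more persuasive than just name-dropping the component.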

Many blogs are also a great source of knowledge, such as Hacker Noon and engineering blogs of some companies, as well as the official documentation of open source projects.

The most important thing is to keep your curiosity and modesty. Be a sponge that absorbs everything it is submerged into.

Machine learning

Machine learning interviews can be divided into two aspects, theory and product design.

Unless you have experience in machine learning research or did really well in your ML course, it helps to read some textbooks. Classics such as The Elements of Statistical Learning and Pattern Recognition and Machine Learning are great choices, and if you are interested in specific areas you can read more on those.

Make sure you understand basic concepts such as the bias-variance trade-off, overfitting, gradient descent, L1/L2 regularization, Bayes' theorem, bagging/boosting, collaborative filtering, dimensionality reduction, etc. Familiarize yourself with common formulas such as Bayes' theorem and the derivations of popular models such as logistic regression and SVM. Try to implement simple models such as decision trees and K-means clustering. If you put a model on your resume, make sure you understand it thoroughly and can comment on its pros and cons.
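
As an example of that last point, here is a bare-bones K-means sketch of my own (an illustration, not reference code) in NumPy; it alternates the assignment and update steps until the centers stop moving:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Cluster the rows of X into k groups by alternating assignment and update."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
        for _ in range(iters):
            # Assignment step: label each point with its nearest center.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: move each center to the mean of its assigned points.
            new_centers = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
            if np.allclose(new_centers, centers):  # centers stopped moving
                break
            centers = new_centers
        return labels, centers

    # Two well-separated blobs; K-means should recover them.
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    labels, centers = kmeans(X, k=2)
    print(centers)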

by Xiaohan Zeng, Medium | Read more:
[ed. If you're considering a degree in computer programming/engineering and feel drawn to Silicon Valley, good luck! (... then you have to find a place to live.)]

A helicopter lands on the Pan Am roof
Like a dragonfly on a tomb
And business men in button downs
Press into conference rooms
Battalions of paper minded males
Talking commodities and sales
While at home their paper wives
And paper kids
Paper the walls to keep their gut reactions hid
(lyrics)
   
    ~ Joni Mitchell, Harry's House/Centerpiece (youtube)

Image: via

Tropical Depressions

"I don't know how to be human any more."

On a wretched December afternoon in 2015, as raindrops pattered a planetary threnody on grayed-out streets, five thousand activists gathered around Paris’s Arc de Triomphe, hoping to force world leaders to do something, anything, that would save the future. Ellie was there. But what she remembers most from that afternoon during the UN’s Climate Change Conference wasn’t what happened in the open, in front of cameras and under the sky. As they took the Metro together, activists commiserated, briefly, before the moment of struggle and the need to be brave, over just how hopeless it could sometimes feel. People talked about bafflement, rage, despair; the sense of having discovered a huge government conspiracy to wipe out the human race—but one that everybody knows about and nobody seems willing to stop.

Twenty meters beneath the Paris streets, the Metro became a cocoon, tight and terrified, in which a brief moment of honest release was possible. Eventually someone expressed the psychic toll in words that have stuck with Ellie since. It was a chance remark: “I don’t know how to be human any more.”

Climate change means, quite plausibly, the end of everything we now understand to constitute our humanity. If action isn’t taken soon, the Amazon rainforest will eventually burn down, the seas will fester into sludge that submerges the world’s great cities, the Antarctic Ice Sheet will fragment and wash away, acres of abundant green land will be taken over by arid desert. A 4-degree Celsius rise in global temperatures would, within a century, produce a world as different from the one we have now as ours is from that of the Ice Age. And any humans who survive this human-made chaos would be as remote from our current consciousness as we are from that of the first shamanists ten thousand years ago, who themselves survived on the edges of a remote and cold planet. Something about the magnitude of all this is shattering: most people try not to think about it too much because it’s unthinkable, in the same way that death is always unthinkable for the living. For the people who have to think about it—climate scientists, activists, and advocates—that looming catastrophe evokes a similar horror: the potential extinction of humanity in the future puts humanity into question now. (...)

An Empty World

Many of the climate scientists and activists we’ve spoken with casually talk of their work with a sense of mounting despair and hopelessness, a feeling we call political depression. We’re used to considering and treating depression as an internal, medical condition, something that can be put right with a few chemicals to keep the brain swimming in serotonin; in conceptualizing our more morose turns of mind, modern medicine hasn’t come too far from the ancient idea that a melancholy disposition arises from too much black bile in the body. But when depressives talk about their experiences, they describe depression in terms of a lost relationship to the world. The author Tim Lott writes that depression “is commonly described as being like viewing the world through a sheet of plate glass; it would be more accurate to say a sheet of thick, semi-opaque ice.” A woman going by the pseudonym of Marie-Ange, one of Julia Kristeva’s analysands, describes a world hollowed out and replaced by “a nothingness . . . like invisible, cosmic, crushing antimatter.” In other words, the inward condition of depression is nothing less than a psychic event horizon; the act of staring at a vast gaping absence—of hope, of a future, of the possibility of human life. The depressive peeks into the future that climate change generates. Walter Benjamin, trying to lay out the contours of melancholic experience, saw it there. “Something new emerged,” he wrote: “an empty world.”

Freud diagnoses melancholia as the result of a lost object—a thing, a person, a world—and the fracture of that loss repeats itself within the psyche. It’s the loss that comes first. We do not think of political depression as a personal disorder, the state of being depressed because of political events; rather it’s the interiorization of our objective powerlessness in the world. We all feel, vaguely, that our good intentions should matter, that we should have some power to affect the things around us for the better; political depression is the hopelessness that meets the determination to do something in a society whose systems and instruments are designed to frustrate our ability to act.

But it’s not that, like Kafka’s heroes, we’re facing a vast and inscrutable apparatus whose operation seems to make no sense, trembling in front of a machine. What’s unbearable is that it does make sense; it’s the same logic that governs every second of our lives.

At times, the climate movement has insisted on burying this crushing truth under a relentless optimism: the disaster can be averted, all that’s needed is the political will, and we simply have no time to luxuriate in feeling sad. And all this is true. But as activists have begun to acknowledge, there needs to be room for sadness. As the veteran activist Danni Paffard—arrested three times in climate protests, once narrowly avoiding prison after she shut down a runway on Heathrow Airport—puts it to us, “the climate movement has recognized that this is an existential problem and has created spaces for people to talk things through,” to exist within the sense of grief, to work with political depression instead of repressing it. After all, as the writer Andrew Solomon says, “a lot of the time, what [depressives] are expressing is not illness, but insight, and one comes to think what’s really extraordinary is that most of us know about those existential questions and they don’t distract us very much.” There’s a substantial literature on “depressive realism”—the suspicion that depressed people are actually right. In one 1979 study by Lauren B. Alloy and Lyn Y. Abramson, it was found that when compared to their nondepressed peers, depressed subjects’ “judgements of contingency were surprisingly accurate.”

The depressive is, first of all, one who refuses to forget. In Freud’s account, while mourning is the slow release of emotional ties to something that’s vanished, melancholia is a refusal to let go. It’s not just that climate change is depressing; the determination to stop it has to begin from a depressive conviction: to not just forget that so much has been lost and more is going every day—to keep close to memory. Or as Paffard puts it, “You need to hold what’s at stake in your head enough to remember why it’s important to take action.”

La-La Land

In April this year, the Australian marine biologist Jon Brodie made headlines with his widely publicized despair. In an unprecedented tide, severe coral bleaching had destroyed much of the Great Barrier Reef; for Brodie, what had once been a worst-case scenario took horrifying form. “We’ve given up,” he told the Guardian. “It’s been my life managing water quality, we’ve failed. Even though we’ve spent a lot of money, we’ve had no success.” Brodie had spent decades warning the Australian government—which also funds his efforts—that something like this would happen if serious action didn’t take place, and being repeatedly disappointed as politicians refused to listen.

What do you do after the worst has already happened? He sounds stoic over the phone when we speak to him, as if he’s not fully aware of just how awful everything he says really is. “If you want to see the coral reefs,” he tells us, “go now. It’s got some good bits, but you have to see them now, because they won’t look like that in ten years’ time.”

Hope is difficult. “I work with young people,” Brodie explains. “Even up until five years ago, I felt I could inspire them. But now I have PhD students—I have trouble giving them a feeling that they can still do something. We’re in an era of science denial.” It’s not the inevitability of climate change that’s depressing; rather, it’s precisely the realization that it can be prevented—together with the day-to-day reckoning with the pettiness of what stands in the way. “When I was younger,” Paffard tells us, “I would walk through the City of London and look at people living their everyday lives and think, ‘We’re all just continuing as though everything is normal, as though the world isn’t about to end.’ And that used to freak me out and make me angry. But now it just makes me sad . . . it’s the moments where you let yourself think about it when you get overwhelmed by it.”

For Brodie, political stupidities blend seamlessly into the apocalypse that they create.

When I contrast what we had eighteen years ago to the idiots [in government] today, I feel sad and angry. They keep positing coal as the solution to our energy needs. They’re living in la-la land. In the end, [biological] life will go on. Maybe humans will go on a bit longer, but the Earth will still be here. (...)

Brodie seeks to cope with the loss of the coral reefs by creating an ecosystem within his control: “There is a sense of loss, but I do other things to compensate. I live on a large piece of land and I am growing a forest on it, so that gives me a sense of satisfaction—there are birds and butterflies.” You step back; you find other things; the moments we still have. Faced with the vastness of climate change, people reach for what’s smaller. “I keep myself so busy,” Paffard says, “so I don’t think about it on an existential level.” Lewis, who notes that “it’s very hard to plan long term because we live in a capitalist economy,” and that “people hedge their bets by consuming now and worrying about the future later,” says he resorts to similar strategies of full cognitive immersion in the many shorter-term tasks at hand.

“People on the outside of science think we sit around all day worrying about these big questions, and we don’t. Scientists are thinking about where their next grant is going to come from. You find intellectual stimulation in your work without thinking about the big picture. Recently I caught myself thinking, when the El Niño happened over the last couple of years, which gave us abnormally high temperatures—brilliant! I get to see what abnormally high temperatures do to the tropical forests I’m studying.”

With this response, Lewis says, he shocked himself. There’s an impersonality to the processes that are destroying our planet. He even has a kind of sympathy for fossil-fuel lobbyists: what they do is evil, but it’s hard to separate from the evil that’s everywhere around us. “A lot of people go in trying to change it on the inside and then end up adopting the culture, and don’t change things. Because it’s very difficult . . . There’s all sorts of psychological tricks people play on themselves to allow them to do things that are incredibly antisocial.” It’s difficult for anyone to change things, and the prospects for substantive change can be as hard for government-funded scientists and battle-hardened campaigners as for anyone else. “Would I want to live like someone in Papua New Guinea to avoid climate change?” Brodie wonders. “Probably not.”

Political depression means staring into a vastness, but one without grandeur or the sublime, one that’s almost invisible. When we wake up every morning, it’s just there, seeping into our bones. “I am amazed,” Paffard tells us, “by our inability to engage with things that are scary and bigger than us. It’s the minutiae that keep us going . . . it’s too big for us to hold in our minds.” What can we do? We’re only human.

by Sam Kriss and Ellie Mae O'Hagan, The Baffler |  Read more:
Image: Jacob Magraw

The Globalized Jitters

News Updates from the Edge

All the cool kids have anxiety disorders these days. I’m not claiming that this makes me one of them. Correlation, as we all know, does not imply causation, and I am reliably informed that the cool kids also understand Snapchat, wear floral jumpsuits, and know how to talk to people they fancy without pulling a face like a spaniel on acid. Nevertheless, if depression was the definitive diagnosis of the 1990s, anxiety is the mental health epidemic that makes the modern world what it is: overwhelmed, unstable, and in serious need of a decade-long lie down.

The ubiquity of anxiety disorders would shock anyone who hadn’t watched the news lately and understood quite how much most of us have to worry about. Nearly one in five Americans over the age of thirteen suffer from an anxiety disorder, with women twice as likely to be affected. Depression, anxiety, and related disorders have increased in the decade since the financial crash, and we can’t blame it all on Big Pharma. In June, New York Times writer Alex Williams sized up a slew of anxiety memoirs atop the bestseller lists and noted that “Prozac Nation Is Now the United States of Xanax.”

If dealing with your anxiety is now a lifestyle trend, talking about your anxiety is a publishing trend. Part of the reason that memoirs by professionally neurotic authors are now so bankable is that the last best job of a writer is to make the anxieties of the age beautiful, comprehensible and, if possible, lucrative. It helps that many of them—like Kat Kinsman’s Hi, Anxiety and Andrea Petersen’s On Edge: A Journey Through Anxiety—are also deliciously well-written. Anxiety, unlike depression, is exciting to write about, in part because it is a condition in which absolutely everything is suddenly way too exciting. This, by coincidence, is also a neat description of the geopolitics of the chaotic adolescence of the twenty-first century. In the immortal words of Horse ebooks: “Everything happens so much.”

The problem is both profound and profoundly modern. The problem, specifically, is that a lot of us are pretty freaked out pretty much all the time, and whether or not we’ve good reasons for it, the condition of constant panic is debilitating. Despite the rash of articles suggesting a novel condition known as “Trump-related anxiety,” this is a problem far bigger than the presidency. The gurning batrachian monster that crawled out of the mordant id of mass society to squat in the Oval Office was a symptom of our collective neurosis before he was a cause.

“Each phase of capitalism,” according to the activist collective Plan C,
has a particular affect which holds it together. . . . Today’s public secret is that everyone is anxious. Anxiety has spread from its previous localized locations—such as sexuality—to the whole of the social field. All forms of intensity, self-expression, emotional connection, immediacy, and enjoyment are now laced with anxiety. It has become the linchpin of subordination.

One major part of the social underpinning of anxiety is the multi-faceted omnipresent web of surveillance. The NSA, CCTV, performance management reviews, the Job Centre, the privileges system in the prisons, the constant examination and classification of the youngest schoolchildren. But this obvious web is only the outer carapace. We need to think about the ways in which a neoliberal idea of success inculcates these surveillance mechanisms inside the subjectivities and life-stories of most of the population. . . . We are failing to escape the generalized production of anxiety.
Anxiety may be a logical response to overwhelming stress and insecurity, but it is also a very easy way to keep people isolated, cowed, and compliant. Existing in a state of constant agitation is unpleasant, but it is also useful. Anxious people get things done—at least up to a certain point, at which productivity rapidly plummets in a condition known euphemistically as “burnout,” “stress,” or “complete and utter wall-gnawing, corner-scuttling, gibbering breakdown.”

In his forthcoming book Kids These Days, Malcolm Harris draws on a body of psychological research to observe:
Given what we know about recent changes in the American sociocultural environment, it would be a surprise if there weren’t elevated levels of anxiety among young people. Their lives center around production, competition, surveillance, and achievement in ways that were totally exceptional only a few decades ago. All this striving, all this trying to catch up and stay ahead—it simply has to have psychological consequences. The symptoms of anxiety aren’t just the unforeseen and unfortunate outcome of increased productivity and decreased labor costs; they’re useful. . . . Restlessness, dissatisfaction and instability—which Millennials report experiencing more than generations past—are negative ways of framing the flexibility and self-direction employers increasingly demand. . . . All of these psychopathologies are the result of adaptive developments. (...)

Reality Bites Back

If anything, nearly two decades into the new millennium, everything feels a little too real. In place of predictable consumerist anhedonia, instead of the blithe horizon of cardboard cut-out suburban security, reality has come back with a mouthful of razors ready to rip open the throats of anyone pretending they know what’s coming.

This age of anxiety did not begin with this presidency, nor does it end at the U.S. border, but there is something fundamentally Yankish about the culture of dogged, dead-eyed competition that produces it. It’s what happens when the American Dream becomes a nightmare you can’t wake from, and not just because you haven’t had a proper night’s sleep in years. It’s what happens when a society clings to a defining mythos that celebrates working until you drop, abhors poverty as evidence of moral failure, considers the provision of a basic safety net a pansy European affectation, and continues to call itself free.

Mental health is invariably political, even if the first available solutions are individual. Anxiety keeps us ready for a fight-or-flight response in a society that has all but outlawed both flight and fight. Today’s anxiety memoirists, in particular, are attached to the understanding of anxiety as a disorder with only a tangential relationship to real-world events. There is a focus on “raising awareness” of the condition, as if awareness by itself were any sort of answer. Those of us who live with anxiety have more awareness than we know what to do with, including of how our own brains can ambush us with the sudden black anticipation of imminent death. Knowing that you might at any point be incapacitated by panic is not a restful prospect for anyone who has been there. On top of your deadlines, your debts, and the crawling curiosity about which will fall apart first, your life plans or western civilization, you now also have to deal with being crazy.

But just because you’re crazy doesn’t mean you’re wrong. The problem is not that a very large number of us are very worried almost all the time; it is, rather, that there are an awful lot of things to worry about. This is true both at the micro level—how will you ever pay off the debts you ran up earning those qualifications you were told you needed for that job you can’t get?—and at the macro level, where we peek through our fingers at the wildfires and war refugees on the news and wonder if it’s worth starting any long books. Constant concern may be unhealthy, but it is not illogical. Of course, I would say that. I have an anxiety disorder.

On the other hand, simply recognizing that your fears are abundantly founded doesn’t make you less unwell—and choosing not to manage your anxiety is hardly an efficient way of enacting social change. No reputable shrink can write you a prescription for the psycho-social symptoms of late capitalism in its current form, and structural solutions for chronic despairing precarity are not available in over-the-counter form. But people still need to get up and go to work in the morning, whether or not it terrifies us. So what are we supposed to do?

The Organizing Cure

Well, we can always go shopping. The cultural response to this ambient panic can be measured in the desperate mood of wish-fulfillment we see in the sprawling market for emotionally stabilizing sloganeering and interior design. There is, for instance, a bewildering number of throw-pillows currently on sale begging for “good vibes only.” The aesthetics of pop culture are washed in a frantic blush-pastel color scheme, all seafoam green and soothing “millennial pink” and bare stripped-down minimalism, as if we’re desperate to decorate our box-room apartments like the inside of a psychiatric ward. And then there’s the pop culture meme that refuses to die—the endless regurgitation on T-shirts and tea towels and pocket pill-cutters of the cheesy Blitz-era slogan “keep calm and carry on,” as if either approach were a remotely appropriate response to the mounting crises that are engulfing the common weal. The last thing any of us needs to do is keep calm.

by Laurie Penny, The Baffler |  Read more:
Image: Shreya Chopra

Tuesday, October 24, 2017


The Circleville Herald, Ohio, June 29, 1956
via: