Tuesday, March 13, 2018

Democrats and the Crisis of Legitimacy

The American electoral system, and with it what passes for representative democracy, is facing a crisis of legitimacy reflected in continued fallout from the 2016 election. The duopoly political Parties, Democrats and Republicans, have both experienced mass exoduses for reasons specific to each. Because they have effective control over which candidates and programs get put forward in elections, they must be gotten out of the way for constructive political resolution to be possible.

The Republican Party saw a mass exodus of registered voters when George W. Bush’s war against Iraq became a conspicuous quagmire. By the time of the financial crisis that marked the onset of the Great Recession, some fair number had registered as Democrats while others dropped their duopoly Party affiliation to become what are implausibly called ‘independents.’

When it became apparent that the Obama administration was intent on restoring the forces of economic repression (Wall Street and corporate-state plutocracy), the Democrats saw their own mass exodus. Against the storyline of competing interests, registered voters fled both Parties. By implication, these mass exoduses suggest that neither duopoly Party represents the programs and candidates of interest to voters.

These mass exoduses have several implications: (1) with voters fleeing both duopoly Parties, it is the political system that has lost credibility, (2) the back-and-forth of faux ‘opposition’ that provided the illusion of political difference has lost potency as a driver of domestic politics and (3) charges that foreign influence determined the 2016 electoral outcomes are wholly implausible when placed in the context of the scale of voter disaffection with the duopoly Party system.

For instance, 71% of eligible voters didn’t vote for the Democratic Party candidate, and 73% didn’t vote for Donald Trump (Clinton won the popular vote). Ninety million eligible voters (40%) didn’t cast a ballot at all. It makes sense to present outcomes in terms of what voters didn’t do because (1) the duopoly Parties control which candidates and programs are put forward and (2) voters have fled the duopoly Party system rather than simply switching Parties.

The political problem for the national Democrats is that one can endorse the very worst that can be said about their alleged opposition, the Republicans, without raising the Democrats’ own standing. Between 2009 and 2016 somewhere between eight and ten million registered voters, 20% of registered Democrats, fled the Democratic Party. In conjunction with the loss of over 1,000 legislative seats over this same period, the Democrats were in the midst of a full-blown political crisis going into the 2016 election.

But here’s the punchline: the Republicans were also in the midst of a political crisis of their own. As disaffection with Barack Obama’s programs took hold, national Republicans saw little benefit (top graph). Voters didn’t simply switch Parties. They left both Parties. The implied motivation isn’t that the ‘opposition’ Party had better ideas. Had this been the case, total Party affiliation would have remained largely unchanged. But that isn’t the case. For a two-Party political system, such abandonment of the ‘center’ is a textbook definition of a crisis of legitimacy.

Over the prior century the duopoly Party strategy has been to maintain control of the political system by controlling the electoral process regardless of levels of political disaffection. The Democrats’ ‘crises’ of 1968 and 1972 were a result of systemic inflexibility in the face of widespread political disaffection. Phrased differently, tightly controlled electoral ‘choices’ are inadequate during crises of systemic legitimacy. Their ultimate response was to lead a right-wing coup against the political accommodations of the New Deal and the Great Society.

Back in the present, the strategic term for newly unaffiliated voters is ‘independent,’ as if the wholesale, willful abandonment of Party affiliation solved the problem of Party control over which candidates do and don’t get to run for office. By 2016 over 40% of registered voters, including those who had recently left the Democratic Party, were self-defined as unaffiliated with either duopoly Party (graph below). And this figure leaves aside thirty million eligible voters who aren’t registered to vote.


In response, political pollsters created categories of ‘Democratic-leaning’ and ‘Republican-leaning’ as if (1) the mass exodus from Party affiliation were a fashion statement rather than one of political disaffection and (2) duopoly Party control over the electoral process doesn’t effectively preclude unaffiliated candidates and Parties from running for office. Including unaffiliated voters in duopoly Party tallies is a strategy of systemic legitimation in that it implies choices outside of the duopoly Party system that don’t exist.

The institutional response has been to re-conjure the ‘foreign influences’ that so well supported American imperial endeavors in the past. For those short on nostalgia for the days of MAD (Mutual Assured Destruction) and genocidal slaughters, what hasn’t yet been well explained is (1) how foreign meddling prevented one hundred and sixty-five million eligible voters (71%) from voting for Hillary Clinton and (2) how this external meddling differs from internal meddling in the form of voter disenfranchisement and the legalized bribery that ‘motivates’ American politics.

by Rob Urie, Counterpunch |  Read more:
Images: Carlos Pacheco and Gallup

Kitty Hawk: Autonomous Flying Taxis


Autonomous flying taxis just took one big step closer to leaping off the pages of science fiction and into the real world, thanks to Google co-founder Larry Page’s Kitty Hawk.

The billionaire-backed firm has announced that it will begin the regulatory approval process required for launching its autonomous passenger-drone system in New Zealand, after conducting secret testing under the cover of another company called Zephyr Airworks.

The firm’s two-person craft, called Cora, is a 12-rotor plane-drone hybrid that can take off vertically like a drone, but then uses a propeller at the back to fly at up to 110 miles an hour for around 62 miles at a time. The all-electric Cora flies autonomously up to 914 metres (3,000ft) above ground, has a wingspan of 11 metres, and has been eight years in the making.

by Samuel Gibbs, The Guardian |  Read more:
Image: Kitty Hawk
[ed. It was only a matter of time. I hope there are private versions.]

Monday, March 12, 2018

On the Mysterious, Powerful Effects of Placebos

In 1957 psychologist Bruno Klopfer reported on the amazing case of a man he called Mr. Wright. Mr. Wright was suffering from advanced cancer of the lymph nodes. Tumors the size of oranges studded his skeleton and wound throughout his organs. He was so near death that he was more malignancy than man, his face pale on the pillow, an IV plunged into one of his stringy veins.

Some people, as they near the end of a long battle with cancer, their hair gone and their teeth loose in their sockets, are ready to exit, exhausted by the demanding treatments, by the burn of radiation and the poison of chemotherapy. But Mr. Wright, because he had a severe anemic condition, was not eligible for the treatments of the day, which were radiotherapy and nitrogen mustard. He had wasted away all on his own, without the help of cures that also kill. But his will to live, his desire to see the day, was strong, and the shadow of death that fell across his hospital bed, a dark hole into which he would soon dwindle and disappear, terrified him.

Then one day Mr. Wright—“febrile, gasping for air, completely bedridden,” according to his doctor—overheard people talking about a new cancer cure called Krebiozen, a horse serum, which was being tested at the very hospital he was in. Hope sprang up like a stalk inside him. He begged his doctor for a dose, and his doctor, although doubting the drug would help at this late stage, nevertheless loaded his syringe and took his patient’s wasted arm.

Three days passed as Mr. Wright lay quietly in his hospital bed. On the third morning after the shot of Krebiozen had been administered, his doctor returned to examine him, and an incredible thing had happened. Before the doctor arrived, Mr. Wright had swung his feet over his hospital bed and for the first time in months stood up straight on the floor, strong enough to support himself, to walk, even to stride, which he did, out of his room and down the ward to the station around which the nurses flurried. The doctor found this man who had been at death’s door now joking, flirting, cavorting. X-rays showed that the tumors had shrunk from the size of oranges to golf balls—having melted “like snowballs on a hot stove.”

No one could quite believe it, but no one could deny it either, because here was the man, once washed out but now ruddy with health and hope. Within ten days Mr. Wright was discharged from the hospital, cancer-free, and he went home to pick up where he had left off before cancer came to claim him, stepping back into his life as if slipping into a perfectly fitted suit. He was alive and loving it.

Days passed, weeks passed, and Mr. Wright remained free of malignancies. Within two months, however, reports came out in the news saying that the Krebiozen trial had concluded and the drug was worthless. Soon after that Mr. Wright’s tumors returned and he was back in the hospital, once more staring at the drain hole of death, at the shadow falling across his bed.

His doctor then did something that doctors today would never be permitted to do. He told Mr. Wright a story, a lie. The news reports, the doctor said, were wrong. Krebiozen was in fact a potent anticancer drug. Why, then, Mr. Wright wondered, had he relapsed, and so badly? Because, his doctor said, Mr. Wright had unfortunately been given an injection of the stuff from a weak batch, but the hospital was expecting a new shipment and it was guaranteed to be two times stronger than even the most potent Krebiozen to date. Mr. Wright’s doctor delayed administering anything to his patient so that his anticipation would build. After several days had passed, the doctor rolled up Mr. Wright’s sleeve; Mr. Wright offered his arm, and the doctor gave his patient a new injection—of pure water.

Again hope made an entrance. Mr. Wright let all his tumors go. Once again they shrank and disappeared until no trace of them could be found in his body, and once again he left the hospital. It’s not hard to picture him dancing his way through his days. A second remission! Mr. Wright lived for a further two months without symptoms and then, unfortunately for him, came another news report. The American Medical Association, after numerous tests on patients, issued its final verdict on Krebiozen, confidently declaring the drug to be useless. Mr. Wright’s tumors reappeared, and this time, within two days after his readmission to the hospital, he was dead.

The Pharmaceutical Factory in Our Heads


In the 1970s came the discovery of endorphins, which are opiate-like chemicals the body manufactures all on its own and which play a key role in the placebo effect, especially in cases of pain. The discovery led scientists to uncover a rich supply of nerves linking the brain to the immune system, which in turn resulted in the rise of a new branch of medicine called psychoneuroimmunology. Studies in this new medicine suggested that placebos may work to decrease pain—something they are especially good at doing—by increasing endorphins in the brain.

At the University of California, San Francisco, for example, in a 1978 double-blind experiment with young people who had recently had their wisdom teeth removed, most patients were given a placebo and reported significantly less pain. Then some of the subjects were given naloxone, a drug that is typically administered in emergency rooms in cases when a patient has overdosed on heroin or morphine. Naloxone works by blocking the opiate, thereby immediately reversing the effect of the deadly ingestion. In this study with the wisdom teeth patients, once they were given naloxone, the pain relief they had experienced as a result of the placebo suddenly vanished. Once again the young people were in pain. This outcome provided researchers with a strong suggestion as to how placebos might work. It must indeed be that they released the brain’s natural opiates—endorphins—and that as long as this release wasn’t blocked by naloxone, or by some other organic means, then these endorphins would allow us to find real relief.

Blue and Pink Pills

The form of the placebo has implications for its function. For instance, when it comes to pills, scientists have discovered that blue placebos tend to make people drowsy, whereas red or pink placebos induce alertness. In the 1970s several professors at the University of Cincinnati took 57 second-year medical students and divided them into four groups. Two groups received pink tablets and two groups blue tablets, and of the two groups receiving the same color, one group received one pill and the other group two pills. All of the tablets were inert. The students then listened to a one-hour lecture and after that went back to the lab to fill out forms rating their moods.

The results? The students who had received two tablets reported more intense responses than the students who had taken only one tablet. And of the students who had taken the blue tablets, 66 percent felt less alert after the lecture compared to only 26 percent of students who had taken the pink tablets. Medical anthropologist Daniel Moerman believes that the color of a capsule or a pill has a strong significance to the imbiber. Blues and greens are cool colors while reds and pinks are hot colors. A study in Texas showed that red and black capsules were ranked as strongest while white ones were weakest. “Colors are meaningful,” Moerman writes, “and these meanings can affect the outcome of medical treatment.” Blue pills make us drowsy while carmines perk us up. And large pills have more power over us than medium-sized ones, especially if they are multicolored.

The research on the size and color of pills makes one wonder if we might also be more strongly affected by pills embossed or engraved with a name: Tagamet, Venlafaxine, Zyprexa, Abilify, Concerta. Are drug companies not hoping that if they carefully and suggestively label their medicines, we will give their pills extra credence? Clearly the name matters. It is always multisyllabic and often suggests technological prowess. You cannot call a placebo Tim, for instance. The name should bring to mind test tubes and Bunsen burners with their petal-shaped flames. The name should also connote, somewhere in its utterance, the pure peace of good health, the abilities with which Abilify will endow you, the consonance of Concerta, when all the world makes solid sense.

Even more persuasive than pills, at least in the treatment of headaches, are placebo injections. A meta-analysis of a drug called Imitrex—which, when first introduced, was available only as an injection and then later as a capsule or a nasal spray—looked at 35 trials treating migraine sufferers with Imitrex versus placebo and found that, of those patients taking a placebo tablet, only 25.7 percent reported that their headache was mild or gone, compared to 32.4 percent of those treated with a placebo injection reporting relief. This may seem like a small difference, but it is statistically significant and could be expected to happen by chance only twice if the experiment were repeated a thousand times. Over and over, research has revealed that when patients are injected with an inert substance they report more pain relief than those who have simply swallowed a pill. Perhaps there is something about the needle, the press of the plunger as the supposed miracle liquid seeps below the skin and into the muscle, finding its way into the circulatory system, and at last to the wet red charm that sits within its curved cage. While a pill can be quiet, simple, its magic subtler and singular, there is drama in a shot.

Hope

Of course, none of this so far answers the question of exactly how endorphins get released in the first place. It seems to have something to do with belief, with hope, with faith. Even the smallest spark of it helps our heads to secrete chemicals so soothing that their analogues are illegal around the world. People who think they are drinking alcohol, but in fact are not, will nevertheless get tipsy. The opposite of hope is also very telling. Where it is absent, or unknown, medication sometimes fails to work. Valium, for instance, has been shown to affect a person only if he knows he is taking it.

But while there have been many studies done to predict the personality type of placebo responders, they have proven inconclusive. If only we knew! Then we would have a clear class of people to whom we could confidently feed inert substances and who could be assured of getting real relief. No such study, however, has been able to find a personality type, or rather, it would be more accurate to say that all of the studies conflict with one another. Some claim that people who respond to placebos have neurotic personality types; others claim that introverts are more likely to be fooled by placebos; while research from Britain has found that extroverts are the group most susceptible to placebos. Scientists have claimed at different times that placebo responders are both quiet and ebullient; that they have poor ego formation and superegos the size of a city; that they are judgmental as well as easily swayed; that they are trusting and skeptical. The net sum suggests that there is no definitive profile of a person likely to respond to a placebo. Everyone is a responder—maybe not all of the time but some of the time, in some situations, in great pain or fear, perhaps, or with wants so large that they outstrip the self who holds them. We do not know. The only thing we can say for sure is that 30 to 60 percent of the population can be fooled by a trick, by a sugar pill, by water, by an injection of saline or a bright pink sphere glittering in the palm.

by Lauren Slater, LitHub |  Read more:
Image: uncredited

Costco Auto Program

Shopping for a car can be an overwhelming process.

If, say, you know you're looking for an SUV, you have to determine the brand, model, and model year you'd like, as well as the dealership you want to use, whether you'd like to buy new or used, and whether you want to buy or lease. Where do you start your research? Which sources can you trust? What's a reasonable price for a given model?

The Costco Auto Program attempts to eliminate some of that uncertainty. Costco members can use the program's website to research and compare vehicles, calculate monthly payments, and get a discount at participating dealerships. While the size of the discount varies based on the vehicle's class, brand, and model, a Costco Auto Program spokesperson told Business Insider that the average discount is over $1,000 off a vehicle's average transaction price.

And since the program uses the same customers as Costco's retail operation, it has plenty of reasons to vet dealers and salespeople so their customers don't end up feeling like they were tricked — and putting the blame on Costco.

"We're not just providing leads to dealers. We're creating a referral," Costco Auto Program senior executive Rick Borg told Business Insider.

Here's how using the Costco Auto Program is different than the average car shopping process:

1. You have to be a Costco member

This may sound obvious, but while non-members can use some of the Auto Program's research tools, you need to be a Costco member to be eligible for the discounted price.

2. Multiple strands of research are condensed into one place

One of the most difficult parts of car shopping is figuring out where to start and end your research, especially if you don't read car news and reviews for fun.

The Costco Auto Program brings reviews, safety ratings, a financial calculator, and a vehicle comparison tool under one roof. While it never hurts to compare research from multiple sources, the Costco Auto Program's website gives customers a good place to start.

3. Your choice of dealerships and salespeople is limited


According to Borg, Costco works with one dealership per brand in a defined geographic area around a given Costco warehouse. And at each participating dealership, only a handful of salespeople are authorized to work with customers shopping through the Auto Program.

Borg said Costco picks dealerships based on their prices, customer satisfaction index (CSI) scores, and reputations on social media. And authorized salespeople are also evaluated based on their CSI scores and must work at their dealership for at least six months before being eligible for the program.

But the limited number of dealerships and salespeople makes things a little more difficult for customers who don't end up satisfied with the first dealership Costco recommends to them. While Borg said Costco can point customers to other participating dealerships if they don't like the first one they're sent to, they may not be geographically convenient.

4. Costco has already negotiated the price

Negotiating the price on your car can be an intimidating process. The dealership has much of the information — inventory, the dealership or salesperson's proximity to their quarterly goals, the average discount customers receive — you need to negotiate the lowest possible price.

Borg said Costco takes a holistic approach when negotiating prices with their participating dealerships, looking at national and local prices for given models, as well as the prices customers can find through other discount programs to determine the discount its members should receive. And since it has a large membership base it can funnel to selected dealers, it has more leverage than any individual shopper.

by Mark Matousek, Business Insider | Read more:
Image: Ted S. Warren/AP

Sunday, March 11, 2018

Whose University Is It Anyway?

Toward the end of his life George Orwell wrote, “By the age of 50, everyone has the face he deserves.” The same is true of societies and their universities. By the time a society reaches its prime, it has the university it deserves. We have arrived there now in Canada, in the middle age of our regime, well past our youth but not quite to our dotage. What do we see when we look into the mirror of our universities? What image do we find there? Lots of smiling students, lots of talk of “impact” and “innovation,” more than one shovel going into the ground, a host of new community and industry partnerships to celebrate. But whose image is that really? Who created it and whom does it serve?

Administrators control the modern university. The faculty have “fallen,” to use Benjamin Ginsberg’s term. It’s an “all-administrative” institution now. Spending on administrators and administration exceeds spending on faculty, administrators outnumber faculty by a long shot, and administrative salaries and benefit packages, particularly those of presidents and other senior managers, have skyrocketed over the last 10 years. Even more telling perhaps, students themselves increasingly resemble administrators more than professors in their ambitions and needs. Safety, comfort, security, quality services, first-class accommodations, guaranteed high grades, institutional brand, better job placements, the market value of the credential — these are the things one hears students demanding these days, not truth, justice, and intelligence. The traditional language of “professors” and “students” still exists, though “service provider” and “consumer” are making serious bids to replace them. The principles of collegial governance and joint decision-making are still on the books, but they are no longer what the institution is about or how it works.

The revolution is over and the administrators have won. But the persistence of traditional structures and language has led some to think that the fight over the institution is now just beginning. This is a mistake. As with most revolutions, open conflict occurs only after real power has already changed hands. In France, for instance, the bourgeoisie were able to seize control of the regime because in a sense they already had it. The same is true of the modern university. Administrators have been slowly taking control of the institution for decades. The recent proliferation of books, essays, and manifestoes critiquing this takeover creates the impression that the battle is now on. But that is an illusion, and most writers know it. All the voices of protest, many of them beautiful and insightful, all of them noble, are either cries of the vanquished or merely a dogged determination to take the losing case to court.

So what’s to do? Keep fighting and risk being canned? Admit the world has changed and join them? Concede defeat and quit?

These are all plausible responses, some uneasy mixture of which is likely what most of us use each day to survive. Personally, I’m less strident than the activists but more active than the pessimists. My own proposal is thus old-fashioned but also mildly seditious: I suggest we think about this change in the university in order to reach some understanding of what it means. Then we can act as we see fit, though without any illusions about consequences.

In order to do this I propose a test. A favorite trope among the administrative castes is accountability. People must be held accountable, they tell us, particularly professors. Well, let’s take them at their word and hold them accountable. How have they done with the public trust since having assumed control of the university?

There is more than a little irony in this test. One of the most significant changes initiated in Canadian universities by the new administrative caste is precisely a reversal of traditional roles of accountability. In the traditional university, professors were “unaccountable.” The university was a sacred space where they were at liberty to pursue with students and colleagues their fields of inquiry without coercion or interference. This doesn’t mean they were free without qualification, of course. Professors were deeply accountable, but in a sense that went far beyond the reach, ambition, and perhaps even the interests of the administrative caste — they were accountable to discover and then to tell the truth, and to encourage their students to do the same. Assessing their abilities and accomplishments in this regard was a matter of judgment and so could not be quantified; it could be exercised only by those capable of it. A mechanism was therefore introduced to ensure this judgment was reached before the university committed to a faculty member permanently. After roughly 15 years of undergraduate and postgraduate study, and then a long period of careful professional observation and assessment, in most universities lasting five to six years, only those professors who proved themselves worthy were granted tenure and allowed to continue their teaching and research in pursuit of this beautiful goal.

Administrators, on the other hand, were always held accountable precisely because their responsibilities were administrative in nature and therefore amenable to measurement and regular public audit. They were responsible to ensure the activities of students and professors were not interfered with and to manage the institution’s financial affairs. They were, in this sense, stewards of the sacred space, not its rulers.

In the contemporary university these roles have been reversed. Faculty members are the ones who are now accountable, but no longer to their peers and students and no longer regarding mastery of their subjects. Instead, they are accountable to administrators, who employ an increasingly wide array of instruments and staff to assess their productivity and measure their performance, all of which are now deemed eminently quantifiable. In place of judgment regarding the quality of their work we now have a variety of “outcomes” used as measures of worth. Student evaluations and enrollments (i.e., popularity), learning as determined by “rubrics,” quantity of publications, amount of research dollars, extent of social “impact” are the things that count now. In other words, only things you can quantify and none of which require judgment.

The administrators who protested so vociferously the lack of accountability of professors have now assumed the position themselves. Administrators are virtually untouchable today. Their value to the institution is assumed to be so great that it cannot be measured and cannot be subject to critical assessment. This explains in part their metastatic growth within the institution. University presidents having trouble “transitioning” to their new positions? Administrators having trouble administrating? No problem. What we need is a “Transition Committee” — that is to say, more administration — and for them all to be given ever more power in the governance of the institution.

Ask about virtually any problem in the university today and the solution proposed will inevitably be administrative. Why? Because we think administrators, not professors, guarantee the quality of the product and the achievement of institutional goals. But how is that possible in an academic environment in which knowledge and understanding are the true goals? Without putting too fine a point on it, it’s because they aren’t the true goals any longer. With the exception of certain key science and technology programs in which content proficiency is paramount, administrative efficiency and administrative mindedness are the true goals of the institution. Liberal arts and science programs are quietly being transmogrified through pressure from technology and technological modes of education so that their “content” is increasingly merely an occasion for the delivery of what the university truly desires — well-adjusted, administratively minded people to populate the administrative world we’ve created for them. The latent assumption in all this is that what is truly important is not what students know or how intelligent they are, but how well and how often they perform and how finely we measure it.

If you think I exaggerate, consider the deliverables universities are forever touting to students today: “collaboration,” “communication,” “critical analysis,” “impact.” All abstract nouns indicating things you can do or have, but not a word about what you know or who you are. No promise to teach you history or politics or biology or to make you wise or thoughtful or prudent. Just skills training to equip you to perform optimally in a competitive, innovative world.

Western capitalist societies have come into an inheritance in this respect. Friedrich Engels infamously remarked that in a truly communist state “the government of persons” would be replaced by the “administration of things.” The West has done the East one better and achieved its goal without the brutality that was the East’s undoing. We are now all happy, efficient, administrative objects producing and functioning within the Western technocratic social organism.

by Ron Srigley, LARB | Read more:
Image: uncredited via

Tending the Digital Commons: A Small Ethics Toward the Future

Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse. I started thinking about this situation a few years ago when Tumblr—a platform devoted to a highly streamlined form of blogging, with an emphasis on easy reposting from other accounts—was bought by Yahoo. I was a heavy user of Tumblr at the time, having made thousands of posts, and given the propensity of large tech companies to buy smaller ones and then shut them down, I wondered what would become of my posts if Yahoo decided that Tumblr wasn’t worth the cost of maintaining it. I found that I was troubled by the possibility to a degree I hadn’t anticipated. It would be hyperbolic (not to say comical) to describe my Tumblr as a work of art, but I had put a lot of thought into what went on it, and sometimes I enjoyed looking through the sequence of posts, noticing how I had woven certain themes into that sequence, or feeling pleasure at having found interesting and unusual images. I felt a surge of proprietary affection—and anxiety.

Many personal computers have installed on them a small command-line tool called wget, which allows you to download webpages, or even whole websites, to your machine. I immediately downloaded the whole of my Tumblr to keep it safe—although if Tumblr did end up being shut down, I wasn’t sure how I would get all those posts back online. But that was a problem I could reserve for another day. In the meantime, I decided that I needed to talk with my students.
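[ed. Since wget does all the heavy lifting here, a one-line example may help. The exact flags vary with the version of wget installed, and the address below is a placeholder rather than anyone’s actual blog, but a mirroring command along these lines will pull down an entire site, including its images and stylesheets, and rewrite the links so the copy works offline:

    # Mirror a site for safekeeping; substitute the blog you want to preserve.
    wget --mirror --convert-links --page-requisites --no-parent --wait=1 https://example.tumblr.com/

The result is a folder of ordinary HTML files that any browser can open, with no company’s goodwill required.]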

I was teaching a course at the time on reading, writing, and research in digital environments, so the question of who owns what we typically think of as “our” social media presence was a natural one. Yet I discovered that these students, all of whom were already interested in and fairly knowledgeable about computing, had not considered this peculiar situation—and were generally reluctant to: After all, what were the alternatives? Social media are about connecting with people, one of them commented, which means that you have to go where the people are. So, I replied, if that means that you have to give your personal data to tech companies that make money from it, that’s what you do? My students nodded, and shrugged. And how could I blame them? They thought as I had thought until about forty-eight hours earlier; and they acted as I continued to act, although we were all to various degrees uneasy about our actions.

In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt. (...)

Learning to Live Outside the Walls

The first answers to these questions are quite concrete. This is not a case in which a social problem can profitably be addressed by encouraging people to change their way of thinking—although as a cultural critic I naturally default to that mode of suasion. It goes against my nature to say simply that certain specific changes in practice are required. But this is what I must say. We need to revivify the open Web and teach others—especially those who have never known the open Web—to learn to live extramurally: outside the walls.

What do I mean by “the open Web”? I mean the World Wide Web as created by Tim Berners-Lee and extended by later coders. The open Web is effectively a set of protocols that allows the creating, sharing, and experiencing of text, sounds, and images on any computer that is connected to the Internet and has installed on it a browser that can interpret information encoded in conformity with these protocols.

In their simplicity, those protocols are relentlessly generative, producing a heterogeneous mass of material for which the most common descriptor is simply “content.” It took a while for that state of affairs to come about, especially since early Internet service providers like CompuServe and AOL tried to offer proprietary content that couldn’t be found elsewhere, after the model of newspapers or magazines. This model might have worked for a longer period if the Web had been a place of consumption only, but it was also a place of creation, and people wanted what they created to be experienced by the greatest number of people possible. (As advertising made its way onto the Web, this was true of businesses as well as individuals.) And so the open Web, the digital commons, triumphed over those first attempts to keep content enclosed.

In the relatively early years of the Web, the mass of content was small enough that a group of people at Yahoo could organize it by category, in something like a digital version of the map of human knowledge created by the French Encyclopedists. But soon this arrangement became unwieldy, and seekers grew frustrated with clicking their way down into submenus only to have to click back up again when they couldn’t find what they wanted and plunge into a different set of submenus. Moreover, as the Web became amenable to more varied kinds of “content,” the tasks of encoding, uploading, and displaying one’s stuff became more technically challenging; not all web browsers were equally adept at rendering and displaying all the media formats and types. It was therefore inevitable that companies would arise to help manage the complexities.

Thus the rise of Google, with its brilliantly simple model of keyword searching as the most efficient replacement for navigating through tree-like structures of data—and thus, ultimately, the rise of services that promised to do the technical heavy lifting for their users, display their content in a clear and consistent way, and connect them with other people with similar interests, experiences, or histories. Some of these people have become the overlords of social media.

It is common to refer to universally popular social media sites like Facebook, Instagram, Snapchat, and Pinterest as “walled gardens.” But they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell. Some of these factories (Twitter, Tumblr, and more recently Instagram) have transparent walls, by which I mean that you need an account to post anything but can view what has been posted on the open Web; others (Facebook, Snapchat) keep their walls mostly or wholly opaque. But they all exercise the same disciplinary control over those who create or share content on their domain.

I say there is no financial compensation for users, but many users feel themselves amply compensated by the aforementioned provisions: ease of use, connection with others, and so on. But such users should realize that everything they find desirable and beneficial about those sites could disappear tomorrow and leave them with absolutely no recourse, no one to whom to protest, no claim that they could make to anyone. When George Orwell was a scholarship boy at an English prep school, his headmaster, when angry, would tell him, “You are living on my bounty.” If you’re on Facebook, you are living on Mark Zuckerberg’s bounty.

This is of course a choice you are free to make. The problem comes when, by living in conditions of such dependence, you forget that there’s any other way to live—and therefore cannot teach another way to those who come after you. Your present-day social-media ecology eclipses the future social-media ecology of others. What if they don’t want their social lives to be bought and sold? What if they don’t want to live on the bounty of the factory owners of Silicon Valley? It would be good if we bequeathed to them another option, the possibility of living outside the walls the factory owners have built—whether for our safety or to imprison us, who can say? The open Web happens outside those walls.

A Domain of One’s Own

For the last few years we’ve been hearing a good many people (most of them computer programmers) say that every child should learn to code. As I write these words, I learn that Tim Cook, the CEO of Apple, has echoed that counsel. Learning to code is a nice thing, I suppose, but should be far, far down on our list of priorities for the young. Coding is a problem-solving skill, and few of the problems that beset young people today, or are likely to in the future, can be solved by writing scripts or programs for computers to execute. I suggest a less ambitious enterprise with broader applications, and I’ll begin by listing the primary elements of that enterprise. I think every young person who regularly uses a computer should learn the following:
how to choose a domain name
how to buy a domain
how to choose a good domain name provider
how to choose a good website-hosting service
how to find a good free text editor
how to transfer files to and from a server
how to write basic HTML, including links to CSS (Cascading Style Sheet) files
how to find free CSS templates
how to fiddle around in those templates to adjust them to your satisfaction
how to do basic photograph editing
how to cite your sources and link to the originals
how to use social media to share what you’ve created on your own turf rather than create within a walled factory
One could add considerably to this list, but these, I believe, are the rudimentary skills that should be possessed by anyone who wants to be a responsible citizen of the open Web—and not to be confined to living on the bounty of the digital headmasters.
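[ed. To make the HTML-and-CSS item above concrete, here is a minimal sketch of an index.html a beginner might hand-write and upload; the page title, the stylesheet name, and the link target are arbitrary examples, not part of any standard:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>My Own Corner of the Open Web</title>
        <!-- style.css is a separate file you write yourself or adapt from a free template -->
        <link rel="stylesheet" href="style.css">
      </head>
      <body>
        <h1>Hello from my own domain</h1>
        <p>This page lives on a server I pay for, at an address I chose.</p>
        <!-- cite your sources and link to the originals -->
        <p>Prompted by an <a href="https://hedgehogreview.com/">essay in The Hedgehog Review</a>.</p>
      </body>
    </html>

Transfer index.html and style.css to your hosting service and the page is live, with no platform deciding how it looks or who may see it.]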

There is, of course, no way to be completely independent online, either as an individual or a community: This is life on the grid, not off. Which means that anyone who learns the skills listed above—and even those who go well beyond such skills and host their websites on their own servers, while producing electricity on their own wind farms—will nevertheless need an Internet service provider. I am not speaking here of complete digital independence, but, rather, independence from the power of the walled factories and their owners.

A person who possesses and uses the skills on my list will still be dependent on organizations like ICANN (Internet Corporation for Assigned Names and Numbers) and its subsidiary IANA (Internet Assigned Numbers Authority), and the W3C (World Wide Web Consortium). But these are nonprofit organizations, and are moving toward less entanglement with government. For instance, IANA worked for eighteen years under contract with the National Telecommunications and Information Administration, a bureau of the US Department of Commerce, but that contract expired in October 2016, and IANA and ICANN are now run completely by an international community of volunteers. Similarly, the W3C, which controls the protocols by which computers on the Web communicate with one another and display information to users, is governed by a heterogeneous group that included, at the time of writing, not only universities, libraries, and archives from around the world but also Fortune 500 companies—a few of them being among those walled factories I have been warning against.

In essence, the open Web, while not free from governmental and commercial pressures, is about as free from such pressures as a major component of modern capitalist society can be. And indeed it is this decentralized organizational model, coupled with heavy reliance on volunteer labor, that invites the model of stewardship I commended earlier in this essay. No one owns the Internet or the World Wide Web, and barring the rise of an industrial mega-power like the Buy-n-Large Corporation of Pixar’s 2008 movie WALL•E, no one will. Indeed, the healthy independence of the Internet and the Web is among the strongest bulwarks against the rise of a Buy-n-Large or the gigantic transnational corporations that play such a major role in the futures imagined by Kim Stanley Robinson, especially in his Hugo Award–winning Mars trilogy.

Some of the people most dedicated to the maintenance and development of the open Web also produce open-source software that makes it possible to acquire the skills I listed above. In this category we may find nonprofit organizations such as Mozilla, maker of the Firefox web browser, as well as for-profit organizations that make and release free and open-source software—for instance, Automattic, the maker of the popular blogging platform WordPress, and GitHub, whose employees, along with many volunteers, have created the excellent Atom text editor. One could achieve much of the independence I have recommended by using software available from those three sources alone.

I am, in short, endorsing here the goals of the Domain of One’s Own movement. As Audrey Watters, one of its most eloquent advocates, has observed,
By providing students and staff with a domain, I think we can start to address this [effort to achieve digital independence]. Students and staff can start to see how digital technologies work—those that underpin the Web and elsewhere. They can think about how these technologies shape the formation of their understanding of the world—how knowledge is formed and shared; how identity is formed and expressed. They can engage with that original purpose of the Web—sharing information and collaborating on knowledge-building endeavors—by doing meaningful work online, in the public, with other scholars. [The goal is that] they have a space of their own online, along with the support and the tools to think about what that can look like.
Watters adds that such a program of education goes far beyond the mere acquisition of skills: “I think its potential is far more radical than that. This isn’t about making sure literature students ‘learn to code’ or history students ‘learn to code’ or medical faculty ‘learn to code’ or chemistry faculty ‘learn to code.’” Instead, the real possibilities emerge from “recognizing that the World Wide Web is a site for scholarly activity. It’s about recognizing that students are scholars.” Scholars, I might add, who, through their scholarship, can be accountable to the future—who, to borrow a phrase from W.H. Auden, can “assume responsibility for time.” (...)

The Difference between Projecting and Promising


Training young people how to live and work extramurally—to limit their exposure to governance via terms of service and APIs—is a vital hedge against this future. We cannot prevent anyone from trusting his or her whole life to Facebook or Snapchat; but to know that there are alternatives, and alternatives over which we have a good deal of control, is powerful in itself. And this knowledge has the further effect of reminding us that code—including the algorithmic code that so often determines what we see online—is written by human beings for purposes that may be at odds with our own. The code that constitutes Facebook is written and constantly tweaked in order to increase the flow to Facebook of sellable data; if that code also promotes “global community,” so much the better, but that will never be its reason for being.

To teach children how to own their own domains and make their own websites might seem a small thing. In many cases it will be a small thing. Yet it serves as a reminder that the online world does not merely exist, but is built, and built to meet the desires of certain very powerful people—but could be built differently. Given the importance of online experience to most of us, and the great likelihood that its importance will only increase over time, training young people to do some building themselves can be a powerful counterspell to the one pronounced by Zuckerberg, who says that the walls of our social world are crumbling and only Facebook’s walls can replace them. We can live elsewhere and otherwise, and children should know that, and know it as early as possible. This is one of the ways in which we can exercise “the imperative of responsibility” and represent the future in the present.

by Alan Jacobs, The Hedgehog |  Read more:
Image: HedgehogReview.com
[ed. This is why I got off Facebook. Why share stuff I was interested in through an intermediary? Of course, Google could kill me off at anytime as well, and probably will at some point (being on Blogger), so enjoy Duck Soup while you can (or until I figure out how to transfer everything over to WordPress).]

Did you know the CIA _____?

I remember learning about Frank Olson in a high school psychology class, in our unit on drugs. What I learned is that during the ’50s the CIA experimented with LSD in their offices until one of their own got so high he fell out a window, embarrassing the agency. Not yet having experimented with LSD myself, that sounded like a believable turn of events. I did not learn about Frank Olson’s son Eric, and his life-defining quest to discover the truth about his father’s death. I did not learn about what actually happened to Frank, which is the subject of the new Errol Morris Netflix series Wormwood. What I learned that day in high school was a CIA cover story.

Spoiler alert: Frank Olson did not fall out of a hotel window in New York City, at least not on accident. The CIA did drug him—along with some of his coworkers—on a company retreat, but the LSD element seems to have functioned mostly as a red herring, a way to admit something without admitting the truth. Frank did not die while on drugs; the week following the acid retreat, Olson informed a superior he planned to leave his job at Camp Detrick and enter a new line of work. Within days he was dead, murdered by the CIA.

Wormwood is a six-episode miniseries, and because Morris spends the first few wiggling out from behind various CIA lies, the viewer isn’t prepared to understand and contextualize what (upon reflection) obviously happened, even when we’re told more or less straight out. Olson was a microbiologist who worked in weapons systems. He was killed in November 1953, in the waning days of open hostilities on the Korean peninsula, almost two years after the North Koreans first accused the United States of engaging in biological warfare. For decades there were rumors and claims: meningitis, cholera, smallpox, plague, hemorrhagic fever. Some of them diseases that had never been previously encountered in the area. The United States denied everything. But the United States also denied killing Frank Olson.

The most affecting moment in Wormwood occurs not during any of the historical reenactments—Peter Sarsgaard’s performance as Frank is only a notch or two above the kind of thing you might see on the History Channel—but at the end, when journalist Seymour Hersh is explaining to Morris that he can’t say on the record exactly what he now knows to be true about the case without burning his high-level source, but he still wants to offer Eric some closure. “Eric knows the ending,” he says, “I think he’s right. He’s totally convinced he knows the ending, am I right? Is he ambivalent in any way?” “No,” Morris confirms. Hersh gives a small shrug, “It’s a terrible story.” In the slight movement of his shoulders he says it all: Yes, the CIA murdered Eric’s father, as he has spent his whole adult life trying to prove, as he has known all along.

The CIA manages to contain a highly contradictory set of meanings: In stock conspiracy theory, the agency is second only to aliens in terms of “who did it,” as well as the Occam’s Razor best suspect for any notable murder that occurred anywhere in the world during the second half of the 20th century. I don’t think Americans have trouble simultaneously believing that stories of the CIA assassinating people are mostly “crazy,” and that they absolutely happened. What emerges from the contradiction is naïveté coated in a candy shell of cynicism, in the form of a trivia game called “Did you know the CIA _____?” Did you know the CIA killed Mossadegh? Did you know they killed Lumumba? Did you know the CIA killed Marilyn Monroe and Salvador Allende? Did you know they made a fake porn movie with a Sukarno lookalike, and they had to take out Noriega because he still had his CIA paystubs in a box in his closet? There’s a whole variant just about Fidel Castro. Some of these stories are urban legends, most are fundamentally true, and yet as individual tidbits they lack a total context. If cold war is the name for the third world war that didn’t happen, what’s the name for what did?

In a recent segment, Fox News host Laura Ingraham invited former CIA director James Woolsey to talk about Russian intervention in the American election. After chatting about China and Russia’s comparative cyber capabilities, Ingraham goes off script: “Have we ever tried to meddle in other countries’ elections?” Woolsey answers quickly: “Oh, probably, but it was for the good of the system, in order to avoid communists taking over. For example, in Europe, in ’47, ’48, ’49 . . . the Greeks and the Italians . . . we, the CIA . . . ” Ingraham cuts him off, “We don’t do that now though?” She is ready to deny it to herself and the audience, but here Woolsey makes a horrible, inane sound with his mouth. The closest analog I can think of is the sound you make when you’re playing with a toddler and you pretend to eat a piece of plastic watermelon, something like: “Myum myum myum myum.” He and Ingraham both burst into laughter. “Only for a very good cause. In the interests of democracy,” he chuckles. In the late ’40s, rigged Greek elections triggered a civil war in which over 150,000 people died. It is worth noting that Woolsey is a lifelong Democrat, while Ingraham gave a Nazi salute from the podium at the 2016 Republican National Convention.

Why does Woolsey answer “Oh, probably,” when he knows, first- or second-hand, that the answer is yes, and follows up with particular examples? The non-denial hand-wave goes further than yes. It says: Come on, you know we’d do anything. And Ingraham, already submerged in that patriotic blend of knowing and declining to know, transitions smoothly from “We don’t do that now though?” to laughing out loud. The glare of the studio lights off her titanium-white teeth is bright enough to illuminate seventy years of world history.

For as long as the CIA has existed, the US government has used outlandish accusations against the agency as evidence that this country’s enemies are delusional liars. At the same time, the agency has undeniably engaged in activities that are indistinguishable from the wildest conspiracy theories. Did the CIA drop bubonic plague on North Korea? Of course not. But if we did, then of course we did. It’s a convenient jump: Between these two necessities is the range of behaviors for which people and institutions can be held responsible. It’s hard to pull off this act with a straight face, but as Woolsey demonstrates in the Fox News clip, there’s no law saying you can’t do it with a big grin. (...)

Unfortunately Morris and Wormwood are focused on ambiguity for ambiguity’s sake, when, by the end of the story, there’s very little of it left. Eric found the CIA assassination manual, which includes a description of the preferred method: knocking someone on the head and then throwing them out of a high window in a public place. He has narrowed down the reasonable explanations—at the relevant level of specificity—to one. Unlike in a normal true-crime series, however, there’s nothing to be done: As Eric explains, you can sue the government for killing someone on accident, but not for killing them on purpose. The end result of Wormwood is that the viewer’s answer to the son flips like an Ingraham switch, from “Of course the CIA didn’t murder your dad” to “Of course the CIA murdered your dad.” I hope for his sake that the latter is easier to bear.
***
Did you know the CIA killed Bob Marley?

A CIA agent named Bill Oxley confessed on his deathbed that he gave the singer a pair of Converse sneakers, one of which hid in the toe a wire tainted with cancer. When Marley put on the shoes, he pricked his toe and was infected with the disease that would lead to his death.

No, that’s wrong. There was no CIA agent named Bill Oxley, and the story of Bob Marley’s lethal shoe is somewhere between an urban legend and fake news.

But did you know the CIA almost killed Bob Marley?

In 1976, facing a potentially close election, Jamaican prime minister Michael Manley maneuvered to co-opt a public concert by Marley, turning an intentionally apolitical show into a government-sponsored rally. When Marley agreed to go through with the show anyway, many feared a reprisal from the opposition Jamaica Labour Party (JLP), whose candidate Edward Seaga was implicitly endorsed by the American government. All year accusations had been flying that the CIA was, in various ways, intentionally destabilizing Jamaica in order to get Seaga in power and move the island away from Cuba (politically) and, principally, ensure cheap American access to the island’s bauxite ore. Both the JLP and Manley’s PNP controlled groups of gunmen, but (much to America’s chagrin) the social democrat Manley controlled the security forces, remained popular with the people, and was in general a capable politician (as evidenced by the concert preparations).

On December 3, 1976—two days before the concert—Marley was wounded when three gunmen shot up his house. Witnesses to the destruction describe “immense” firepower, with four automatics firing round after round—one of the men using two at the same time. The confidential State Department wire from Kingston was sent four days later: “REGGAE STAR SHOT; MOTIVE PROBABLY POLITICAL.” There was only one reasonable political motive: destabilization, in the interest of Seaga (or, as Kingston graffiti had it, “CIAga”). The concert was meant to bring Jamaicans together, but some forces wanted to rip them apart. Where did the assassins get their guns? The people of Jamaica knew: The CIA. (...)

The lack of a smoking gun for any particular accusation shouldn’t be a stumbling block. In the famous words of Donald Rumsfeld: “Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn’t exist.” (Rumsfeld would know; he was serving his first tour as secretary of defense during the Jamaican destabilization campaign.) The CIA exists in part to taint evidence, especially of its own activity. Even participant testimony can be discredited, as the CIA has done repeatedly (and with success) whenever former employees have spoken out, including during the Jamaican campaign. After all, in isolation each individual claim sounds—is carefully designed to sound—crazy. The circumstantial evidence, however, is harder to dismiss. If I rest a steak on my kitchen counter, leave the room, and come back to no steak and my dog licking the tile floor, I don’t need to check my door for a bandit. The CIA’s propensity for replacing frustrating foreign leaders or arming right-wing paramilitaries—especially in the western hemisphere—is no more mysterious than the dog. Refusing to put two and two together is not a mark of sophistication or fair-mindedness.

Of course the CIA shot Bob Marley. To assert that in that way is not to make a particular falsifiable claim about who delivered money to whom, who brought how many bullets where, who pulled which trigger, or who knew what when. It’s a broader claim about the circumstances under which it happened: a dense knot of information and interests and resources and bodies that was built that way on purpose, for that tangled quality, and to obtain a set of desired outcomes. The hegemonic “Grouping”—to put the State Department’s sarcastic term to honest work—ties the knot.

by Malcolm Harris, N+1 |  Read more:
Image: uncredited

The imprint of a kamikaze Mitsubishi Zero along the side of H.M.S. Sussex, 1945.

Japanese college students during their relocation to an internment camp. Sacramento, California, 1942
via:

Saturday, March 10, 2018


Ira Carter
via:

In Which I Fix My Girlfriend's Grandparents' WiFi and Am Hailed as a Conquering Hero

Lo, in the twilight days of the second year of the second decade of the third millennium did a great darkness descend over the wireless internet connectivity of the people of 276 Ferndale Street in the North-Central lands of Iowa. For many years, the gentlefolk of these lands basked in a wireless network overflowing with speed and ample internet, flowing like a river into their Compaq Presario. Many happy days did the people spend checking Hotmail and reading USAToday.com.

But then one gray morning did Internet Explorer 6 no longer load The Google. Refresh was clicked, again and again, but still did Internet Explorer 6 not load The Google. Perhaps The Google was broken, the people thought, but then The Yahoo too did not load. Nor did Hotmail. Nor USAToday.com. The land was thrown into panic. Internet Explorer 6 was minimized then maximized. The Compaq Presario was unplugged then plugged back in. The old mouse was brought out and plugged in beside the new mouse. Still, The Google did not load.

Some in the kingdom thought the cause of the darkness must be the Router. Little was known of the Router; legend told it had been installed behind the recliner long ago by a shadowy organization known as Comcast. Others in the kingdom believed it was brought by a distant cousin many feasts ago. Concluding the trouble must lie deep within the microchips, the people of 276 Ferndale Street did despair and resign themselves to defeat.

But with the dawn of the feast of Christmas did a beacon of hope manifest itself upon the inky horizon. Riding in upon a teal Ford Focus came a great warrior, a suitor of the gentlefolks’ granddaughter. Word had spread through the kingdom that this warrior worked with computers and perhaps even knew the true nature of the Router.

The people did beseech the warrior to aid them. They were a simple people, capable only of rewarding him with gratitude and a larger-than-normal serving of Jell-O salad. The warrior considered the possible battles before him. While others may have shirked the duties, forcing the good people of Ferndale Street to prostrate themselves before the tyrants of Comcast, Linksys, and Geek Squad, the warrior could not chill his heart to these depths. He accepted the quest and strode bravely across the beige shag carpet of the living room.

Deep, deep behind the recliner did the warrior crawl, over great mountains of National Geographic magazines and deep chasms of TV Guides. At last he reached a gnarled thicket of cords, a terrifying knot of gray and white and black and blue threatening to ensnare all who ventured further. The warrior charged ahead. Weaker men would have lost their minds in the madness: telephone cords plugged into Ethernet jacks, AC adapters plugged into phone jacks, a lone VGA cable wrapped in a firm knot around an Ethernet cord. But the warrior bested the thicket, ripping away the vestigial cords and swiftly untangling the deadly trap.

And at last the warrior arrived at the Router. It was a dusty black box with an array of shimmering green lights, blinking on and off, as if to taunt him to come any further. The warrior swiftly maneuvered to the rear of the Router and verified what he had feared, what he had heard whispered in his ear from spirits beyond: all the cords were securely in place.

The warrior closed his eyes, summoning the power of his ancestors, long departed but watchful still. And then with the echoing beep of his digital watch, he moved with deadly speed, wrapping his battle-hardened hands around the power cord at the back of the Router.

Gripping it tightly, he pulled with all his force, dislodging the cord from the Router. The heavens roared. The earth wailed. The green lights turned off. Silently the warrior counted. One. Two. Three. And just as swiftly, the warrior plugged the cord back into the Router. Great crashes of blood-red lightning boomed overhead. Murders of crows blackened the skies. The Power light came on solid green. The seas rolled. The WLAN light blinked on. The forests ignited. A dark fog rolled over the land and suddenly all was silent. The warrior stared at the Internet light, waiting, waiting. And then, as the world around him seemed all but dead, the Internet light began to blink.

by Mike Lacher, McSweeney's |  Read more:
Image: Piotr Adamowicz

McCoy Tyner Trio

The Streaming Void

Has the era of the cult film come to an end?

The defining cult film of the twenty-first century is neither a mirror held up to nature nor a hammer used to shape reality. The Room, released in 2003, is like a ninety-nine-minute episode of The Real World as performed by the inmates of the asylum of Charenton under the direction of no one. It is an incoherent broadside against evil women (or all women) and a backwards vindication of all-American male breadwinners who buy their girls roses and befriend at-risk teens. It’s a tragedy not just because it ends with a suicide, but also because sitting through it requires a robust Dionysian death drive. The Room is so bad that when you point out its idiocy, the idiocy of stating the obvious bounces back and sticks to you.

The plot is both simplistic and convoluted. The film’s writer, director, and producer, Tommy Wiseau, stars as Johnny, the only banker in America who’s also a stand-up guy. His fiancée, Lisa (Juliette Danielle), is a gold digger who spends idle days seducing Johnny’s best friend, Mark (Greg Sestero), and shopping with her manipulative mother (Carolyn Minnott). When Johnny learns about the affair, he kills himself. Fin. But first, Wiseau allows himself some inexplicable digressions. Johnny and his friends play football in tuxedos. Johnny and Mark save a teenage boy (Philip Haldiman) from a gun-wielding drug dealer (Dan Janjigian). The mom announces she has breast cancer. There are several endless, poorly blocked sex scenes. Some of this is funny; mostly, though, it’s boring.

It was Wiseau’s performance, mainly the dialogue studded with non sequiturs, that elevated The Room to its current “Citizen Kane of bad movies” status. In one famous scene, Johnny storms onto his building’s roof deck, ranting about a rumor Lisa’s spreading that he hit her, then greets his buddy with a casual, “Oh, hi, Mark.” It didn’t help that Wiseau was a creepy-looking dude in his late forties who styled himself like a romance-novel cover model and cast actors in their twenties as his peers. His accent, which is never explained in the movie, brings to mind a generic “foreigner” in an old sitcom.

Before you protest that I’m picking on a defenseless oddball, you should know how The Room got made and how it became a cult sensation. Wiseau was a wealthy man living under an assumed name, with residences in San Francisco and Los Angeles. An enthusiastic American patriot, he was cagey about his country of origin and claimed, flimsily, to have made his money flipping real estate. Sestero—Wiseau’s friend, collaborator, sometime roommate, and the co-author of The Disaster Artist, a memoir about The Room—once found a driver’s license in his friend’s name listing a date of birth thirteen years later than Wiseau was actually born.

Wiseau spent $6 million on the project—which used few locations and no complicated special effects—because its star wasted hours stumbling over simple lines and its director made dozens of expensive, absurd decisions. The Room was shot simultaneously on 35 mm film and digital video, for no good reason. Instead of filming an exterior scene in an alley outside the studio, Wiseau made his art director build an identical indoor alley set. It’s not that everyone just sat back and let a rich fool wreck himself—Wiseau ignored his crew’s advice, bullied actresses about their appearances, threw tantrums, and lied constantly. Minott once fainted because Wiseau refused to buy an air conditioner for the set.

When the movie was finally finished, Wiseau paid for two weeks of L.A.-area screenings in order to submit it for Oscar consideration. During that run, The Room earned only $1,800 but caught the attention of film students Michael Rousselet and Scott Gairdner, who Sestero claims were drawn in by a review blurb outside the theater that read, “Watching this film is like getting stabbed in the head.” They spread the gospel of Tommy Wiseau to its rightful audience of bad-movie connoisseurs, who’ve been throwing spoons (in tribute to the living-room set’s inscrutable spoon art) at the screen during sold-out midnight showings ever since. In September 2017, The Hollywood Reporter quoted an expert who estimated The Room was earning up to $25,000 a month. This must have helped Wiseau recoup the $300,000 he spent on the strange billboard advertising the film that hung in Hollywood for five years.

The Disaster Artist has been fictionalized as a well-received buddy comedy that yielded a best actor Golden Globe for its own director and star, James Franco. As midnight screenings of The Room grew ever more popular, the new publicity secured it one day of wide theatrical release, on January 10. (The next evening, the L.A. Times published five women’s allegations of sexual misconduct against Franco, which helps to explain both his apparent amusement at Wiseau’s creepy misogyny and why he didn’t get any Oscar nominations.) But the awards-bait Tommy Wiseau is a lighter character than the mean, narcissistic borderline stalker Sestero describes, and the movie’s tale of a weirdo’s unlikely triumph rings hollow when you consider that people with $6 million of disposable income can do pretty much whatever they want. (Although we now know Wiseau is sixty-two and hails from Poland, the source of his fortune—described in Sestero’s book as a “bottomless pit”—remains a mystery.)

It makes an unfortunate sort of sense, when you consider our current political reality, that we’ve spent so much time and money celebrating the stupid, misogynistic vanity project of a self-described real estate tycoon with piles of possibly ill-gotten cash. Cult movies used to be scruffy, desperately original, and intermittently brilliant works of transgressive art that left audiences energized, and sometimes radicalized. The Room—which is bad art, but art nonetheless—does the opposite. The mirror it holds up is the underside of a dirty metal spoon; the reflection you see in it is blurry but genuine. So what’s sadder: that it set the prototype for the twenty-first-century American cult film or that it might wind up being our last enduring cult hit?

Hammer Time

Cult films once resembled Brechtian hammers more often than Shakespearean mirrors. The history of the form is as disjointed as the shaggiest entries in its filmography, but it’s possible to splice together a rough chronology. Although the phrase “cult film” wasn’t common until the seventies, the idea that movies and their stars could have cultish appeal dates back to the silent era. In the essay “Film Cults,” from 1932, the critic Harry Alan Potamkin traces the phenomenon to French Charlie Chaplin fans in the 1910s. He figures the United States had cultists of its own by 1917, when “American boys of delight,” by which he means populist critics, “began to write with seriousness, if not with critical insight, about the rudimentary film.” Potamkin cites the Marx Brothers, Mickey Mouse, and The Cabinet of Dr. Caligari as early objects of cinephilic obsession.

Over the next few decades, cults formed around stars whose personalities eclipsed their versatility as actors, from Humphrey Bogart to Judy Garland. B movies thrived at fifties drive-ins, spawning genre-loyal cults of western, sci-fi, and horror fans. Exploitation cinema—skeletally plotted collages of sex, drugs, and violence created to “exploit” captive audiences of various demographics—took off in the sixties, especially after the Production Code collapsed in 1968. Then the Hollywood wing of the youth counterculture started to make psychedelic films like Easy Rider and Head. Arthouses showed such sexually explicit, politically radical European movies as I Am Curious (Yellow) alongside the work of Fellini and Godard. Low-budget auteurs, most notably John Waters, combined all of those influences to make self-aware trash with subversive overtones.

Alejandro Jodorowsky’s mystical “acid western” El Topo wasn’t the first movie to screen at midnight, but its six-month run at New York’s Elgin Theater in 1970 and 1971 set the template for “midnight movies” as a cult ritual. About five years later, The Rocky Horror Picture Show opened a mile away at the Waverly. Interactive midnight screenings in cities around the country followed, and they’re still filling theaters after four decades.

That half a century of cult films preceded any attempt to define the category helps to explain why determining what even makes a “cult film” is so difficult. Cultists’ holiest text, Danny Peary’s Cult Movies (1981), does a solid job enumerating their most common attributes: “atypical heroes and heroines; offbeat dialogue; surprising plot resolutions; highly original storylines; brave themes, often of a sexual or political nature; ‘definitive’ performances by stars who have cult status; the novel handling of popular but stale genres.” Rocky Horror, a retro sci-fi musical that chronicles a prudish young couple’s corruption at the hands of a genderqueer alien/mad scientist who is ultimately vanquished by his own servants, meets all of these criteria.

Still, “cult classic” is an infinitely elastic term that crosses the boundaries of budget, genre, style, language, and intended audience.

by Judy Berman, The Baffler | Read more:
Image: Najeebah Al-Ghadban
[ed. I just finished reading The Disaster Artist and have absolutely zero interest in ever watching it on film, or The Room either (although one of my favorite movies of all time is Ed Wood. Go figure).]

Friday, March 9, 2018

Thoughts on the Trump-Kim Summit

I wanted to briefly comment and share some perspective on yesterday’s announcement of a Kim-Trump summit this spring. I’m going to format them as a series of propositions or individual items rather than a structured argument.

1. Despite all the bad things about President Trump’s management of U.S. foreign policy, there’s almost nothing that could be worse or more perilous than the progression of events of the last six to eight months on the Korean peninsula. There’s likely no more dangerous tension point in the world for the United States than the Korean peninsula. This is better than the alternative.

2. It is critical to understand that it is very, very hard to imagine that North Korea at any time in the foreseeable future will give up its nuclear weapons and nuclear weapons delivery capacity. President Trump does not seem to realize that. Why should they? One thing that is clear in the post-Cold War world is that states with nuclear weapons do not get attacked or overthrown by force of arms by the U.S. or anyone else. Nuclear states are the “made men” of the 21st-century global order. The North Korean state leadership may be paranoid. But they do have enemies. Critically, they are the only communist state based on a Cold War-era national division that has survived the fall of the Soviet Union. And power vis-à-vis the outside world is a centerpiece of the Kim family’s legitimacy within North Korea. (People I listen to who really know these issues often remind me that the Kim family’s calculus is driven not by calculations about the U.S. or South Korea but by the internal logic of regime stability.)

It is equally important to understand that North Korea probably mainly already has what it wants, a robust nuclear deterrent. They have demonstrated an ability to detonate multiple nuclear warheads and they have demonstrated the ability to launch ICBMs which can likely reach the continental United States. It’s unclear to me (and I suspect unclear to the U.S.) whether North Korea can combine those two technologies to deliver a nuclear warhead to the United States. But I don’t think we can discount that possibility. That very real possibility creates a massive deterrent already. This is important because it means that North Korea has some freedom to suspend its nuclear and missile testing for a short time or perhaps even indefinitely. Because they already have what they want. On the deterrence front, the status quo may work for them.

What Kim has done is agree to suspend testing (something he likely feels he can do from some position of strength) and meet with the U.S. as equals with no preconditions. This is a resounding confirmation of Kim’s premise and internal argument for legitimacy and power that building a nuclear arsenal will bring North Korea respect, power, and international legitimacy. Remember a key point. The Kims have been pushing for a summit with a U.S. president for 25 years. They have wanted this forever. Did President Trump know this? Or did he think it was a confirmation of his policy and genius? I strongly suspect it’s the latter.

What all of this means is that North Korea mainly has what it wants, or rather what it feels it truly needs, and can bargain from a position of strength to get what it wants: an end to sanctions, normalization, aid, etc. It is highly unrealistic to imagine that North Korea will ever agree to denuclearize.

3. Generally speaking, you agree to a summit like this once there’s an agreement more or less in place worked out by subordinates. The meeting is what brings it all together. Trump appears inclined to approach this like a business negotiation that he’s going to knock out of the park even though he doesn’t have much understanding of what’s being discussed. He shows every sign of getting played and we’re likely to see a lengthy process of aides trying to make the best of the fait accompli he’s created.

U.S. commentators often say that the U.S. shouldn’t hold summits like this because it confers “legitimacy” on a bad-acting state. This always sounds a bit self-flattering to me. It is probably more accurate to say that it confers status and power to treat with the global great power as an equal. That is a thing of value, certainly to North Korea. They’ve wanted it for decades.

President Trump has already stated publicly that this is a negotiation, a meeting to achieve denuclearization. No one from North Korea has said that. Trump has said that. It is highly unlikely that North Korea will ever agree to that. That sets up high odds of embarrassment and disappointment. Given that the President has shown very little inclination to be briefed or take advice, the odds are even greater. So will Trump agree to things he shouldn’t? Will he feel humiliated and react belligerently? It’s a highly unpredictable encounter with an inexperienced and petulant President who will reject almost all counsel. It sounds like North Korea has gotten a really big thing in exchange for very little and has no real incentive to do more than meet, bask, say generic things, and not agree to anything. Trump looks like he’s getting played big time. I suspect we will learn that he didn’t consult with any advisors before agreeing to meet.

What does this all mean? As Churchill said, jaw, jaw, jaw is better than war, war, war. We have been on an extremely dangerous trajectory. There are no good solutions. There are probably no realistic paths to North Korea ceasing to be a nuclear power. But you could perhaps find agreements to limit the scope and reach of the nuclear and missile programs in place (perhaps even scale them back) with some mix of normalization and aid. But we start with an opening gambit in which Trump seems to be stumbling into something of a trap and being guided by his self-importance and vanity rather than any realistic appraisal of the situation.

Despite it being better than the alternative, it’s starting in the worst way.

by Josh Marshall, TPM |  Read more:
[ed. See also: White House Now Trying To Moonwalk Back Trump’s Summit Goof]

$1 Fentanyl Test Strip

No drug has fueled the current spike in overdose deaths more than fentanyl. The synthetic opioid claimed two thirds of the record 64,000 such fatalities in the U.S. in 2016.

Up to 100 times more potent than morphine, this compound has played a significant role in reducing Americans’ life expectancy for the second straight year. In three states—Rhode Island, New Hampshire and Massachusetts—the drug was found responsible for at least 70 percent of opioid-related deaths, in what drug-harm reduction specialists have described as “slow-motion slaughter.”

Jess Tilley, a harm-reduction veteran in Northampton, Mass., deploys several outreach teams to rural areas. They pass out clean syringes and the overdose-reversal drug naloxone—and refer people to detox programs. But Tilley’s most in-demand item is a $1 testing strip that accurately detects the presence of fentanyl, which dealers sometimes add to boost the strength of illicit drugs.

In 2016, when the overdose rate in western Massachusetts doubled in a year, Tilley bought a thousand fentanyl testing strips—low-tech devices that resemble a pregnancy test—from a Canadian company, and began distributing them to drug users. She says the response was immediate. As demand skyrocketed, she also began asking low-level drug dealers to test their supplies for fentanyl. Tilley says they began regularly pulling tainted supplies from the market. “When people get a tangible result, it changes behavior,” says Tilley, executive director of the nonprofit New England User's Union. “I’ve been able to track behavior trends. People say when they get results [from the strips], they’re cutting back half of what they’re doing, or they’re making sure they have someone with them when they get high.”

A study released in February reinforces Tilley’s anecdotal accounts. Conducted by Johns Hopkins and Brown universities, the study examined three technologies for testing fentanyl in street drug supplies, and looked at how such testing influenced fentanyl use behavior. The strips (based on an immunoassay, which uses the bonding of an antibody with an antigen to detect the presence of fentanyl) proved most reliable according to the study, detecting fentanyl with 100 percent accuracy in drug samples from Baltimore and 96 percent accuracy in those from Rhode Island. (...)

“It’s an important study, and it shows that the fentanyl test can be really used as a point-of-care test within harm-reduction programs,” says Jon Zibbell, a public health scientist at nonprofit research organization RTI International. “The one limitation of the test strips is that they are not quantitative—they don't tell you how much product is there.”

Zibbell, a former health scientist at the U.S. Centers for Disease Control and Prevention who was not involved in the new study, says he is working on his own fentanyl-testing research. He believes the logical next step for opioid-deluged communities would be to set up local facilities where users can have their drugs subjected to a more quantitative and qualitative analysis. “If we really want to deal with the myriad of drugs that are in these products,” he says, “we need to have labs where people can drop their stuff off and have a result in real time. That would increase knowledge, increase safety and, at the end of the day, reduce overdose fatalities.”

by Alfonso Serrano, Scientific American | Read more:
Image: wonderferret Flickr

Kimiyo Mishima, Fragment II, 1964
via: