Sunday, March 11, 2018

Whose University Is It Anyway?

Toward the end of his life George Orwell wrote, “By the age of 50, everyone has the face he deserves.” The same is true of societies and their universities. By the time a society reaches its prime, it has the university it deserves. We have arrived there now in Canada, in the middle age of our regime, well past our youth but not quite to our dotage. What do we see when we look into the mirror of our universities? What image do we find there? Lots of smiling students, lots of talk of “impact” and “innovation,” more than one shovel going into the ground, a host of new community and industry partnerships to celebrate. But whose image is that really? Who created it and whom does it serve?

Administrators control the modern university. The faculty have “fallen,” to use Benjamin Ginsberg’s term. It’s an “all-administrative” institution now. Spending on administrators and administration exceeds spending on faculty, administrators outnumber faculty by a long shot, and administrative salaries and benefit packages, particularly those of presidents and other senior managers, have skyrocketed over the last 10 years. Even more telling perhaps, students themselves increasingly resemble administrators more than professors in their ambitions and needs. Safety, comfort, security, quality services, first-class accommodations, guaranteed high grades, institutional brand, better job placements, the market value of the credential — these are the things one hears students demanding these days, not truth, justice, and intelligence. The traditional language of “professors” and “students” still exists, though “service provider” and “consumer” are making serious bids to replace them. The principles of collegial governance and joint decision-making are still on the books, but they are no longer what the institution is about or how it works.

The revolution is over and the administrators have won. But the persistence of traditional structures and language has led some to think that the fight over the institution is now just beginning. This is a mistake. As with most revolutions, open conflict occurs only after real power has already changed hands. In France, for instance, the bourgeoisie were able to seize control of the regime because in a sense they already had it. The same is true of the modern university. Administrators have been slowly taking control of the institution for decades. The recent proliferation of books, essays, and manifestoes critiquing this takeover creates the impression that the battle is now on. But that is an illusion, and most writers know it. All the voices of protest, many of them beautiful and insightful, all of them noble, are either cries of the vanquished or merely a dogged determination to take the losing case to court.

So what’s to do? Keep fighting and risk being canned? Admit the world has changed and join them? Concede defeat and quit?

These are all plausible responses, some uneasy mixture of which is likely what most of us use each day to survive. Personally, I’m less strident than the activists but more active than the pessimists. My own proposal is thus old-fashioned but also mildly seditious: I suggest we think about this change in the university in order to reach some understanding of what it means. Then we can act as we see fit, though without any illusions about consequences.

In order to do this I propose a test. A favorite trope among the administrative castes is accountability. People must be held accountable, they tell us, particularly professors. Well, let’s take them at their word and hold them accountable. How have they done with the public trust since assuming control of the university?

There is more than a little irony in this test. One of the most significant changes initiated in Canadian universities by the new administrative caste is precisely a reversal of traditional roles of accountability. In the traditional university, professors were “unaccountable.” The university was a sacred space where they were at liberty to pursue with students and colleagues their fields of inquiry without coercion or interference. This doesn’t mean they were free without qualification, of course. Professors were deeply accountable, but in a sense that went far beyond the reach, ambition, and perhaps even the interests of the administrative caste — they were accountable to discover and then to tell the truth, and to encourage their students to do the same. Assessing their abilities and accomplishments in this regard was a matter of judgment and so could not be quantified; such judgment could be exercised only by those capable of it. A mechanism was therefore introduced to ensure this judgment was reached before the university committed to a faculty member permanently. After roughly 15 years of undergraduate and postgraduate study, and then a long period of careful professional observation and assessment, in most universities lasting five to six years, only those professors who proved themselves worthy were granted tenure and allowed to continue their teaching and research in pursuit of this beautiful goal.

Administrators, on the other hand, were always held accountable precisely because their responsibilities were administrative in nature and therefore amenable to measurement and regular public audit. They were responsible to ensure the activities of students and professors were not interfered with and to manage the institution’s financial affairs. They were, in this sense, stewards of the sacred space, not its rulers.

In the contemporary university these roles have been reversed. Faculty members are the ones who are now accountable, but no longer to their peers and students and no longer regarding mastery of their subjects. Instead, they are accountable to administrators, who employ an increasingly wide array of instruments and staff to assess their productivity and measure their performance, all of which are now deemed eminently quantifiable. In place of judgment regarding the quality of their work we now have a variety of “outcomes” used as measures of worth. Student evaluations and enrollments (i.e., popularity), learning as determined by “rubrics,” quantity of publications, amount of research dollars, extent of social “impact” are the things that count now. In other words, only things you can quantify, none of which require judgment.

The administrators who protested so vociferously the lack of accountability of professors have now assumed the position themselves. Administrators are virtually untouchable today. Their value to the institution is assumed to be so great that it cannot be measured and cannot be subject to critical assessment. This explains in part their metastatic growth within the institution. University presidents having trouble “transitioning” to their new positions? Administrators having trouble administrating? No problem. What we need is a “Transition Committee” — that is to say, more administration — and for them all to be given ever more power in the governance of the institution.

Ask about virtually any problem in the university today and the solution proposed will inevitably be administrative. Why? Because we think administrators, not professors, guarantee the quality of the product and the achievement of institutional goals. But how is that possible in an academic environment in which knowledge and understanding are the true goals? Without putting too fine a point on it, it’s because they aren’t the true goals any longer. With the exception of certain key science and technology programs in which content proficiency is paramount, administrative efficiency and administrative mindedness are the true goals of the institution. Liberal arts and science programs are quietly being transmogrified through pressure from technology and technological modes of education so that their “content” is increasingly merely an occasion for the delivery of what the university truly desires — well-adjusted, administratively minded people to populate the administrative world we’ve created for them. The latent assumption in all this is that what is truly important is not what students know or how intelligent they are, but how well and how often they perform and how finely we measure it.

If you think I exaggerate, consider the deliverables universities are forever touting to students today: “collaboration,” “communication,” “critical analysis,” “impact.” All abstract nouns indicating things you can do or have, but not a word about what you know or who you are. No promise to teach you history or politics or biology or to make you wise or thoughtful or prudent. Just skills training to equip you to perform optimally in a competitive, innovative world.

Western capitalist societies have come into an inheritance in this respect. Friedrich Engels infamously remarked that in a truly communist state “the government of persons” would be replaced by the “administration of things.” The West has done the East one better and achieved its goal without the brutality that was the East’s undoing. We are now all happy, efficient, administrative objects producing and functioning within the Western technocratic social organism.

by Ron Srigley, LARB | Read more:
Image: uncredited via

Tending the Digital Commons: A Small Ethics Toward the Future

Facebook is unlikely to shut down tomorrow; nor is Twitter, or Instagram, or any other major social network. But they could. And it would be a good exercise to reflect on the fact that, should any or all of them disappear, no user would have any legal or practical recourse. I started thinking about this situation a few years ago when Tumblr—a platform devoted to a highly streamlined form of blogging, with an emphasis on easy reposting from other accounts—was bought by Yahoo. I was a heavy user of Tumblr at the time, having made thousands of posts, and given the propensity of large tech companies to buy smaller ones and then shut them down, I wondered what would become of my posts if Yahoo decided that Tumblr wasn’t worth the cost of maintaining it. I found that I was troubled by the possibility to a degree I hadn’t anticipated. It would be hyperbolic (not to say comical) to describe my Tumblr as a work of art, but I had put a lot of thought into what went on it, and sometimes I enjoyed looking through the sequence of posts, noticing how I had woven certain themes into that sequence, or feeling pleasure at having found interesting and unusual images. I felt a surge of proprietary affection—and anxiety.

Many personal computers have installed on them a small command-line tool called wget, which allows you to download webpages, or even whole websites, to your machine. I immediately downloaded the whole of my Tumblr to keep it safe—although if Tumblr did end up being shut down, I wasn’t sure how I would get all those posts back online. But that was a problem I could reserve for another day. In the meantime, I decided that I needed to talk with my students.
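
[ed. For readers who have never used wget, here is a minimal sketch of the kind of one-shot backup described above, wrapped in Python. It assumes wget is installed and on your PATH; the address and folder name are hypothetical placeholders, so substitute your own:]

```python
import subprocess

def mirror_site(url: str, dest_dir: str = "tumblr-backup") -> None:
    """Download a browsable offline copy of an entire site using wget."""
    subprocess.run(
        [
            "wget",
            "--mirror",            # recurse through the whole site, keeping timestamps
            "--convert-links",     # rewrite links so the local copy works offline
            "--page-requisites",   # also fetch the images, CSS, and scripts pages need
            "--no-parent",         # stay inside the site you asked for
            "--wait=1",            # pause between requests out of politeness
            "--directory-prefix", dest_dir,
            url,
        ],
        check=True,                # raise an error if wget fails
    )

if __name__ == "__main__":
    # Hypothetical address; substitute the blog you want to preserve.
    mirror_site("https://example.tumblr.com")
```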

I was teaching a course at the time on reading, writing, and research in digital environments, so the question of who owns what we typically think of as “our” social media presence was a natural one. Yet I discovered that these students, all of whom were already interested in and fairly knowledgeable about computing, had not considered this peculiar situation—and were generally reluctant to: After all, what were the alternatives? Social media are about connecting with people, one of them commented, which means that you have to go where the people are. So, I replied, if that means that you have to give your personal data to tech companies that make money from it, that’s what you do? My students nodded, and shrugged. And how could I blame them? They thought as I had thought until about forty-eight hours earlier; and they acted as I continued to act, although we were all to various degrees uneasy about our actions.

In the years since I became fully aware of the vulnerability of what the Internet likes to call my “content,” I have made some changes in how I live online. But I have also become increasingly convinced that this vulnerability raises wide-ranging questions that ought to be of general concern. Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt. (...)

Learning to Live Outside the Walls

The first answers to these questions are quite concrete. This is not a case in which a social problem can profitably be addressed by encouraging people to change their way of thinking—although as a cultural critic I naturally default to that mode of suasion. It goes against my nature to say simply that certain specific changes in practice are required. But this is what I must say. We need to revivify the open Web and teach others—especially those who have never known the open Web—to learn to live extramurally: outside the walls.

What do I mean by “the open Web”? I mean the World Wide Web as created by Tim Berners-Lee and extended by later coders. The open Web is effectively a set of protocols that allows the creating, sharing, and experiencing of text, sounds, and images on any computer that is connected to the Internet and has installed on it a browser that can interpret information encoded in conformity with these protocols.

In their simplicity, those protocols are relentlessly generative, producing a heterogeneous mass of material for which the most common descriptor is simply “content.” It took a while for that state of affairs to come about, especially since early Internet service providers like CompuServe and AOL tried to offer proprietary content that couldn’t be found elsewhere, after the model of newspapers or magazines. This model might have worked for a longer period if the Web had been a place of consumption only, but it was also a place of creation, and people wanted what they created to be experienced by the greatest number of people possible. (As advertising made its way onto the Web, this was true of businesses as well as individuals.) And so the open Web, the digital commons, triumphed over those first attempts to keep content enclosed.

In the relatively early years of the Web, the mass of content was small enough that a group of people at Yahoo could organize it by category, in something like a digital version of the map of human knowledge created by the French Encyclopedists. But soon this arrangement became unwieldy, and seekers grew frustrated with clicking their way down into submenus only to have to click back up again when they couldn’t find what they wanted and plunge into a different set of submenus. Moreover, as the Web became amenable to more varied kinds of “content,” the tasks of encoding, uploading, and displaying one’s stuff became more technically challenging; not all web browsers were equally adept at rendering and displaying all the media formats and types. It was therefore inevitable that companies would arise to help manage the complexities.

Thus the rise of Google, with its brilliantly simple model of keyword searching as the most efficient replacement for navigating through tree-like structures of data—and thus, ultimately, the rise of services that promised to do the technical heavy lifting for their users, display their content in a clear and consistent way, and connect them with other people with similar interests, experiences, or histories. Some of these people have become the overlords of social media.

It is common to refer to universally popular social media sites like Facebook, Instagram, Snapchat, and Pinterest as “walled gardens.” But they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell. Some of these factories (Twitter, Tumblr, and more recently Instagram) have transparent walls, by which I mean that you need an account to post anything but can view what has been posted on the open Web; others (Facebook, Snapchat) keep their walls mostly or wholly opaque. But they all exercise the same disciplinary control over those who create or share content on their domain.

I say there is no financial compensation for users, but many users feel themselves amply compensated by the aforementioned provisions: ease of use, connection with others, and so on. But such users should realize that everything they find desirable and beneficial about those sites could disappear tomorrow and leave them with absolutely no recourse, no one to whom to protest, no claim that they could make to anyone. When George Orwell was a scholarship boy at an English prep school, his headmaster, when angry, would tell him, “You are living on my bounty.” If you’re on Facebook, you are living on Mark Zuckerberg’s bounty.

This is of course a choice you are free to make. The problem comes when, by living in conditions of such dependence, you forget that there’s any other way to live—and therefore cannot teach another way to those who come after you. Your present-day social-media ecology eclipses the future social-media ecology of others. What if they don’t want their social lives to be bought and sold? What if they don’t want to live on the bounty of the factory owners of Silicon Valley? It would be good if we bequeathed to them another option, the possibility of living outside the walls the factory owners have built—whether for our safety or to imprison us, who can say? The open Web happens outside those walls.

A Domain of One’s Own

For the last few years we’ve been hearing a good many people (most of them computer programmers) say that every child should learn to code. As I write these words, I learn that Tim Cook, the CEO of Apple, has echoed that counsel. Learning to code is a nice thing, I suppose, but should be far, far down on our list of priorities for the young. Coding is a problem-solving skill, and few of the problems that beset young people today, or are likely to in the future, can be solved by writing scripts or programs for computers to execute. I suggest a less ambitious enterprise with broader applications, and I’ll begin by listing the primary elements of that enterprise. I think every young person who regularly uses a computer should learn the following:
how to choose a domain name
how to buy a domain
how to choose a good domain name provider
how to choose a good website-hosting service
how to find a good free text editor
how to transfer files to and from a server
how to write basic HTML, including links to CSS (Cascading Style Sheets) files (see the sketch after this list)
how to find free CSS templates
how to fiddle around in those templates to adjust them to your satisfaction
how to do basic photograph editing
how to cite your sources and link to the originals
how to use social media to share what you’ve created on your own turf rather than create within a walled factory
One could add considerably to this list, but these, I believe, are the rudimentary skills that should be possessed by anyone who wants to be a responsible citizen of the open Web—and not to be confined to living on the bounty of the digital headmasters.
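
[ed. A minimal sketch of the “basic HTML, including links to CSS files” item on the list above: a short Python script that writes a two-file starter site (an index.html linked to a style.css) to a local folder, ready to be transferred to whatever host you choose. The folder name, page text, and styles are placeholders of my own, not anything prescribed by the essay:]

```python
from pathlib import Path

# Placeholder markup: a bare page that links to an external stylesheet.
HTML = """<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>A Domain of One's Own</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <h1>Hello, open Web</h1>
    <p>This page lives outside the walls.</p>
  </body>
</html>
"""

# Placeholder styles to fiddle with to your satisfaction.
CSS = """body { font-family: Georgia, serif; max-width: 40em; margin: 2em auto; }
h1 { color: #333333; }
"""

def write_starter_site(folder: str = "my-site") -> None:
    """Create index.html and the style.css it links to in a local folder."""
    site = Path(folder)
    site.mkdir(exist_ok=True)
    (site / "index.html").write_text(HTML, encoding="utf-8")
    (site / "style.css").write_text(CSS, encoding="utf-8")

if __name__ == "__main__":
    write_starter_site()
    # The resulting folder can then be copied to a hosting service with any
    # file-transfer tool, per the other items on the list above.
```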

There is, of course, no way to be completely independent online, either as an individual or a community: This is life on the grid, not off. Which means that anyone who learns the skills listed above—and even those who go well beyond such skills and host their websites on their own servers, while producing electricity on their own wind farms—will nevertheless need an Internet service provider. I am not speaking here of complete digital independence, but, rather, independence from the power of the walled factories and their owners.

A person who possesses and uses the skills on my list will still be dependent on organizations like ICANN (Internet Corporation for Assigned Names and Numbers) and its subsidiary IANA (Internet Assigned Numbers Authority), and the W3C (World Wide Web Consortium). But these are nonprofit organizations, and are moving toward less entanglement with government. For instance, IANA worked for eighteen years under contract with the National Telecommunications and Information Administration, a bureau of the US Department of Commerce, but that contract expired in October 2016, and IANA and ICANN are now run completely by an international community of volunteers. Similarly, the W3C, which controls the protocols by which computers on the Web communicate with one another and display information to users, is governed by a heterogeneous group that included, at the time of writing, not only universities, libraries, and archives from around the world but also Fortune 500 companies—a few of them being among those walled factories I have been warning against.

In essence, the open Web, while not free from governmental and commercial pressures, is about as free from such pressures as a major component of modern capitalist society can be. And indeed it is this decentralized organizational model, coupled with heavy reliance on volunteer labor, that invites the model of stewardship I commended earlier in this essay. No one owns the Internet or the World Wide Web, and barring the rise of an industrial mega-power like the Buy-n-Large Corporation of Pixar’s 2008 movie WALL•E, no one will. Indeed, the healthy independence of the Internet and the Web is among the strongest bulwarks against the rise of a Buy-n-Large or the gigantic transnational corporations that play such a major role in the futures imagined by Kim Stanley Robinson, especially in his Hugo Award–winning Mars trilogy.

Some of the people most dedicated to the maintenance and development of the open Web also produce open-source software that makes it possible to acquire the skills I listed above. In this category we may find nonprofit organizations such as Mozilla, maker of the Firefox web browser, as well as for-profit organizations that make and release free and open-source software—for instance, Automattic, the maker of the popular blogging platform WordPress, and GitHub, whose employees, along with many volunteers, have created the excellent Atom text editor. One could achieve much of the independence I have recommended by using software available from those three sources alone.

I am, in short, endorsing here the goals of the Domain of One’s Own movement. As Audrey Watters, one of its most eloquent advocates, has observed,
By providing students and staff with a domain, I think we can start to address this [effort to achieve digital independence]. Students and staff can start to see how digital technologies work—those that underpin the Web and elsewhere. They can think about how these technologies shape the formation of their understanding of the world—how knowledge is formed and shared; how identity is formed and expressed. They can engage with that original purpose of the Web—sharing information and collaborating on knowledge-building endeavors—by doing meaningful work online, in the public, with other scholars. [The goal is that] they have a space of their own online, along with the support and the tools to think about what that can look like.
Watters adds that such a program of education goes far beyond the mere acquisition of skills: “I think its potential is far more radical than that. This isn’t about making sure literature students ‘learn to code’ or history students ‘learn to code’ or medical faculty ‘learn to code’ or chemistry faculty ‘learn to code.’” Instead, the real possibilities emerge from “recognizing that the World Wide Web is a site for scholarly activity. It’s about recognizing that students are scholars.” Scholars, I might add, who, through their scholarship, can be accountable to the future—who, to borrow a phrase from W.H. Auden, can “assume responsibility for time.” (...)

The Difference between Projecting and Promising


Training young people how to live and work extramurally—to limit their exposure to governance via terms of service and APIs—is a vital hedge against this future. We cannot prevent anyone from trusting his or her whole life to Facebook or Snapchat; but to know that there are alternatives, and alternatives over which we have a good deal of control, is powerful in itself. And this knowledge has the further effect of reminding us that code—including the algorithmic code that so often determines what we see online—is written by human beings for purposes that may be at odds with our own. The code that constitutes Facebook is written and constantly tweaked in order to increase the flow to Facebook of sellable data; if that code also promotes “global community,” so much the better, but that will never be its reason for being.

To teach children how to own their own domains and make their own websites might seem a small thing. In many cases it will be a small thing. Yet it serves as a reminder that the online world does not merely exist, but is built, and built to meet the desires of certain very powerful people—but could be built differently. Given the importance of online experience to most of us, and the great likelihood that its importance will only increase over time, training young people to do some building themselves can be a powerful counterspell to the one pronounced by Zuckerberg, who says that the walls of our social world are crumbling and only Facebook’s walls can replace them. We can live elsewhere and otherwise, and children should know that, and know it as early as possible. This is one of the ways in which we can exercise “the imperative of responsibility” and represent the future in the present.

by Alan Jacobs, The Hedgehog Review |  Read more:
Image: HedgehogReview.com
[ed. This is why I got off Facebook. Why share stuff I was interested in through an intermediary? Of course, Google could kill me off at any time as well, and probably will at some point (being on Blogger), so enjoy Duck Soup while you can (or until I figure out how to transfer everything over to WordPress).]

Did you know the CIA _____?

I remember learning about Frank Olson in a high school psychology class, in our unit on drugs. What I learned is that during the ’50s the CIA experimented with LSD in their offices until one of their own got so high he fell out a window, embarrassing the agency. Not yet having experimented with LSD myself, that sounded like a believable turn of events. I did not learn about Frank Olson’s son Eric, and his life-defining quest to discover the truth about his father’s death. I did not learn about what actually happened to Frank, which is the subject of the new Errol Morris Netflix series Wormwood. What I learned that day in high school was a CIA cover story.

Spoiler alert: Frank Olson did not fall out of a hotel window in New York City, at least not on accident. The CIA did drug him—along with some of his coworkers—on a company retreat, but the LSD element seems to have functioned mostly as a red herring, a way to admit something without admitting the truth. Frank did not die while on drugs; the week following the acid retreat, Olson informed a superior he planned to leave his job at Camp Detrick and enter a new line of work. Within days he was dead, murdered by the CIA.

Wormwood is a six-episode miniseries, and because Morris spends the first few wiggling out from behind various CIA lies, the viewer isn’t prepared to understand and contextualize what (upon reflection) obviously happened, even when we’re told more or less straight out. Olson was a microbiologist who worked in weapons systems. He was killed in November 1953, in the waning days of open hostilities on the Korean peninsula, almost two years after the North Koreans first accused the United States of engaging in biological warfare. For decades there were rumors and claims: meningitis, cholera, smallpox, plague, hemorrhagic fever. Some of them diseases that had never been previously encountered in the area. The United States denied everything. But the United States also denied killing Frank Olson.

The most affecting moment in Wormwood occurs not during any of the historical reenactments—Peter Sarsgaard’s performance as Frank is only a notch or two above the kind of thing you might see on the History Channel—but at the end, when journalist Seymour Hersh is explaining to Morris that he can’t say on the record exactly what he now knows to be true about the case without burning his high-level source, but he still wants to offer Eric some closure. “Eric knows the ending,” he says, “I think he’s right. He’s totally convinced he knows the ending, am I right? Is he ambivalent in any way?” “No,” Morris confirms. Hersh gives a small shrug, “It’s a terrible story.” In the slight movement of his shoulders he says it all: Yes, the CIA murdered Eric’s father, as he has spent his whole adult life trying to prove, as he has known all along.

The CIA manages to contain a highly contradictory set of meanings: In stock conspiracy theory, the agency is second only to aliens in terms of “who did it,” as well as the Occam’s Razor best suspect for any notable murder that occurred anywhere in the world during the second half of the 20th century. I don’t think Americans have trouble simultaneously believing that stories of the CIA assassinating people are mostly “crazy,” and that they absolutely happened. What emerges from the contradiction is naïveté coated in a candy shell of cynicism, in the form of a trivia game called “Did you know the CIA _____?” Did you know the CIA killed Mossadegh? Did you know they killed Lumumba? Did you know the CIA killed Marilyn Monroe and Salvador Allende? Did you know they made a fake porn movie with a Sukarno lookalike, and they had to take out Noriega because he still had his CIA paystubs in a box in his closet? There’s a whole variant just about Fidel Castro. Some of these stories are urban legends, most are fundamentally true, and yet as individual tidbits they lack a total context. If cold war is the name for the third world war that didn’t happen, what’s the name for what did?

In a recent segment, Fox News host Laura Ingraham invited former CIA director James Woolsey to talk about Russian intervention in the American election. After chatting about China and Russia’s comparative cyber capabilities, Ingraham goes off script: “Have we ever tried to meddle in other countries’ elections?” Woolsey answers quickly: “Oh, probably, but it was for the good of the system, in order to avoid communists taking over. For example, in Europe, in ’47, ’48, ’49 . . . the Greeks and the Italians . . . we, the CIA . . . ” Ingraham cuts him off, “We don’t do that now though?” She is ready to deny it to herself and the audience, but here Woolsey makes a horrible, inane sound with his mouth. The closest analog I can think of is the sound you make when you’re playing with a toddler and you pretend to eat a piece of plastic watermelon, something like: “Myum myum myum myum.” He and Ingraham both burst into laughter. “Only for a very good cause. In the interests of democracy,” he chuckles. In the late ’40s, rigged Greek elections triggered a civil war in which over 150,000 people died. It is worth noting that Woolsey is a lifelong Democrat, while Ingraham gave a Nazi salute from the podium at the 2016 Republican National Convention.

Why does Woolsey answer “Oh, probably,” when he knows, first- or second-hand, that the answer is yes, and follows up with particular examples? The non-denial hand-wave goes further than yes. It says: Come on, you know we’d do anything. And Ingraham, already submerged in that patriotic blend of knowing and declining to know, transitions smoothly from “We don’t do that now though?” to laughing out loud. The glare of the studio lights off her titanium-white teeth is bright enough to illuminate seventy years of world history.

For as long as the CIA has existed, the US government has used outlandish accusations against the agency as evidence that this country’s enemies are delusional liars. At the same time, the agency has undeniably engaged in activities that are indistinguishable from the wildest conspiracy theories. Did the CIA drop bubonic plague on North Korea? Of course not. But if we did, then of course we did. It’s a convenient jump: Between these two necessities is the range of behaviors for which people and institutions can be held responsible. It’s hard to pull off this act with a straight face, but as Woolsey demonstrates in the Fox News clip, there’s no law saying you can’t do it with a big grin. (...)

Unfortunately Morris and Wormwood are focused on ambiguity for ambiguity’s sake, when, by the end of the story, there’s very little of it left. Eric found the CIA assassination manual, which includes a description of the preferred method: knocking someone on the head and then throwing them out of a high window in a public place. He has narrowed down the reasonable explanations—at the relevant level of specificity—to one. Unlike in a normal true-crime series, however, there’s nothing to be done: As Eric explains, you can sue the government for killing someone on accident, but not for killing them on purpose. The end result of Wormwood is that the viewer’s answer to the son flips like an Ingraham switch, from “Of course the CIA didn’t murder your dad” to “Of course the CIA murdered your dad.” I hope for his sake that the latter is easier to bear.
***
Did you know the CIA killed Bob Marley?

A CIA agent named Bill Oxley confessed on his deathbed that he gave the singer a pair of Converse sneakers, one of which hid in the toe a wire tainted with cancer. When Marley put on the shoes, he pricked his toe and was infected with the disease that would lead to his death.

No, that’s wrong. There was no CIA agent named Bill Oxley, and the story of Bob Marley’s lethal shoe is somewhere between an urban legend and fake news.

But did you know the CIA almost killed Bob Marley?

In 1976, facing a potentially close election, Jamaican prime minister Michael Manley maneuvered to co-opt a public concert by Marley, turning an intentionally apolitical show into a government-sponsored rally. When Marley agreed to go through with the show anyway, many feared a reprisal from the opposition Jamaica Labour Party (JLP), whose candidate Edward Seaga was implicitly endorsed by the American government. All year accusations had been flying that the CIA was, in various ways, intentionally destabilizing Jamaica in order to get Seaga in power and move the island away from Cuba (politically) and, principally, ensure cheap American access to the island’s bauxite ore. Both the JLP and Manley’s PNP controlled groups of gunmen, but (much to America’s chagrin) the social democrat Manley controlled the security forces, remained popular with the people, and was in general a capable politician (as evidenced by the concert preparations).

On December 3, 1976—two days before the concert—Marley was wounded when three gunmen shot up his house. Witnesses to the destruction describe “immense” firepower, with four automatics firing round after round—one of the men using two at the same time. The confidential State Department wire from Kingston was sent four days later: “REGGAE STAR SHOT; MOTIVE PROBABLY POLITICAL.” There was only one reasonable political motive: destabilization, in the interest of Seaga (or, as Kingston graffiti had it, “CIAga.”) The concert was meant to bring Jamaicans together, but some forces wanted to rip them apart. Where did the assassins get their guns? The people of Jamaica knew: The CIA. (...)

The lack of a smoking gun for any particular accusation shouldn’t be a stumbling block. In the famous words of Donald Rumsfeld: “Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn’t exist.” (Rumsfeld would know; he was serving his first tour as secretary of defense during the Jamaican destabilization campaign.) The CIA exists in part to taint evidence, especially of its own activity. Even participant testimony can be discredited, as the CIA has done repeatedly (and with success) whenever former employees have spoken out, including during the Jamaican campaign. After all, in isolation each individual claim sounds—is carefully designed to sound—crazy. The circumstantial evidence, however, is harder to dismiss. If I rest a steak on my kitchen counter, leave the room, and come back to no steak and my dog licking the tile floor, I don’t need to check my door for a bandit. The CIA’s propensity for replacing frustrating foreign leaders or arming right-wing paramilitaries—especially in the western hemisphere—is no more mysterious than the dog. Refusing to put two and two together is not a mark of sophistication or fair-mindedness.

Of course the CIA shot Bob Marley. To assert that in that way is not to make a particular falsifiable claim about who delivered money to whom, who brought how many bullets where, who pulled which trigger, or who knew what when. It’s a broader claim about the circumstances under which it happened: a dense knot of information and interests and resources and bodies that was built that way on purpose, for that tangled quality, and to obtain a set of desired outcomes. The hegemonic “Grouping”—to put the State Department’s sarcastic term to honest work—ties the knot.

by Malcolm Harris, N+1 |  Read more:
Image: uncredited

The imprint of a Mitsubishi kamikaze Zero along the side of H.M.S. Sussex, 1945.

Japanese college students during their relocation to an internment camp. Sacramento, California, 1942
via:

Saturday, March 10, 2018


Ira Carter
via:

In Which I Fix My Girlfriend's Grandparents' WiFi and Am Hailed as a Conquering Hero

Lo, in the twilight days of the second year of the second decade of the third millennium did a great darkness descend over the wireless internet connectivity of the people of 276 Ferndale Street in the North-Central lands of Iowa. For many years, the gentlefolk of these lands basked in a wireless network overflowing with speed and ample internet, flowing like a river into their Compaq Presario. Many happy days did the people spend checking Hotmail and reading USAToday.com.

But then one gray morning did Internet Explorer 6 no longer load The Google. Refresh was clicked, again and again, but still did Internet Explorer 6 not load The Google. Perhaps The Google was broken, the people thought, but then The Yahoo too did not load. Nor did Hotmail. Nor USAToday.com. The land was thrown into panic. Internet Explorer 6 was minimized then maximized. The Compaq Presario was unplugged then plugged back in. The old mouse was brought out and plugged in beside the new mouse. Still, The Google did not load.

Some in the kingdom thought the cause of the darkness must be the Router. Little was known of the Router, legend told it had been installed behind the recliner long ago by a shadowy organization known as Comcast. Others in the kingdom believed it was brought by a distant cousin many feasts ago. Concluding the trouble must lie deep within the microchips, the people of 276 Ferndale Street did despair and resign themselves to defeat.

But with the dawn of the feast of Christmas did a beacon of hope manifest itself upon the inky horizon. Riding in upon a teal Ford Focus came a great warrior, a suitor of the gentlefolks’ granddaughter. Word had spread through the kingdom that this warrior worked with computers and perhaps even knew the true nature of the Router.

The people did beseech the warrior to aid them. They were a simple people, capable only of rewarding him with gratitude and a larger-than-normal serving of Jell-O salad. The warrior considered the possible battles before him. While others may have shirked the duties, forcing the good people of Ferndale Street to prostrate themselves before the tyrants of Comcast, Linksys, and Geek Squad, the warrior could not chill his heart to these depths. He accepted the quest and strode bravely across the beige shag carpet of the living room.

Deep, deep behind the recliner did the warrior crawl, over great mountains of National Geographic magazines and deep chasms of TV Guides. At last he reached a gnarled thicket of cords, a terrifying knot of gray and white and black and blue threatening to ensnare all who ventured further. The warrior charged ahead. Weaker men would have lost their minds in the madness: telephone cords plugged into Ethernet jacks, AC adapters plugged into phone jacks, a lone VGA cable wrapped in a firm knot around an Ethernet cord. But the warrior bested the thicket, ripping away the vestigial cords and swiftly untangling the deadly trap.

And at last the warrior arrived at the Router. It was a dusty black box with an array of shimmering green lights, blinking on and off, as if to taunt him to come any further. The warrior swiftly maneuvered to the rear of the router and verified what he had feared, what he had heard whispered in his ear from spirits beyond: all the cords were securely in place.

The warrior closed his eyes, summoning the power of his ancestors, long departed but watchful still. And then with the echoing beep of his digital watch, he moved with deadly speed, wrapping his battle-hardened hands around the power cord at the back of the Router.

Gripping it tightly, he pulled with all his force, dislodging the cord from the Router. The heavens roared. The earth wailed. The green lights turned off. Silently the warrior counted. One. Two. Three. And just as swiftly, the warrior plugged the cord back into the router. Great crashes of blood-red lightning boomed overhead. Murders of crows blackened the skies. The Power light came on solid green. The seas rolled. The WLAN light blinked on. The forests ignited. A dark fog rolled over the land and suddenly all was silent. The warrior stared at the Internet light, waiting, waiting. And then, as the world around him seemed all but dead, the Internet light began to blink.

by Mike Lacher, McSweeney's |  Read more:
Image: Piotr Adamowicz

McCoy Tyner Trio

The Streaming Void

Has the era of the cult film come to an end?

The defining cult film of the twenty-first century is neither a mirror held up to nature nor a hammer used to shape reality. The Room, released in 2003, is like a ninety-nine-minute episode of The Real World as performed by the inmates of the asylum of Charenton under the direction of no one. It is an incoherent broadside against evil women (or all women) and a backwards vindication of all-American male breadwinners who buy their girls roses and befriend at-risk teens. It’s a tragedy not just because it ends with a suicide, but also because sitting through it requires a robust Dionysian death drive. The Room is so bad that when you point out its idiocy, the idiocy of stating the obvious bounces back and sticks to you.

The plot is both simplistic and convoluted. The film’s writer, director, and producer, Tommy Wiseau, stars as Johnny, the only banker in America who’s also a stand-up guy. His fiancée, Lisa (Juliette Danielle), is a gold digger who spends idle days seducing Johnny’s best friend, Mark (Greg Sestero), and shopping with her manipulative mother (Carolyn Minnott). When Johnny learns about the affair, he kills himself. Fin. But first, Wiseau allows himself some inexplicable digressions. Johnny and his friends play football in tuxedos. Johnny and Mark save a teenage boy (Philip Haldiman) from a gun-wielding drug dealer (Dan Janjigian). The mom announces she has breast cancer. There are several endless, poorly blocked sex scenes. Some of this is funny; mostly, though, it’s boring.

It was Wiseau’s performance, mainly the dialogue studded with non sequiturs, that elevated The Room to its current “Citizen Kane of bad movies” status. In one famous scene, Johnny storms onto his building’s roof deck, ranting about a rumor Lisa’s spreading that he hit her, then greets his buddy with a casual, “Oh, hi, Mark.” It didn’t help that Wiseau was a creepy-looking dude in his late forties who styled himself like a romance-novel cover model and cast actors in their twenties as his peers. His accent, which is never explained in the movie, brings to mind a generic “foreigner” in an old sitcom.

Before you protest that I’m picking on a defenseless oddball, you should know how The Room got made and how it became a cult sensation. Wiseau was a wealthy man living under an assumed name, with residences in San Francisco and Los Angeles. An enthusiastic American patriot, he was cagey about his country of origin and claimed, flimsily, to have made his money flipping real estate. Sestero—Wiseau’s friend, collaborator, sometime roommate, and the co-author of The Disaster Artist, a memoir about The Room—once found a driver’s license in his friend’s name listing a date of birth thirteen years later than Wiseau was actually born.

Wiseau spent $6 million on the project—which used few locations and no complicated special effects—because its star wasted hours stumbling over simple lines and its director made dozens of expensive, absurd decisions. The Room was shot simultaneously on 35 mm film and digital video, for no good reason. Instead of filming an exterior scene in an alley outside the studio, Wiseau made his art director build an identical indoor alley set. It’s not that everyone just sat back and let a rich fool wreck himself—Wiseau ignored his crew’s advice, bullied actresses about their appearances, threw tantrums, and lied constantly. Minnott once fainted because Wiseau refused to buy an air conditioner for the set.

When the movie was finally finished, Wiseau paid for two weeks of L.A.-area screenings in order to submit it for Oscar consideration. During that run, The Room earned only $1,800 but caught the attention of film students Michael Rousselet and Scott Gairdner, who Sestero claims were drawn in by a review blurb outside the theater that read, “Watching this film is like getting stabbed in the head.” They spread the gospel of Tommy Wiseau to its rightful audience of bad-movie connoisseurs, who’ve been throwing spoons (in tribute to the living-room set’s inscrutable spoon art) at the screen during sold-out midnight showings ever since. In September 2017, The Hollywood Reporter quoted an expert who estimated The Room was earning up to $25,000 a month. This must have helped Wiseau recoup the $300,000 he spent on the strange billboard advertising the film that hung in Hollywood for five years.

The Disaster Artist has been fictionalized as a well-received buddy comedy that yielded a best actor Golden Globe for its own director and star, James Franco. As midnight screenings of The Room grew ever more popular, the new publicity secured it one day of wide theatrical release, on January 10. (The next evening, the L.A. Times published five women’s allegations of sexual misconduct against Franco, which helps to explain both his apparent amusement at Wiseau’s creepy misogyny and why he didn’t get any Oscar nominations.) But the awards-bait Tommy Wiseau is a lighter character than the mean, narcissistic borderline stalker Sestero describes, and the movie’s tale of a weirdo’s unlikely triumph rings hollow when you consider that people with $6 million of disposable income can do pretty much whatever they want. (Although we now know Wiseau is sixty-two and hails from Poland, the source of his fortune—described in Sestero’s book as a “bottomless pit”—remains a mystery.)

It makes an unfortunate sort of sense, when you consider our current political reality, that we’ve spent so much time and money celebrating the stupid, misogynistic vanity project of a self-described real estate tycoon with piles of possibly ill-gotten cash. Cult movies used to be scruffy, desperately original, and intermittently brilliant works of transgressive art that left audiences energized, and sometimes radicalized. The Room—which is bad art, but art nonetheless—does the opposite. The mirror it holds up is the underside of a dirty metal spoon; the reflection you see in it is blurry but genuine. So what’s sadder: that it set the prototype for the twenty-first-century American cult film or that it might wind up being our last enduring cult hit?

Hammer Time

Cult films once resembled Brechtian hammers more often than Shakespearean mirrors. The history of the form is as disjointed as the shaggiest entries in its filmography, but it’s possible to splice together a rough chronology. Although the phrase “cult film” wasn’t common until the seventies, the idea that movies and their stars could have cultish appeal dates back to the silent era. In the essay “Film Cults,” from 1932, the critic Harry Alan Potamkin traces the phenomenon to French Charlie Chaplin fans in the 1910s. He figures the United States had cultists of its own by 1917, when “American boys of delight,” by which he means populist critics, “began to write with seriousness, if not with critical insight, about the rudimentary film.” Potamkin cites the Marx Brothers, Mickey Mouse, and The Cabinet of Dr. Caligari as early objects of cinephilic obsession.

Over the next few decades, cults formed around stars whose personalities eclipsed their versatility as actors, from Humphrey Bogart to Judy Garland. B movies thrived at fifties drive-ins, spawning genre-loyal cults of western, sci-fi, and horror fans. Exploitation cinema—skeletally plotted collages of sex, drugs, and violence created to “exploit” captive audiences of various demographics—took off in the sixties, especially after the Production Code collapsed in 1968. Then the Hollywood wing of the youth counterculture started to make psychedelic films like Easy Rider and Head. Arthouses showed such sexually explicit, politically radical European movies as I Am Curious (Yellow) alongside the work of Fellini and Godard. Low-budget auteurs, most notably John Waters, combined all of those influences to make self-aware trash with subversive overtones.

Alejandro Jodorowsky’s mystical “acid western” El Topo wasn’t the first movie to screen at midnight, but its six-month run at New York’s Elgin Theater in 1970 and 1971 set the template for “midnight movies” as a cult ritual. About five years later, The Rocky Horror Picture Show opened a mile away at the Waverly. Interactive midnight screenings in cities around the country followed, and they’re still filling theaters after four decades.

That half a century of cult films preceded any attempt to define the category helps to explain why determining what even makes a “cult film” is so difficult. Cultists’ holiest text, Danny Peary’s Cult Movies (1981), does a solid job enumerating their most common attributes: “atypical heroes and heroines; offbeat dialogue; surprising plot resolutions; highly original storylines; brave themes, often of a sexual or political nature; ‘definitive’ performances by stars who have cult status; the novel handling of popular but stale genres.” Rocky Horror, a retro sci-fi musical that chronicles a prudish young couple’s corruption at the hands of a genderqueer alien/mad scientist who is ultimately vanquished by his own servants, meets all of these criteria.

Still, “cult classic” is an infinitely elastic term that crosses the boundaries of budget, genre, style, language, and intended audience.

by Judy Berman, The Baffler | Read more:
Image: Najeebah Al-Ghadban
[ed. I just finished reading The Disaster Artist and have absolutely zero interest in ever watching it on film, or The Room either (although one of my favorite movies of all time is Ed Wood. Go figure).]

Friday, March 9, 2018

Thoughts on the Trump-Kim Summit

I wanted to briefly comment and share some perspective on yesterday’s announcement of a Kim-Trump summit this spring. I’m going to format them as a series of propositions or individual items rather than a structured argument.

1. Despite all the bad things about President Trump’s management of U.S. foreign policy, there’s almost nothing that could be worse or more perilous than the progression of events of the last six to eight months on the Korean peninsula. There’s likely no more dangerous tension point in the world for the United States than the Korean peninsula. This is better than the alternative.

2. It is critical to understand that it is very, very hard to imagine that North Korea at any time in the foreseeable future will give up its nuclear weapons and nuclear weapons delivery capacity. President Trump does not seem to realize that. Why should they? One thing that is clear in the post-Cold War world is that states with nuclear weapons do not get attacked or overthrown by force of arms by the U.S. or anyone else. Nuclear states are the “made men” of the 21st-century global order. The North Korean state leadership may be paranoid. But they do have enemies. Critically, they are the only communist state based on a Cold War-era national division which has survived the fall of the Soviet Union. And power vis a vis the outside world is a centerpiece of the Kim family’s legitimacy within North Korea. (People I listen to who really know these issues often remind me that the Kim family’s calculus is driven not by calculations about the U.S. or South Korea but with the internal logic of regime stability.)

It is equally important to understand that North Korea probably mainly already has what it wants, a robust nuclear deterrent. They have demonstrated an ability to detonate multiple nuclear warheads and they have demonstrated the ability to launch ICBMs which can likely reach the continental United States. It’s unclear to me (and I suspect unclear to the U.S.) whether North Korea can combine those two technologies to deliver a nuclear warhead to the United States. But I don’t think we can discount that possibility. That very real possibility creates a massive deterrent already. This is important because it means that North Korea has some freedom to suspend its nuclear and missile testing for a short time or perhaps even indefinitely. Because they already have what they want. On the deterrence front, the status quo may work for them.

What Kim has done is agree to suspend testing (something he likely feels he can do from some position of strength) and meet with the U.S. as equals with no preconditions. This is a resounding confirmation of Kim’s premise and internal argument for legitimacy and power that building a nuclear arsenal will bring North Korea respect, power, and international legitimacy. Remember a key point. The Kims have been pushing for a summit with a U.S. president for 25 years. They have wanted this forever. Did President Trump know this? Or did he think it was a confirmation of his policy and genius? I strongly suspect it’s the latter.

What all of this means is that North Korea mainly has what it wants or rather what it feels it truly needs and can bargain from a position of strength to get what it wants: end of sanctions, normalization, aid, etc. It is highly unrealistic to imagine that North Korea will ever agree to denuclearize.

3. Generally speaking, you agree to a summit like this once there’s an agreement more or less in place worked out by subordinates. The meeting is what brings it all together. Trump appears inclined to approach this like a business negotiation that he’s going to knock out of the park even though he doesn’t have much understanding of what’s being discussed. He shows every sign of getting played and we’re likely to see a lengthy process of aides trying to make the best of the fait accompli he’s created.

US commentators often say that the U.S. shouldn’t hold summits like this because it confers “legitimacy” on a bad acting state. This always sounds a bit self-flattering to me. It is probably more accurate to say that it confers status and power to treat with the global great power as an equal. That is a thing of value, certainly to North Korea. They’ve wanted it for decades.

President Trump has already stated publicly that this is a negotiation, a meeting to achieve denuclearization. No one from North Korea has said that. Trump has said that. It is highly unlikely that North Korea will ever agree to that. That sets up high odds of embarrassment and disappointment. Given that the President has shown very little inclination to be briefed or take advice, the odds are even greater. So will Trump agree to things he shouldn’t? Will he feel humiliated and react belligerently? It’s a highly unpredictable encounter with an inexperienced and petulant President who will reject almost all counsel. It sounds like North Korea has gotten a really big thing in exchange for very little and has no real incentive to do more than meet, bask, say generic things and not agree to anything. Trump looks like he’s getting played big time. I suspect we will learn that he didn’t consult with any advisors before agreeing to meet.

What does this all mean? As Churchill said, jaw, jaw, jaw is better than war, war, war. We have been on an extremely dangerous trajectory. There are no good solutions, and there are probably no realistic paths to North Korea ceasing to be a nuclear power. But you could perhaps find agreements that limit the scope and reach of the nuclear and missile programs in place (perhaps even scale them back), paired with some mix of normalization and aid. Instead we start with an opening gambit in which Trump seems to be stumbling into something of a trap, guided by his self-importance and vanity rather than any realistic appraisal of the situation.

Despite being better than the alternative, it’s starting in the worst way.

by Josh Marshall, TPM |  Read more:
[ed. See also: White House Now Trying To Moonwalk Back Trump’s Summit Goof]

$1 Fentanyl Test Strip

No drug has fueled the current spike in overdose deaths more than fentanyl. The synthetic opioid claimed two thirds of the record 64,000 such fatalities in the U.S. in 2016.

Up to 100 times more potent than morphine, this compound has played a significant role in reducing Americans’ life expectancy for the second straight year. In three states—Rhode Island, New Hampshire and Massachusetts—the drug was found responsible for at least 70 percent of opioid-related deaths, in what harm-reduction specialists have described as “slow-motion slaughter.”

Jess Tilley, a harm-reduction veteran in Northampton, Mass., deploys several outreach teams to rural areas. They pass out clean syringes and the overdose-reversal drug naloxone—and refer people to detox programs. But Tilley’s most in-demand item is a $1 testing strip that accurately detects the presence of fentanyl, which dealers sometimes add to boost the strength of illicit drugs.

In 2016, when the overdose rate in western Massachusetts doubled in a year, Tilley bought a thousand fentanyl testing strips—a low-tech device that resembles a pregnancy test—from a Canadian company, and began distributing them to drug users. She says the response was immediate. As demand skyrocketed, she also began asking low-level drug dealers to test their supplies for fentanyl. Tilley says they began regularly pulling tainted supplies from the market. “When people get a tangible result, it changes behavior,” says Tilley, executive director of the nonprofit New England User's Union. “I’ve been able to track behavior trends. People say when they get results [from the strips], they’re cutting back half of what they’re doing, or they’re making sure they have someone with them when they get high.”

A study released in February reinforces Tilley’s anecdotal accounts. Conducted by Johns Hopkins and Brown universities, the study examined three technologies for testing fentanyl in street drug supplies, and looked at how such testing influenced fentanyl use behavior. The strips (based on an immunoassay, which uses the bonding of an antibody with an antigen to detect the presence of fentanyl) proved most reliable according to the study, detecting fentanyl with 100 percent accuracy in drug samples from Baltimore and 96 percent accuracy in those from Rhode Island. (...)

“It’s an important study, and it shows that the fentanyl test can be really used as a point-of-care test within harm-reduction programs,” says Jon Zibbell, a public health scientist at nonprofit research organization RTI International. “The one limitation of the test strips is that they are not quantitative—they don't tell you how much product is there.”

Zibbell, a former health scientist at the U.S. Centers for Disease Control and Prevention who was not involved in the new study, says he is working on his own fentanyl-testing research. He believes the logical next step for opioid-deluged communities would be to set up local facilities where users can have their drugs subjected to a more quantitative and qualitative analysis. “If we really want to deal with the myriad of drugs that are in these products,” he says, “we need to have labs where people can drop their stuff off and have a result in real time. That would increase knowledge, increase safety and, at the end of the day, reduce overdose fatalities.”

by Alfonso Serrano, Scientific American | Read more:
Image: wonderferret Flickr

Kimiyo Mishima, Fragment II, 1964
via:

When Winter Never Ends

DAY 1: FEB. 4, 2018

"There is timing in the whole life of the warrior, in his thriving and declining, in his harmony and discord."
--Musashi Miyamoto (circa 1584-1645), samurai and artist

Ichiro Suzuki steps out of the cold into the small restaurant that serves him dinner most nights. It's winter in Kobe, Japan, where he once played professional baseball and where he comes during the offseason to train. His wife, Yumiko, is back home in Seattle. He is here alone, free from the untidy bits of domestic life that might break his focus. Every day, he works out in a professional stadium he rents, and then he usually comes to this restaurant, which feels like a country inn transported to the city. It's tucked away on the fifth floor of a downtown building and accessible by a tiny elevator. Someone on the staff meets Ichiro at the back door so he can slip in unseen. Someone else rushes to take his coat, and Ichiro sits at a small bar with his back to the rest of the diners. Two friends join him. Inside the warm and glowing room, the chef slips on his traditional coat as he greets Ichiro in mock surprise.

"Thanks for coming again," says the chef, wearing Miami Marlins shorts.

"You guys made me wait outside," Ichiro jokes.

Ichiro is a meticulous man, held in orbit by patterns and attention to detail. This place specializes in beef tongue, slicing it thin by hand and serving it raw alongside hot cast-iron skillets. They do one thing perfectly, which appeals to Ichiro. Tonight he's got dark jeans rolled up to the calf, each leg even, and a gray T-shirt under a white button-down with a skinny tie. His hair looks darker than in some recent photos, maybe the lighting, maybe a dye job. Either way, not even a 44-year-old future Hall of Famer is immune from the insecurities and diminishments that come with time. This winter is the most insecure and diminished he's been.

He doesn't have a professional baseball contract in America or Japan. His agent, John Boggs, has called, texted and emailed teams so often that one MLB general manager now calls Boggs "the elephant hunter," because he's stalking his prey. Boggs recently sent an email to all 30 teams. Only one wrote back to decline. Ichiro hasn't spoken to Boggs once this offseason, locked in on what he and his aging body can control.

The restaurant fills up. Customers take off their shoes. At every table, signs warn that no pictures can be taken. Ichiro waves at an older couple. A producer type brings two young women over to meet him, and Ichiro makes small talk before they bow and recede. He makes some jokes about aging and turns a wine bottle in his hand to read the label. The waiters, wearing sandals and blue bandannas, sling plates of raw tongue and mugs of cold beer with ice flecks in them. The chef installs a fresh gas can and sets down a cast-iron grill in front of Ichiro.

"This is really delicious," Ichiro says.

He and his companions discuss the future, debating philosophies of business, a new world opening up. Later they turn nostalgic and talk about the past. He started training every day in the third grade and has never stopped. Once during his career he took a vacation, a trip to Milan that he hated. This past October, Marlins infielder Dee Gordon came to get something at the clubhouse after the season. He heard the crack of a bat in the cages and found Ichiro there, getting in his daily swings. "I really just hope he keeps playing," Gordon says with a chuckle, "because I don't want him to die. I believe he might die if he doesn't keep playing. What is Ichiro gonna do if he doesn't play baseball?"

Former teammates all have favorite Ichiro stories, about how he carries his bats in a custom humidor case to keep out moisture, how in the minors he'd swing the bat for 10 minutes every night before going to sleep, or wake up some mornings to swing alone in the dark from 1 to 4 a.m. All the stories make the same point: He has methodically stripped away everything from his life except baseball. Former first baseman Mike Sweeney, who got close to Ichiro in Seattle, tells one about getting a call from an old teammate who'd had an off-day in New York. You're not gonna believe this, the guy began. He'd brought along his wife and they walked through Central Park, thrilled to be together in such a serene place. Far off in the distance, at a sandlot field with an old backstop that looked leftover from the 1940s, they saw a guy playing long toss. The big leaguer did the quick math and figured the distant stranger was throwing 300 feet on the fly. Curious, he walked closer. The guy hit balls into the backstop, the powerful shotgun blast of real contact familiar to any serious player. He became impressed, so he got even closer, close enough to see.

The man working out alone in Central Park was Ichiro.

His agent and those close to him think he'll sign with a Japanese team if no offer comes from the major leagues. Television crews floated around the Ginza district in Tokyo the night before asking people what they think about Ichiro's future. Ichiro, as usual, is saying nothing. He's a cipher, keeping himself hidden, yet his yearning has never been more visible. His old team, the Orix Buffaloes, wants him back desperately -- but Japanese spring training started three days ago and Ichiro remains in Kobe. For a private man, these three days speak loudly about his need for another season in America. Over the years, he's talked about playing until he's 50 but also of his desire to "vanish" once his career ends. Those two desires exist in opposition, and if America never calls, he holds the power to make either of them real. He can sign with Orix, or he can fade away. The choice is his.

These are the things working in his life at dinner, a cold Sunday night between the Rokko mountains and Osaka Bay. Ichiro finally stands to leave. Two customers step into the aisle and bow, not the perfunctory half-bow of business associates and hotel bellmen, but a full to-the-waist bow of deep respect. Is this what the end of a great career looks like up close? Ichiro hates not playing baseball, but he might hate playing poorly even more. When he's slumping, his wife has said, she will wake up and find him crying in his sleep. The first time he went on the disabled list as a major leaguer was because of a bleeding stomach ulcer. That year, he'd led Japan to a victory in the 2009 World Baseball Classic, winning the final game with a base hit in extra innings. The stress ate a hole in his stomach. Weeks later, a Mariners team doctor told him he couldn't play on Opening Day. Ichiro refused to listen, his teammate Sweeney says. Before the team ultimately forced him to sit, the doctor tried to explain that a bleeding ulcer was a serious condition that could actually kill him.

Ichiro listened, unmoved.

"I'll take my chances," he said.

DAY 2: FEB. 5, 2018

The next morning at 11:46, Ichiro moves quickly through the Hotel Okura lobby. A hood covers his head. This 35-story waterfront tower is where he always stays, an understated gold and black lacquer palace that looks designed by the prop department from "You Only Live Twice." His green Mercedes G-Class SUV is parked directly in front of the hotel and he climbs inside. The ballpark he rents, literally an entire stadium, is over the mountains, and he takes a right onto Highway 2, then an exit onto Fusehatake. He uses his blinker to change lanes.

The temperature is 38 degrees and falling.

A waterfall in front of the hotel is frozen mid-cascade.

On the drive toward the stadium, it begins to flurry.

At the stadium, he changes into shorts and steps out onto the field. A hard wind blows. Passing clouds drop the mercury even more. Ichiro isn't here in spite of the brutal cold but because of it. Japanese culture in general -- and Ichiro in particular -- remains influenced by remnants of bushido, the code of honor and ethics governing the samurai warrior class. Suffering reveals the way to greatness. When the nation opened up to the Western world in 1868, the language didn't even have a word for games played for fun. Baseball got filtered through the prism of martial arts, and it remains a crucible rather than an escape. Japanese home run king Sadaharu Oh wrote in his memoir: "Baseball in America is a game that is born in spring and dies in autumn. In Japan it is bound to winter as the heart is to the body." (...)

DAY 3: FEB. 6, 2018

Ichiro walks through the hotel lobby at exactly the same time as the day before, 11:46 a.m., repeating his routine to the minute. He's a funny, self-deprecating guy who often makes light of his own compulsive behaviors, which extend far beyond his baseball-related rituals. He said in a Japanese interview that he once listened to the same song for a month or more. There's enlightenment in obsession, he says, because focus opens perception to many things. It boils life down.

"I'm not normal," he admitted.

He gets stuck in patterns. In the minors, sometimes his 10-minute bedtime swinging ritual stretched to two hours or more. His mind wouldn't let him stop. For years, he only ate his wife's curry before games, day after day. According to a Japanese reporter who's covered him for years, Ichiro now eats udon noodles or toasted bread. He likes the first slice toasted for 2 minutes, 30 seconds, and the second slice toasted for 1 minute, 30 seconds. (He calculates the leftover heat in the toaster.) For a while on the road he ate only cheese pizzas from California Pizza Kitchen. He prefers Jojoen barbecue sauce for his beef. Once Yumiko ran out and mixed the remaining amount with Sankoen brand sauce -- which is basically identical -- and Ichiro immediately noticed. These stories are endless and extend far beyond food. This past September, a Japanese newspaper described how he still organizes his life in five-minute blocks. Deviations can untether him. Retirement remains the biggest deviation of all. Last year, a Miami newspaperman asked what he planned on doing after baseball.

"I think I'll just die," Ichiro said.

Today Ichiro walks onto the field in Kobe, right on time, and everyone is waiting. It's uncanny. They bow when he reaches the dugout. It's a Tuesday, even colder than the day before, but the routine doesn't change: the four jogging laps across the outfield, the baserunning, the 50 soft-toss pitches, exactly 50. Except for the cold these aren't hard workouts, more like a ritualized ceremony among friends. He could choose the best players in Japan to help him but he doesn't. He doesn't need to get better at swinging a bat. What he needs, and what he seems to find in this rented stadium, is the comfort of the familiar, a place where he knows who he is supposed to be.

He is equally precise during the season, to the amusement of teammates. Dee Gordon says Ichiro even lint-rolls the floor of his locker. He cleans and polishes his glove and keeps wipes in the dugout to give his shoes a once-over before taking the field. The Yankees clubhouse manager tells a story about Ichiro's arrival to the team in 2012. Ichiro came to him with a serious matter to discuss: Someone had been in his locker. The clubhouse guy was worried something had gone missing, like jewelry or a watch, and he rushed to check.

Ichiro pointed at his bat.

Then he pointed at a spot maybe 8 inches away.

His bat had moved.

The clubhouse manager sighed in relief and told Ichiro that he'd accidentally bumped the bat while putting a clean uniform or spikes or something back into Ichiro's locker, which is one of the main roles of clubhouse attendants.

"That can't happen," Ichiro said, smiling but serious.

From that day forward, the Yankees staff didn't replace anything in his locker like they did for every other player on the team. They waited until he arrived and handed him whatever he needed for the day.

These stories are funny individually, but they feel different when taken as a whole. Like nearly all obsessive people, Ichiro finds some sort of safety in his patterns. He goes up to the plate with a goal in mind, and if he accomplishes that goal, then he is at peace for a few innings. Since his minor league days in Japan, he has devised an achievable, specific goal every day, to get a boost of validation upon completion. That's probably why he hates vacations. In the most public of occupations, he is clearly engaged in a private act of self-preservation. He's winnowed his life to only the cocoon baseball provides. His days allow for little beyond his routine, like leaving his hotel room at 11:45, or walking through the lobby a minute later, or going to the stadium day after day in the offseason -- perhaps his final offseason. Here in the freezing cold, with a 27-degree wind chill, the hooks ping off the flagpoles. The bat in his hand is 33.46 inches long. He steps into the cage and sees 78 pitches. He swings 75 times.

Up close, he looks a lot like a prisoner.

by Wright Thompson, ESPN | Read more:
Image: Kevork Djansezian/Getty

Thursday, March 8, 2018


Małgorzata Sajur, Mirror mirror
via:

The Arithmetic of Risk

A month ago, I noted that prevailing valuation extremes implied negative total returns for the S&P 500 on a 10-12 year horizon, and losses on the order of two-thirds of the market’s value over the completion of the current market cycle. With our measures of market internals constructive, on balance, we had maintained a rather neutral near-term outlook for months, despite the most extreme “overvalued, overbought, overbullish” syndromes in U.S. history. Still, I noted, “I believe that it’s essential to carry a significant safety net at present, and I’m also partial to tail-risk hedges that kick-in automatically as the market declines, rather than requiring the execution of sell orders. My impression is that the first leg down will be extremely steep, and that a subsequent bounce will encourage investors to believe the worst is over.”
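
[ed. A rough sketch of the arithmetic behind projections like this. The inputs below (valuations at roughly 3x their historical norm, 4% nominal growth, a 2% dividend yield, a 12-year horizon) are illustrative assumptions, not Hussman’s published figures; the point is only that full reversion from a 3x extreme mechanically implies a loss of about two-thirds and a negative 10-12 year return.]

def implied_cycle_loss(valuation_vs_norm):
    # Price decline implied by a full reversion to the historical valuation
    # norm, holding fundamentals roughly constant over the cycle completion.
    return 1.0 - 1.0 / valuation_vs_norm

def implied_annual_return(valuation_vs_norm, growth, dividend_yield, years):
    # Rough annualized total return if valuations revert to their norm over
    # `years` while fundamentals grow at `growth` and pay `dividend_yield`.
    price_return = (1.0 + growth) * (1.0 / valuation_vs_norm) ** (1.0 / years) - 1.0
    return price_return + dividend_yield

k = 3.0  # hypothetical: valuations at ~3x their historical norm
print(f"Implied loss over the cycle: {implied_cycle_loss(k):.0%}")                             # ~67%
print(f"Implied 12-year annual total return: {implied_annual_return(k, 0.04, 0.02, 12):.1%}")  # slightly negative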

On February 2nd, our measures of market internals clearly deteriorated, shifting market conditions to a combination of extreme valuations and unfavorable market internals, coming off of the most extremely overextended conditions we’ve ever observed in the historical data. At present, I view the market as a “broken parabola” – much the same as we observed for the Nikkei in 1990, the Nasdaq in 2000, or for those wishing a more recent example, Bitcoin since January.

Two features of the initial break from speculative bubbles are worth noting. First, the collapse of major bubbles is often preceded by the collapse of smaller bubbles representing “fringe” speculations. Those early wipeouts are canaries in the coalmine. For example, in July 2000, the Wall Street Journal ran an article titled (in the print version) “What were we THINKING?” – reflecting on the “arrogance, greed, and optimism” that had already been followed by the collapse of dot-com stocks. My favorite line: “Now we know better. Why didn’t they see it coming?” Unfortunately, that article was published at a point where the Nasdaq still had an 80% loss (not a typo) ahead of it.

Similarly, in July 2007, two Bear Stearns hedge funds heavily invested in sub-prime loans suddenly became nearly worthless. Yet that was nearly three months before the S&P 500 peaked in October, followed by a collapse that would take it down by more than 55%. (...)

As I’ve emphasized in prior market comments, valuations are the primary driver of investment returns over a 10-12 year horizon, and of prospective losses over the completion of any market cycle, but they are rather useless indications of near-term returns. What drives near-term outcomes is the psychological inclination of investors toward speculation or risk-aversion. We infer that preference from the uniformity or divergence of market internals across a broad range of securities, sectors, industries, and security-types, because when investors are inclined to speculate, they tend to be indiscriminate about it. This has been true even in the advancing half-cycle since 2009.
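
[ed. For readers unfamiliar with the term, here is a toy illustration of what “uniformity of market internals” might look like in code: a simple breadth gauge that asks whether strength is showing up across most securities at once rather than in a handful of names. This is a generic stand-in for the concept, not Hussman’s actual (proprietary) measure.]

import numpy as np

def internals_favorable(prices, window=50, threshold=0.60):
    # prices: (days, securities) array of closing prices.
    # Treat internals as "favorable" when most securities trade above their
    # own trailing `window`-day average -- i.e., strength is broad and
    # uniform rather than concentrated in a few names.
    if prices.shape[0] < window:
        raise ValueError("need at least `window` days of history")
    trailing_avg = prices[-window:].mean(axis=0)
    fraction_above = (prices[-1] > trailing_avg).mean()
    return fraction_above >= threshold

# Usage with simulated random-walk prices (250 days, 500 securities):
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(250, 500)), axis=0))
print("Favorable internals?", internals_favorable(prices))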

The only difference in recent years was that, unlike other cycles where extreme “overvalued, overbought, overbullish” features of market action reliably warned that speculation had gone too far, these syndromes proved useless in the face of zero interest rates. Evidently, once interest rates hit zero, so did the collective IQ of Wall Street. We adapted incrementally, by placing priority on the condition of market internals, over and above those overextended syndromes. Ultimately, we allowed no exceptions.

The proper valuation of long-term discounted cash flows requires the understanding that if interest rates are low because growth rates are also low, no valuation premium is “justified” by the low interest rates at all. It requires consideration of how the structural drivers of GDP growth (labor force growth and productivity) have changed over time.
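
[ed. A minimal sketch of the discounted-cash-flow point above, using the textbook Gordon growth model with hypothetical numbers: if the discount rate is low only because the growth rate is lower by the same amount, the “justified” valuation multiple doesn’t change at all.]

def fair_multiple(discount_rate, growth_rate):
    # Gordon growth model: fair price = next cash flow / (r - g), so the
    # fair price-to-cash-flow multiple is 1 / (r - g).
    spread = discount_rate - growth_rate
    if spread <= 0:
        raise ValueError("discount rate must exceed growth rate")
    return 1.0 / spread

# "Normal" world: 8% required return, 5% growth -> a multiple of ~33x.
print(fair_multiple(0.08, 0.05))
# Low-rate world where growth fell by the same amount: 5% return, 2% growth
# -> exactly the same ~33x multiple; the low rates justify no premium.
print(fair_multiple(0.05, 0.02))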

Careful, value-conscious, historically-informed analysis can serve investors well over the complete market cycle, but that analysis must also include investor psychology (which we infer from market internals). In a speculative market, it’s not the understanding of valuation, or economics, or a century of market cycles that gets you into trouble. It’s the assumption that anyone cares.

The important point is this: Extreme valuations are born not of careful calculation, thoughtful estimation of long-term discounted cash flows, or evidence-based reasoning. They are born of investor psychology, self-reinforcing speculation, and verbal arguments that need not, and often do not, hold up under the weight of historical data. Once investor preferences shift from speculation toward risk-aversion, extreme valuations should not be ignored, and can suddenly matter to their full extent. It appears that the financial markets may have reached that point.

A second feature of the initial break from a speculative bubble, which I observed last month, is that the first leg down tends to be extremely steep, and a subsequent bounce encourages investors to believe that the worst is over. That feature is clearly evident when we examine prior financial bubbles across history. Dr. Jean-Paul Rodrigue describes an idealized bubble as a series of phases, including that sort of recovery from the initial break, which he describes as a “bull trap.”


I continue to expect the S&P 500 to lose about two-thirds of its value over the completion of the current market cycle. With market internals now unfavorable, following the most offensive “overvalued, overbought, overbullish” combination of market conditions on record, our market outlook has shifted to hard-negative. Rather than forecasting how long present conditions may persist, I believe it’s enough to align ourselves with prevailing market conditions, and shift our outlook as those conditions shift. That leaves us open to the possibility that market action will again recruit the kind of uniformity that would signal that investors have adopted a fresh willingness to speculate. We’ll respond to those changes as they arrive (ideally following a material retreat in valuations). For now, buckle up.

by John P. Hussman, Ph.D. Hussman Funds |  Read more:
Image: Dr. Jean-Paul Rodrigue

Ralph Goings, Still Life With Spoons
via:

Socialism as a Set of Principles

Nearly half of millennials describe themselves as sympathetic to “socialism” and not terribly fond of “capitalism.” Yet if you asked each of them to explain the mechanics of how a socialist economy would function, I doubt many would have especially detailed answers. Jacobin magazine’s ABCs of Socialism consists of answers to skeptical questions about socialism (e.g. “Don’t the rich deserve their money?” “Is socialism pacifist?” “Will socialism be boring?”) but notably “How will socialism actually work?” is not among them. With twelve million Democratic primary voters having cast ballots for a self-described “socialist,” isn’t it concerning that nobody has explained in detail how socialism will “work”? Embracing a new economic system without having a blueprint seems like it could only ever lead to something like Venezuela’s collapse.

I think this criticism seems very powerful, and comes from an understandable instinct. But it has a mistaken view of what socialism actually means to the people who use the label. In the 21st century, for many of its adherents socialism is not describing a particular set of economic rules and government policies, some clearly-defined “system” that must be implemented according to a plan. Instead, it describes a set of principles that we want the economic and political system to conform to. Bringing the world into harmony with these principles will require experimentation, but that lack of rigidity is an asset. Because 20th century “socialist” states attempted vast social engineering projects, there is a tendency to think of “a socialist economy” in engineering terms. Capitalism is an engine, with its parts all working together to produce an effect. Socialists come along and say that the engine should be designed entirely differently, with a totally different set of rules in order to produce better effects. If this is what we’re talking about when we’re talking about “capitalism versus socialism,” then it’s completely right to ask for an explanation of how the proposed alternative works. We’d be very suspicious of someone who said they had reinvented the combustion engine but refused to tell us how the alternative would work and insisted that before trying it we destroy all of our combustion engines.

But this is a poor way of thinking about what is being advocated by socialists. Books are a better analogy. We have, in our hands, a badly-written manuscript and are trying to edit it into a well-written manuscript. But there’s no blueprint for the well-written manuscript. We create it through a process. Delete a passage here, insert one there, move this around, move that around. And in doing this, we follow a set of principles: we want it to flow well, we want the reader not to get confused, we want all our sentences to be forceful and precise. Those principles aren’t handed down from on high, and there are lots of different ways we could write the book that would produce something satisfactory. But asking at the beginning of the process “Well, what will the finished product look like?” makes no sense. If we could present a blueprint for the finished book, we wouldn’t need a blueprint because we would already have finished the book.

Socialism can be conceived of similarly: socialists are trying to make society better, so that its operations meet a particular set of ideal criteria. Here, I want to quote Leszek Kołakowski, the Polish scholar of Marxism, who was a vicious opponent of communist governments but drew an important distinction between socialism as a system and an ideal:

[It would be] a pity if the collapse of communist socialism resulted in the demise of the socialist tradition as a whole and the triumph of Social Darwinism as the dominant ideology….Fraternity under compulsion is the most malignant idea devised in modern times… This is no reason, however, to scrap the idea of human fraternity. If it is not something that can be effectively achieved by means of social engineering, it is useful as a statement of goals. The socialist idea is dead as a project for an ‘alternative society.’ But as a statement of solidarity with the underdog and the oppressed, as a motivation to oppose Social Darwinism, as a light that keeps before our eyes something higher than competition and greed—for all these reasons, socialism—the ideal, not the system—still has its uses.

By his last years, Kołakowski was bitterly disenchanted by the left to an extreme I find off-putting. But even he offered high praise for the great socialists of early 20th century Europe, and the ideals they embodied. They “wanted not only equal, universal and obligatory education, a social health service, progressive taxation and religious tolerance, but also secular education, the abolition of national and racial discrimination, the equality of women, freedom of the press and of assembly, the legal regulation of labour conditions, and a social security system. They fought against militarism and chauvinism [and] embodied what was best in European political life.”

Here we begin to see what socialist principles actually involve. How can they best be summarized? Kołakowski suggests it’s “fraternity,” but that seems too limited and too squishy. It does start there, though: with a feeling of connectedness and compassion for other human beings. “We are here to help each other through this thing, whatever it is,” as Kurt Vonnegut said. Many socialists begin with that feeling of “solidarity” with people whose lives are needlessly hard and painful, and a sense that we are all in this together. (...)

This is what leads socialists toward the idea about “collective ownership of the means of production,” which is often cited as the core tenet of socialism. The reason socialists talk about “ownership” so much is that “ownership” refers to decision-making power. If I own a book, it means I am the one who gets to decide what happens to it. I can write in it, sell it, or throw it away. The instinct that “people should be able to shape their own destinies” leads socialists to endorse what I think is the core meaning of “democracy,” namely the idea that people should have decision-making power over those things that affect them. If we think people’s choices should be valued, then they should be included in decision-making that affects them. (...)

There are plenty of different ideas for how to make the world more democratic, to ensure that people’s lives aren’t being controlled by mysterious private or state forces that they have no control over. Socialists have a variety of proposals for economic democracy, such as the Universal Basic Income, worker cooperatives, and mandating profit-sharing. But the democratic principle isn’t just about economics. It’s also what turns socialists into feminists and anti-racists. Sexism and racism are outside forces that are acting on people against their will, making their lives more difficult on account of demographic characteristics that they cannot choose. The principle “everyone should have the most fulfilling possible life” means that women shouldn’t be harassed at work, transgender teens shouldn’t be bullied, and people of color shouldn’t face unique structural disadvantages.

One may think that by identifying ideas like “giving everyone a maximally fulfilling life” as core principles, I am draining socialism of meaning. After all, who doesn’t want people to have fulfilling lives? If socialism just means “things should be good,” everyone is a socialist. But that’s part of the point: socialism tries to apply values that are essentially universal. What differentiates the socialist and the non-socialist is the “apply” part. Everyone talks about democracy and freedom and fulfillment, but socialists are concerned to figure out what those things would really entail, and ensure that they are meaningful components of everybody’s lives, rather than only existing for some. The United States is “democratic,” and people are “free.” But when the public’s views don’t affect the government’s policies, and when people can’t get vacation time to go and take advantage of their freedom, these concepts are not being fully realized. (...)

The millennial embrace of socialism, then, does not mean that millennials are trying to implement some complicated new economic system that they do not understand. It means that they measure any economic system by the degree to which it is humane and democratic, and they are angered by the degree to which our current one fails people. It means that they reject selfishness and believe in solidarity. And it means that they are determined to help each other build something better, whatever that may be.

by Nathan J. Robinson, Current Affairs |  Read more:
Image: uncredited