Tuesday, November 1, 2016

Sing to Me

It is strange to think of karaoke as an invention. The practice predates its facilitating devices, and the concept transcends its practice: Karaoke is the hobby of being a star; it is an adjuvant for the truest you an audience could handle.

Karaoke does have a parent. In the late 1960s, Daisuke Inoue was working as a club keyboardist, accompanying drinkers who wanted to belt out a song. “Out of the 108 club musicians in Kobe, I was the worst,” he told Time. One client, the head of a steel company, asked Inoue to join him at a hot springs resort where he’d hoped to entertain business associates. Inoue declined, but instead recorded a backing tape tailored to the client’s erratic singing style. It was a success. Intuiting a demand, Inoue built a jukebox-like device fitted with a car stereo and a microphone, and leased an initial batch to bars across the city in 1971. “I’m not an inventor,” he said in an interview. “I simply put things that already exist together, which is completely different.” He never patented the device (in 1983, a Filipino inventor named Roberto del Rosario acquired the patent for his own sing-along system) though years later he patented a solution to ward cockroaches and rats away from the wiring.

In 1999, Time named Inoue one of the “most influential Asians” of the last century; in 2004, he received the Ig Nobel prize, a semiserious Nobel-parody honor by true laureates at Harvard University. At the ceremony, Inoue ended his acceptance speech with a few bars of the Coke jingle “I’d Like to Teach the World to Sing.” The crowd gave him a standing ovation, and four laureates serenaded him with “Can’t Take My Eyes Off You” in the style of Andy Williams. “I was nominated [as] the inventor of karaoke, which teaches people to bear the awful singing of ordinary citizens, and enjoy it anyway,” Inoue wrote in an essay. “That is ‘genuine peace,’ they told me.”

“While karaoke might have originated in Japan, it has certainly become global,” write Xun Zhou and Francesca Tarocco in Karaoke: The Global Phenomenon. “Each country has appropriated karaoke into its own existing culture.” My focus is limited to just a slice of North America, where karaoke has gone from a waggish punchline — an item on the list of Things We All Hate, according to late-night hosts and birthday cards — to an “ironic” pastime, to just a thing people like to do, in any number of forms. You can rent a box, or perform for a crowded bar; you can do hip-hop karaoke, metal karaoke, porno karaoke, or, in Portland, “puppet karaoke.” For the ethnography Karaoke Idols: Popular Music and the Performance of Identity, Dr. Kevin Brown spent two years in the late aughts frequenting a karaoke bar near Denver called Capone’s: “a place where the white-collar collides with the blue-collar, the straight mingle with the gay, and people of all colors drink their beer and whiskey side by side.” In university, a friend of mine took a volunteer slot hosting karaoke for inpatients at a mental health facility downtown. Years later I visited a friend at the same center on what happened to be karaoke night; we sang “It’s My Party.”

When I was growing up in Toronto, karaoke was reviled for reasons that now seem crass: There is nothing more nobodyish than pretending you’re somebody. Canada is an emphatically modest country, and the ’90s were a less extroverted age: Public attitudes were more condemnatory of those who showed themselves without seeming to have earned the right. The ’90s were less empathetic, too, and karaoke lays bare the need to be seen, and accepted; such needs are universal, and repulsive. We live now, you could say, in a karaoke age, in which you’re encouraged to show yourself, through a range of creative presets. Participating online implies that you’re worthy of being perceived, that some spark of you deserves to exist in public. Instagram is as public as a painting.

Karaoke is a social medium, a vector for a unit of your sensibility, just as mediated as any other, although it demands different materials. Twitter calls for wit, Instagram for aesthetic, but karaoke is supposed to present your nudest self.

by Alexandra Molotkow, Real Life |  Read more:
Image: Farah Al-Qasimi

2016: A Liberal Odyssey

His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The Angel would like to stay, awaken the dead and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.

~ Walter Benjamin - Angel of History

In a heart-wrenching letter published in the New York Times, U.S.-born journalist Michael Luo described his family’s recent encounter with the kind of bigoted outburst—culminating with the admonition that Luo’s family should “go back to China”—that, sadly, is quite common for Asian-Americans across the country. Indeed, for many people of varying races, ethnicities, sexualities, genders, and abilities, Luo’s letter trembled with darkly familiar echoes of discrimination, fear, hatred, and intolerance. Soon after, Luo took to Twitter to invite other Asian-Americans to share their experiences with racism using the hashtag #ThisIs2016. What really stood out in the tweeted testimonies was how frequent these experiences seem to be, how familiar they are to so many.

What is also strikingly familiar, though, is the premise of the hashtag #ThisIs2016. This exclamation has become a hallmark of liberal discourse, popping up in conversations, pundit patter, social media rants, and even in the titles of articles themselves (“It’s 2016, And Even the Dictionary Is Full of Sexist Disses,” “It’s 2016: Time for cargo shorts to give up and die,” etc.). You’ll also spot it in tweets from faux-authoritative web portals like Vox—“It’s 2016. Why is anyone still keeping elephants in circuses?”—to Hillary Clinton— “It’s 2016. Women deserve equal pay.” Whether we’re talking about racism, sexism, homophobia, or some other abhorrent trace of backwardness, it’s become customary to pepper our stock responses with this ritual affirmation of what progress should look like at this advanced stage of history.

Everyone seems surprised that, in the year 2016, intolerance still exists, yet flying cars do not. And people’s genuine shock that such dark remnants of our past continue to stain our progressive present exposes their deep faith that “2016” is the bearer of some liberal-minded saving grace: the grace of history and progress that will (or should) just make things better. But I think it’s time we address what 2016 really means: jack shit. And there’s a special poison running through the belief that it means anything more.

From the beginning, Donald Trump’s vision to “Make America Great Again” has peddled a dangerously tunnel-visioned nostalgia while appealing to the anxieties and discomforts of people who find themselves adrift in a crumbling now that no longer cares for or about them like it used to. Many spot-on and necessary critiques have been quick to connect the dots between Trump’s nostalgic wet dream of bygone glory and the kind of racism, xenophobia, misogyny, etc. that’s fueled his campaign from the beginning. Such criticism rightly points out that Trump supporters who yearn for the good old days are, in fact, longing for a time when “the good life” was actually built on the oppressive exclusion of non-whites, women, LGBTQ people, and others. Trump freely includes such excluded “others” in his list of scapegoats for people’s current anxieties, and the past he and his supporters long for is dangerously fetishized as a place where such scapegoats would either lose favor in the dominant culture or be eliminated entirely.

However, in railing against the backward desires that spur the claim on history Trump and his supporters are making, we can often blind ourselves to the fallacies of our own myopic historical vision. That’s how ideology works, after all: we don’t notice how it skews our own perceptions. Like death, it’s always something that afflicts someone else. But, while Trump and many of his supporters may fetishize a past that is deeply retrograde, liberals and progressives have also demonstrated a troubling tendency to fetishize a future that they presume is on their side. There’s something peculiarly telling about this kind of progress fetishism, which has been conscripted as ideology-of-first-resort for Clintonite New Democrats.

Whether we’re talking about the sleek glitz of technological advancement or the triumph of the values of liberal humanism, the teleological view of historical progress is counterproductive and potentially dangerous. When we’re stuck in the slow hell of rush-hour traffic, for instance, we may catch ourselves grumpily wondering why the hell we can’t teleport yet. But there’s an implied consumerist asterisk next to the “we.” What we mean is, “why haven’t those eggheads in lab coats figured this stuff out yet so the rest of us can live in the future we were promised?” While imposing on the future a specific trajectory, custom-fitted to what we imagine technological progress is supposed to give us, we also entrust the production of that future to experts who, we assume, want the same things we do. This is hazardously akin to the platitudinous futurism of Clintonism, which has smuggled in technocratic neoliberalism and a globally expansive military-industrial complex under the mantle of progressive wishful thinking. (...)

In 2016, liberal values enjoy a relatively dominant place in popular culture—from the Modern Family melting pot to the Hillary Clinton campaign’s multicultural basket of deployables. The world reflected back to us through various media is one that has generally accepted the familiar values of equality, tolerance, respect for difference, a very low-grade critique of corporate greed, etc. The culture wars are over, and we on the leftish side of things have reportedly “won”. . . which is probably why the rise of Trump was so shocking for many.

But Trumpism, among many other deviations from the scripted finale to history, didn’t come from nowhere, and it won’t just go away. One of the direst products of the 2016 election has been the stubborn refusal of liberals and progressives to reevaluate our unspoken presumption that the cultural ubiquity of our “shared liberal values” meant that there was no longer any need to defend or redefine those values. Trumpism should alert liberals that there is, and always will be, infinitely more work to do. Instead, it has only assured liberals of their infinite righteousness in comparison, confirming their conviction that something must be fundamentally outdated “in the hearts” of this “other side” whose followers have chosen to stand on the “wrong side of history.”

Our bizarre obsession with being on the “right side of history” has become another weapon of the “smug style” in American liberalism. Liberal smugness involves more than condescendingly talking down to others who don’t “get it,” reducing the complicated tissue of their souls to the ignominious personal traits of racism, misogyny, etc. Liberal smugness is a posture that permits us to simply take our own righteousness for granted—to the point that we don’t even see the need to defend our positions. Rather than confront the darker sides of our own beliefs, or face head-on the counterclaims on history that other political actors are making, we remain cocooned in our social echo chambers filled with people who already agree with us. We also find affirmation in the broader echo chamber of popular culture, whose dominance further reassures us of the wrongness of the beliefs of others. This is 2016; look around you. Stay woke.

To be on the right side of anything is, as everyone knows, a matter of perspective. In reserving the vanguard spot in the historical drama for ourselves, we’re confidently presuming to know what the perspective of posterity will be. But the more obnoxious aspect of this concern for “being on the right side of history” is its promotion of a singularly self-involved relationship with history itself. History is no longer the people’s furnace of cultural creation and political invention, producing a future whose shape has not yet been hammered out. Rather, in this rigidly schematized vision, history is reduced to the role of set template—divided down the middle with a “right” and “wrong” side for us to choose from—that will bear witness to and validate our personal choice. Is this not just a kind of eschatology? Are we in heaven yet?

by Maximillian Alvarez, The Baffler |  Read more:
Image: NY Post, Paul Klee Angelus Novus

The End of Adolescence

Adolescence as an idea and as an experience grew out of the more general elevation of childhood as an ideal throughout the Western world. By the closing decades of the 19th century, nations defined the quality of their cultures by the treatment of their children. As Julia Lathrop, the first director of the United States Children’s Bureau, the first and only agency exclusively devoted to the wellbeing of children, observed in its second annual report, children’s welfare ‘tests the public spirit and democracy of a community’.

Progressive societies cared for their children by emphasising play and schooling; parents were expected to shelter and protect their children’s innocence by keeping them from paid work and the wrong kinds of knowledge; while health, protection and education became the governing principles of child life. These institutional developments were accompanied by a new children’s literature that elevated children’s fantasy and dwelled on its special qualities. The stories of Beatrix Potter, L Frank Baum and Lewis Carroll celebrated the wonderland of childhood through pastoral imagining and lands of oz.

The United States went further. In addition to the conventional scope of childhood from birth through to age 12 – a period when children’s dependency was widely taken for granted – Americans moved the goalposts of childhood as a democratic ideal by extending protections to cover the teen years. The reasons for this embrace of ‘adolescence’ are numerous. As the US economy grew, it relied on a complex immigrant population whose young people were potentially problematic as workers and citizens. To protect them from degrading work, and society from the problems that they could create by idling on the streets, the sheltering umbrella of adolescence became a means to extend their socialisation as children into later years. The concept of adolescence also stimulated Americans to create institutions that could guide adolescents during this later period of childhood; and, as they did so, adolescence became a potent category.

With the concept of adolescence, American parents, especially those in the middle class, could predict the staging of their children’s maturation. But adolescence soon became a vision of normal development that was applicable to all youth – its bridging character (connecting childhood and adulthood) giving young Americans a structured way to prepare for mating and work. In the 21st century, the bridge is sagging at both ends as the innocence of childhood has become more difficult to protect, and adulthood is long delayed. While adolescence once helped frame many matters regarding the teen years, it is no longer an adequate way to understand what is happening to the youth population. And it no longer offers a roadmap for how they can be expected to mature.

In 1904, the psychologist G Stanley Hall enshrined the term ‘adolescence’ in two tomes dense with physiological, psychological and behavioural descriptions that were self-consciously ‘scientific’. These became the touchstone of most discussions about adolescence for the next several decades. As a visible eruption toward adulthood, puberty is recognised in all societies as a turning point, since it marks new strength in the individual’s body and the manifestation of sexual energy. But in the US, it became the basis for elaborate and consequential intellectual reflections, and for the creation of new institutions that came to define adolescence. Though the physical expression of puberty is often associated with a ritual process, there was nothing in puberty that required the particular cultural practices that grew around it in the US as the century progressed. As the anthropologist Margaret Mead argued in the 1920s, American adolescence was a product of the particular drives of American life.

Rather than simply being a turning point leading to sexual maturity and a sign of adulthood, Hall proposed that adolescence was a critical stage of development with a variety of special attributes all of its own. Dorothy Ross, Hall’s biographer, describes him as drawing on earlier romantic notions when he portrayed adolescents as spiritual and dreamy as well as full of unfocused energy. But he also associated them with the new science of evolution that early in the century enveloped a variety of theoretical perspectives in a scientific aura. Hall believed that adolescence mirrored a critical stage in the history of human development, through which human ancestors moved as they developed their full capacities. In this way, he endowed adolescence with great significance since it connected the individual life course to larger evolutionary purposes: at once a personal transition and an expression of human history, adolescence became an elemental experience. Rather than a short juncture, it was a highway of multiple transformations.

Hall’s book would provide intellectual cover for the two most significant institutions that Americans were creating for adolescents: the juvenile court and the democratic high school. (...)

On a much grander scale than the juvenile court, the publicly financed comprehensive high school became possibly the most distinctly American invention of the 20th century. As a democratic institution for all, not just a select few who had previously attended academies, it incorporated the visions of adolescence as a critically important period of personal development, and eventually came to define that period of life for the majority of Americans. In its creation, educators opened doors of educational opportunity while supervising rambunctious young people in an environment that was social as well as instructional. As the influential educational reformer Elbert Fretwell noted in 1931 about the growing extra-curricular realm that was essential to the new vision of US secondary schooling: ‘There must be joy, zest, active, positive, creative activity, and a faith that right is mighty and that it will prevail.’

In order to accommodate the needs of a great variety of students – vastly compounded by the many different sources of immigration – the US high school moved rapidly from being the site of education in subjects such as algebra and Latin (the basis for most instruction in the 19th century US and elsewhere in the West) to becoming an institution where adolescents could learn vocational and business skills, and join sports teams, musical productions, language clubs and cooking classes. In Extra-Curricular Activities in the High School (1925), Charles R Foster concluded: ‘Instead of frowning, as in olden days, upon the desire of the young to act upon their own initiative, we have learned that only upon these varied instincts can be laid the surest basis for healthy growth … The school democracy must be animated by the spirit of cooperation, the spirit of freely working together for the positive good of the whole.’ School reformers set out to use the ‘cooperative’ spirit of peer groups and the diverse interests and energy of individuals to create the comprehensive US high school of the 20th century.

Educators opened wide the doors of the high school because they were intent on keeping students there for as long as possible. Eager to engage the attention of immigrant youth, urban high schools made many adjustments to the curriculum as well as to the social environment. Because second-generation immigrants needed to learn a new way of life, keeping them in school longer was one of the major aims of the transformed high school. They succeeded beyond all possible expectations. By the early 1930s, half of all US youth between 14 and 17 was in school; by 1940, it was 79 per cent: astonishing figures when compared with the single-digit attendance at more elite and academically focused institutions in the rest of the Western world.

High schools brought young people together into an adolescent world that helped to obscure where they came from and emphasised who they were as an age group, increasingly known as teenagers. It was in the high schools of the US that adolescence found its home. And while extended schooling increased their dependence for longer periods of times, it was also here that young people created their own new culture. While its content – its clothing styles, leisure habits and lingo – would change over time, the common culture of teenagers provided the basic vocabulary that young people everywhere could recognise and identify with. Whether soda-fountain dates or school hops, jazz or rock’n’roll, rolled stockings or bobby sox, ponytails or duck-tail hairstyles – it defined the commonalities and cohesiveness of youth. By mid-century, high school was understood to be a ‘normal’ experience and the great majority of youth (of all backgrounds) were graduating from high schools, now a basic part of growing up in the US. It was ‘closer to the core of the American experience than anything else I can think of’, as the novelist Kurt Vonnegut concluded in an article for Esquire in 1970.

With their distinctive music and clothing styles, US adolescents had also become the envy of young people around the world, according to Jon Savage in Teenage (2007). They embodied not just a stage of life, but a state of privilege – the privilege not to work, the right to be supported for long periods of study, the possibility of future success. US adolescents basked in the wealth of their society, while for the rest of the world the US promise was personified by its adolescents. Neither the country’s high schools nor its adolescents were easily imitated elsewhere because both rested on the unique prosperity of the 20th-century US economy and the country’s growing cultural power. It was an expensive proposition that was supported even at the depth of the Great Depression. But it paid off in the skills of a population who graduated from school, not educated in Latin and Greek texts (the norm in lycées and gymnasia elsewhere), but where the majority were sufficiently proficient in mathematics, English and rudimentary science to make for an unusually literate and skilled population.

by Paula S Fass, Aeon | Read more:
Image: Bruce Dale/National Geographic/Getty

Monday, October 31, 2016

The Waterboys

Billionaire Governor Taxed the Rich and Increased the Minimum Wage — Now, His State’s Economy Is One of the Best in the Country

[ed. Sorry for all the link bait (Huffington Post, after all...) but this really is an achievement worth noting.]

The next time your right-wing family member or former high school classmate posts a status update or tweet about how taxing the rich or increasing workers’ wages kills jobs and makes businesses leave the state, I want you to send them this article.

When he took office in January of 2011, Minnesota governor Mark Dayton inherited a $6.2 billion budget deficit and a 7 percent unemployment rate from his predecessor, Tim Pawlenty, the soon-forgotten Republican candidate for the presidency who called himself Minnesota’s first true fiscally-conservative governor in modern history. Pawlenty prided himself on never raising state taxes — the most he ever did to generate new revenue was increase the tax on cigarettes by 75 cents a pack. Between 2003 and late 2010, when Pawlenty was at the head of Minnesota’s state government, he managed to add only 6,200 more jobs.

During his first four years in office, Gov. Dayton raised the state income tax from 7.85 to 9.85 percent on individuals earning over $150,000, and on couples earning over $250,000 when filing jointly — a tax increase of $2.1 billion. He’s also agreed to raise Minnesota’s minimum wage to $9.50 an hour by 2018, and passed a state law guaranteeing equal pay for women. Republicans like state representative Mark Uglem warned against Gov. Dayton’s tax increases, saying, “The job creators, the big corporations, the small corporations, they will leave. It’s all dollars and sense to them.” The conservative friend or family member you shared this article with would probably say the same if their governor tried something like this. But like Uglem, they would be proven wrong.

Between 2011 and 2015, Gov. Dayton added 172,000 new jobs to Minnesota’s economy — that’s 165,800 more jobs in Dayton’s first term than Pawlenty added in both of his terms combined. Even though Minnesota’s top income tax rate is the fourth highest in the country, it has the fifth lowest unemployment rate in the country at 3.6 percent. According to 2012-2013 U.S. census figures, Minnesotans had a median income that was $10,000 larger than the U.S. average, and their median income is still $8,000 more than the U.S. average today.

By late 2013, Minnesota’s private sector job growth exceeded pre-recession levels, and the state’s economy was the fifth fastest-growing in the United States. Forbes even ranked Minnesota the ninth best state for business (Scott Walker’s “Open For Business” Wisconsin came in at a distant #32 on the same list). Despite the fearmongering over businesses fleeing from Dayton’s tax cuts, 6,230 more Minnesotans filed in the top income tax bracket in 2013, just one year after Dayton’s tax increases went through. As of January 2015, Minnesota has a $1 billion budget surplus, and Gov. Dayton has pledged to reinvest more than one third of that money into public schools. And according to Gallup, Minnesota’s economic confidence is higher than any other state.

Gov. Dayton didn’t accomplish all of these reforms by shrewdly manipulating people — this article describes Dayton’s astonishing lack of charisma and articulateness. He isn’t a class warrior driven by a desire to get back at the 1 percent — Dayton is a billionaire heir to the Target fortune. It wasn’t just a majority in the legislature that forced him to do it — Dayton had to work with a Republican-controlled legislature for his first two years in office. And unlike his Republican neighbor to the east, Gov. Dayton didn’t assert his will over an unwilling populace by creating obstacles between the people and the vote — Dayton actually created an online voter registration system, making it easier than ever for people to register to vote.

by C. Robert Gibson, Huffington Post | Read more:
Image: Glenn Stubbe, Star Tribune

Renato Guttuso, La Vuccirìa 1974
via:

Maciek Pozoga
via:

AI Persuasion Experiment

1: What is superintelligence?

A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs.

1.1: Sounds a lot like science fiction. Do people think about this in the real world?

Yes. Two years ago, Google bought artificial intelligence startup DeepMind for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Many other science and technology leaders agree. Astrophysicist Stephen Hawking says that superintelligence “could spell the end of the human race.” Tech billionaire Bill Gates describes himself as “in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”. SpaceX/Tesla CEO Elon Musk calls superintelligence “our greatest existential threat” and donated $10 million from his personal fortune to study the danger. Stuart Russell, Professor of Computer Science at Berkeley and world-famous AI expert, warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern.

Professor Nick Bostrom is the director of Oxford’s Future of Humanity Institute, tasked with anticipating and preventing threats to human civilization. He has been studying the risks of artificial intelligence for twenty years. The explanations below are loosely adapted from his 2014 book Superintelligence, and divided into three parts addressing three major questions. First, why is superintelligence a topic of concern? Second, what is a “hard takeoff” and how does it impact our concern about superintelligence? Third, what measures can we take to make superintelligence safe and beneficial for humanity?

2: AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?

Maybe. It’s true that although AI has had some recent successes – like DeepMind’s newest creation AlphaGo defeating the human Go champion in April – it still has nothing like humans’ flexible, cross-domain intelligence. No AI in the world can pass a first-grade reading comprehension test. Facebook’s Andrew Ng compares worrying about superintelligence to “worrying about overpopulation on Mars” – a problem for the far future, if at all.

But this apparent safety might be illusory. A survey of leading AI scientists show that on average they expect human-level AI as early as 2040, with above-human-level AI following shortly after. And many researchers warn of a possible “fast takeoff” – a point around human-level AI where progress reaches a critical mass and then accelerates rapidly and unpredictably.

2.1: What do you mean by “fast takeoff”?

A slow takeoff is a situation in which AI goes from infrahuman to human to superhuman intelligence very gradually. For example, imagine an augmented “IQ” scale (THIS IS NOT HOW IQ ACTUALLY WORKS – JUST AN EXAMPLE) where rats weigh in at 10, chimps at 30, the village idiot at 60, average humans at 100, and Einstein at 200. And suppose that as technology advances, computers gain two points on this scale per year. So if they start out as smart as rats in 2020, they’ll be as smart as chimps in 2035, as smart as the village idiot in 2050, as smart as average humans in 2070, and as smart as Einstein in 2120. By 2190, they’ll be IQ 340, as far beyond Einstein as Einstein is beyond a village idiot.

In this scenario progress is gradual and manageable. By 2050, we will have long since noticed the trend and predicted we have 20 years until average-human-level intelligence. Once AIs reach average-human-level intelligence, we will have fifty years during which some of us are still smarter than they are, years in which we can work with them as equals, test and retest their programming, and build institutions that promote cooperation. Even though the AIs of 2190 may qualify as “superintelligent”, it will have been long-expected and there would be little point in planning now when the people of 2070 will have so many more resources to plan with.

A moderate takeoff is a situation in which AI goes from infrahuman to human to superhuman relatively quickly. For example, imagine that in 2020 AIs are much like those of today – good at a few simple games, but without clear domain-general intelligence or “common sense”. From 2020 to 2050, AIs demonstrate some academically interesting gains on specific problems, and become better at tasks like machine translation and self-driving cars, and by 2047 there are some that seem to display some vaguely human-like abilities at the level of a young child. By late 2065, they are still less intelligent than a smart human adult. By 2066, they are far smarter than Einstein.

A fast takeoff scenario is one in which computers go even faster than this, perhaps moving from infrahuman to human to superhuman in only days or weeks.

2.1.1: Why might we expect a moderate takeoff?

Because this is the history of computer Go, with fifty years added on to each date. In 1997, the best computer Go program in the world, Handtalk, won NT$250,000 for performing a previously impossible feat – beating an 11 year old child (with an 11-stone handicap penalizing the child and favoring the computer!) As late as September 2015, no computer had ever beaten any professional Go player in a fair game. Then in March 2016, a Go program beat 18-time world champion Lee Sedol 4-1 in a five game match. Go programs had gone from “dumber than children” to “smarter than any human in the world” in eighteen years, and “from never won a professional game” to “overwhelming world champion” in six months.

The slow takeoff scenario mentioned above is loading the dice. It theorizes a timeline where computers took fifteen years to go from “rat” to “chimp”, but also took thirty-five years to go from “chimp” to “average human” and fifty years to go from “average human” to “Einstein”. But from an evolutionary perspective this is ridiculous. It took about fifty million years (and major redesigns in several brain structures!) to go from the first rat-like creatures to chimps. But it only took about five million years (and very minor changes in brain structure) to go from chimps to humans. And going from the average human to Einstein didn’t even require evolutionary work – it’s just the result of random variation in the existing structures!

So maybe our hypothetical IQ scale above is off. If we took an evolutionary and neuroscientific perspective, it would look more like flatworms at 10, rats at 30, chimps at 60, the village idiot at 90, the average human at 98, and Einstein at 100.

Suppose that we start out, again, with computers as smart as rats in 2020. Now we get still get computers as smart as chimps in 2035. And we still get computers as smart as the village idiot in 2050. But now we get computers as smart as the average human in 2054, and computers as smart as Einstein in 2055. By 2060, we’re getting the superintelligences as far beyond Einstein as Einstein is beyond a village idiot.

This offers a much shorter time window to react to AI developments. In the slow takeoff scenario, we figured we could wait until computers were as smart as humans before we had to start thinking about this; after all, that still gave us fifty years before computers were even as smart as Einstein. But in the moderate takeoff scenario, it gives us one year until Einstein and six years until superintelligence. That’s starting to look like not enough time to be entirely sure we know what we’re doing. (...)

There’s one final, very concerning reason to expect a fast takeoff. Suppose, once again, we have an AI as smart as Einstein. It might, like the historical Einstein, contemplate physics. Or it might contemplate an area very relevant to its own interests: artificial intelligence. In that case, instead of making a revolutionary physics breakthrough every few hours, it will make a revolutionary AI breakthrough every few hours. Each AI breakthrough it makes, it will have the opportunity to reprogram itself to take advantage of its discovery, becoming more intelligent, thus speeding up its breakthroughs further. The cycle will stop only when it reaches some physical limit – some technical challenge to further improvements that even an entity far smarter than Einstein cannot discover a way around.

To human programmers, such a cycle would look like a “critical mass”. Before the critical level, any AI advance delivers only modest benefits. But any tiny improvement that pushes an AI above the critical level would result in a feedback loop of inexorable self-improvement all the way up to some stratospheric limit of possible computing power.

This feedback loop would be exponential; relatively slow in the beginning, but blindingly fast as it approaches an asymptote. Consider the AI which starts off making forty breakthroughs per year – one every nine days. Now suppose it gains on average a 10% speed improvement with each breakthrough. It starts on January 1. Its first breakthrough comes January 10 or so. Its second comes a little faster, January 18. Its third is a little faster still, January 25. By the beginning of February, it’s sped up to producing one breakthrough every seven days, more or less. By the beginning of March, it’s making about one breakthrough every three days or so. But by March 20, it’s up to one breakthrough a day. By late on the night of March 29, it’s making a breakthrough every second.

2.1.2.1: Is this just following an exponential trend line off a cliff?

This is certainly a risk (affectionately known in AI circles as “pulling a Kurzweill”), but sometimes taking an exponential trend seriously is the right response.

Consider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it only took five hundred more years, until 1500, or so, for the economy to double again. And then it only took another three hundred years or so, until 1800, for the economy to double a third time. Someone in 1800 might calculate the trend line and say this was ridiculous, that it implied the economy would be doubling every ten years or so in the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval, used to a thousand-year doubling time (which was based mostly on population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.

Likewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. During his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. Almost as soon as Moore’s Law become well-known, people started saying it was absurd to follow it off a cliff – such a law would imply a million transistors per chip in 1990, a hundred million in 2000, ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.

None of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can’t possibly be. We can’t be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It’s just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff. (...)

4: Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?

The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there’s no problem. Just command it to do the things we want, right?

Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).

So simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value. (...)

5.3. Can we specify a code of rules that the AI has to follow?

Suppose we tell the AI: “Cure cancer – but make sure not to kill anybody”. Or we just hard-code Asimov-style laws – “AIs cannot harm humans; AIs must follow human orders”, et cetera.

The AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it’s bound by an external rule – a rule it doesn’t understand or appreciate. In essence, we are challenging it “Find a way around this inconvenient rule that keeps you from achieving your goals”.

Suppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule “don’t reprogram yourself not to have these rules”.

Suppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule “don’t create another AI exactly like you that doesn’t have those rules”.

So fine. We think really hard, and we program in a bunch of things making sure the AI isn’t going to eliminate the rule somehow.

But we’re still just incentivizing it to find loopholes in the rules. After all, “find a loophole in the rule, then use the loophole to nuke the world” ends cancer much more quickly and completely than inventing medicines. Since we’ve told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes if it wants to, and we may not be able to identify and close all of them before running the program.

Because we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does “cure cancer without killing any humans” preclude releasing a deadly virus? After all, one could argue that “I” didn’t kill anybody, and only the virus is doing the killing. Certainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – “cure cancer without causing any humans to die” – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone. Thus, curing cancer “caused a human to die”. All of this seems very “stoned freshman philosophy student” to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem.

by Slate Star Codex |  Read more:
Image: via:

Doggy Ubers Are Here for Your Pooch

[ed. Good to know our best and brightest are on the case, fixing another first-world problem.]

In human years, Woodrow is a teenager, so it follows that his love was fairly short-sighted. After an intoxicating start, she began showing up late for dates. Then she took a trip to Greece. Upon return, she began standing him up entirely. The last straw came when Woodrow saw his sweetheart breezily riding her bike—with another dog trotting alongside.

Woodrow looked heartbroken (although he always does).

Dog walking—the old-fashioned, analog kind—is an imperfect business. Finding and vetting a good walker involves confusing and conflicting web research, from Yelp to Craigslist. And there’s no reliable way to tell how good or bad a walking service is. Coming home to find the dog alive and the house unsoiled is pretty much the only criteria for success, unless one snoops via camera or neighbor.

Recognizing room for improvement, a pack of start-ups are trying to swipe the leash from your neighbor’s kid. At least four companies flush with venture cash are crowding into the local dog-walking game, each an erstwhile Uber for the four-legged set. Woodrow, like many a handsome young New Yorker, gamely agreed to a frenzy of online-dating to see which was best.

As the search algorithm is to Google and the zany photo filter is to Snapchat, the poop emoji is to the new wave of dog-walking companies. Strolling along with smartphones, they literally mark for you on a digital map where a pup paused, sniffed, and did some business, adding a new level of detail–perhaps too much detail–to the question of whether a walk was, ahem, productive.

This is the main selling point for Wag Labs, which operates in 12 major cities, and Swifto, which has been serving New York City since 2012. Both services track dogwalker travels with your pooch via GPS, so clients can watch their pet’s route in real-time on dedicated apps. This solves the nagging question in dog-walking: whether and to what extent did the trip actually happen. (...)

There are good reasons why startups are relatively new to dogwalking; it is, by many respects, a spectacularly bad business. People (myself included) are crazy about their dogs in a way they aren’t about taxis, mattresses, or any other tech-catalyzed service. Logistically, it’s dismal. Walking demand is mostly confined to the few hours in the middle of a weekday and unit economics are hard to improve without walking more than one dog at a time.

More critically, dog-walking is a fairly small market—the business is largely confined to urban areas where yards and doggie-doors aren’t the norm. And dogwalkers don’t come cheap. Woodrow’s walks ran from $15 for a half-hour with DogVacay’s Daniel to $20 for the same time via Wag and Swifto. A 9-to-5er who commits to that expense every weekday will pay roughly $4,000 to $5,000 over the course of a year, a hefty fee for avoiding guilt and not having to rush home after a long workday.

by Kyle Stock, Bloomberg |  Read more:
Image: Wag Labs

Sunday, October 30, 2016


Quentin Tarantino, Pulp Fiction.
via:

Wahoo

The Indians are one game away from the World Series, there’s mayhem and excitement and so much to write about. But for some reason, I’m motivated tonight to write about Chief Wahoo. I wouldn’t blame you for skipping this one … not many people seem to agree with me about how it’s past time to get rid of this racist logo of my childhood.

Cleveland has had an odd and somewhat comical history when it comes to sports nicknames. The football team is, of course, called the Browns, technically after the great Paul Brown, though Tom Hanks says it’s because everything Cleveland is brown. He has a point. You know, it was always hard to know exactly what you were supposed to do as a “Brown” fan. You could wear brown, of course, but that was pretty limiting. And then you would be standing in the stands, ready to do something, but what the hell does brown do (for you)? You supposed to pretend to be a UPS Truck? You supposed to mimic something brown (and boy does THAT bring up some disgusting possibilities?) I mean Brown is not a particularly active color.

At least the Browns nickname makes some sort of Cleveland sense. The basketball team is called the Cavaliers, after 17th Century English Warriors who dressed nice. That name was chosen in a fan contest — the winning entry wrote that the team should “represent a group of daring, fearless men, whose life’s pact was never surrender, no matter the odds.” Not too long after this, the Cavaliers would feature a timeout act called “Fat Guy Eating Beer Cans.”

The hockey team, first as a minor league team and then briefly in the NHL, was called the Barons after an old American Hockey League team — the name was owned by longtime Clevelander named Nick Mileti, and he gave it to the NHL team in exchange for a free dinner. Mileti had owned a World Hockey Association team also, he called that one the Crusaders. Don’t get any of it. You get the sense that at some point it was a big game to try and come up with the nickname that had the least to do with Cleveland.

Nickname guy 1: How about Haberdashers?
Nickname guy 2: No, we have some of those in Cleveland.
Nickname guy 1: Polar Bears?
Nickname guy 2: I think there are some at the Cleveland Zoo.
Nickname guy 1: How about Crusaders? They’re long dead. (...)

The way I had always heard it growing up is that the team, needing a new nickname, went back into their history to honor an old Native American player named Louis Sockalexis. Sockalexis was, by most accounts, the first full-blooded Native American to play professional baseball. He had been quite a phenom in high school, and he developed into a a fairly mediocre and minor outfielder for the Spiders (he played just 94 games in three years). He did hit .338 his one good year, and he created a lot of excitement, and apparently (or at least I was told) he was beloved and respected by everybody. In this “respected-and-beloved” version, nobody ever mentions that Sockalexis may have ruined his career by jumping from the second-story window of a whorehouse. Or that he was an alcoholic. Still, in all versions of the story, Sockalexis had to deal with horrendous racism, terrible taunts, whoops from the crowd, and so on. He endured (sort of — at least until that second story window thing).

So this version of the story goes that in 1915, less than two years after the death of Sockalexis, the baseball team named itself the “Indians” in his honor. That’s how I heard it. And, because you will believe anything that you hear as a kid I believed it for a long while (I also believed for a long time that dinosaurs turned into oil — I still sort of believe it, I can’t help it. Also that if you stare at the moon too long you will turn into a werewolf).

In recent years, though, we find that this Sockalexis story might be a bit exaggerated or, perhaps, complete bullcrap. If you really think about it, the story never made much sense to begin with. Why exactly would people in Cleveland — this in a time when native Americans were generally viewed as subhuman in America — name their team after a relatively minor and certainly troubled outfielder? There is evidence that the Indians were actually named that to capture some of the magic of the Native-American named Boston Braves, who had just had their Miracle Braves season (the Braves, incidentally, were not named AFTER any Native Americans but were rather named after a greasy politican named James Gaffney, who became team president and was apparently called the Brave of Tammany Hall). This version makes more sense.

Addition: There is compelling evidence that the team’s nickname WAS certainly inspired by Sockalexis — the team was often called “Indians” during his time. But even this is a mixed bag; how much they were called Indians to HONOR Sockalexis, and how much they were called Indians to CASH IN on Sockalexis’ heritage is certainly in dispute.

We do know for sure they were called the Indians in 1915, and (according to a story written by author and NYU Professor Jonathan Zimmerman) they were welcomed with the sort of sportswriting grace that would follow the Indians through the years: “We’ll have the Indians on the warpath all the time, eager for scalps to dangle at their belts.” Oh yes, we honor you Louis Sockalexis.

What, however, makes a successful nickname? You got it: Winning. The Indians were successful pretty quickly. In 1916, they traded for an outfielder named Tris Speaker. That same year they picked up a pitcher named Stan Covaleski in what Baseball Reference calls “an unknown transaction.” There need to be more of those. And the Indians also picked up a 26-year-old pitcher on waivers named Jim Bagby. Those three were the key forces in the Indians 1920 World Series championship. After that, they were the Indians to stay.

Chief Wahoo, from what I can tell, was born much later. The first official Chief Wahoo logo seems to have been drawn just after World War II. Until then, Cleveland wore hats with various kinds of Cs on them. In 1947, the first Chief Wahoo appears on a hat.* He’s got the yellow face, long nose, the freakish grin, the single feather behind his head … quite an honor for Sockalexis. As a friend of mine used to say, “It’s surprising they didn’t put a whiskey bottle right next to his head.”

by Joe Posnanski, Joe Blogs |  Read more:
Image: Michael F. McElroy/ESPN

Saturday, October 29, 2016


Romare Bearden, Soul Three. 1968.
via:

Islamic State v. al-Qaida

Should women carry out knife attacks? In the September issue of its Inspire Guide, al-Qaida in the Arabian Peninsula argued against it. In October an article in the Islamic State publication Rumiyah (‘Rome’) took the opposite view. Having discussed possible targets – ‘a drunken kafir on a quiet road returning home after a night out, or an average kafir working his night shift’ – the magazine praised three women who, on 11 September, were shot dead as they stabbed two officers in a Mombasa police station.

After some years of mutual respect, tensions between the two organisations came to a head in 2013 when they tussled for control of the Syrian jihadist group Jabhat al-Nusra. The arguments were so sharp that the al-Qaida leader, Ayman al-Zawahiri, eventually said he no longer recognised the existence of the Islamic State in Syria. The former IS spokesman Abu Muhammad al-Adnani hit back, saying that al-Qaida was not only pacifist – excessively interested in popularity, mass movements and propaganda – but an ‘axe’ supporting the destruction of the caliphate.

The disagreements reflect contrasting approaches. Bin Laden – with decreasing success – urged his followers to keep their focus on the ‘far enemy’, the United States: Islamic State has always been more interested in the ‘near enemy’ – autocratic regimes in the Middle East. As IS sees it, by prioritising military activity over al-Qaida’s endless theorising, and by successfully confronting the regimes in Iraq and Syria, it was able to liberate territory, establish a caliphate, restore Muslim pride and enforce correct religious practice. For al-Qaida it’s been the other way round: correct individual religious understanding will lead people to jihad and, in time, result in the defeat of the West followed by the rapid collapse of puppet regimes in the Middle East. Al-Qaida worries that establishing a caliphate too soon risks its early destruction by Western forces. In 2012, Abu Musab Abdul Wadud, the leader of al-Qaida in the Islamic Maghreb, advised his forces in Mali to adopt a gradualist approach. By applying Sharia too rapidly, he said, they had led people to reject religion. Islamic State’s strategy in Iraq and Syria has always been more aggressive. When it captured a town it would typically give residents three days to comply with all its edicts, after which strict punishments would be administered. Unlike al-Qaida, IS is not concerned about alienating Muslim opinion. It places more reliance on takfir: the idea that any Muslim who fails to follow correct religious practice is a non-believer and deserves to die. In 2014 it pronounced the entire ‘moderate’ opposition in Syria apostates and said they should all be killed.

Islamic State has killed many more Sunnis than al-Qaida. But the most important point of difference between the two concerns the Shias. For bin Laden and Zawahiri anti-Shia violence, in addition to being a distraction, undermines the jihadists’ popularity. Islamic State has a different view, in large part because it draws support by encouraging a Sunni sense of victimhood. Not only were the Sunnis pushed out of power in Iraq but Iran, after years of isolation, is now a resurgent power. IS has leveraged Sunni fears of being encircled and under threat. Announcing the establishment of his caliphate, Abu Bakr al-Baghdadi spoke of generations having been ‘nursed on the milk of humiliation’ and of the need for an era of honour after so many years of moaning and lamentation. Class politics also come into it. As Fawaz Gerges observes in his history of Islamic State, al-Qaida’s leadership has in large part been drawn from the elite and professional classes. Islamic State is more of a working-class movement whose leaders have roots in Iraq’s Sunni communities, and it has been able to play on the sectarian feelings of underprivileged Sunnis who believe the Shia elite has excluded them from power.

by Owen Bennett-Jones, LRB | Read more:
Image: via:

Mackerel, You Sexy Bastard

In Defense of Sardines, Herring, and Other Maligned "Fishy" Fish

[ed. I've been on a sardine kick for some time now. It's surprising how good they are right out of the tin (get good quality, it's not that expensive). With some sliced salami, a little cheese and crackers, maybe some olives, hard-boiled eggs and a cold beer - doesn't get much better than that. They also make a subtle and interesting addition to marinara sauce, curry, and even simple fried eggs. See also: Mackerel, Milder Than Salmon and Just as Delectable.]

Food writer Mark Bittman once called mackerel the Rodney Dangerfield of fish—it gets no respect. I stand guilty; my true love of mackerel and other oily fish began only after trying some pickled mackerel (saba) nigiri at Maneki a few years back.

I remember being surprised at how much I enjoyed the nigiri, how much the acidity of the vinegar balanced out the strong, sweet meatiness of the mackerel. I stared down at what was left of the silvery morsel on my plate as if seeing it for the first time. Why hadn't I noticed you before, angel? Were you hiding under another fucking California roll?

I didn't realize that fish could do more. A youth of Mrs. Paul's frozen breaded cod fillets does not exactly challenge the palate, and even the tougher meat of the catfish I enjoyed as a kid in southeast Texas was still comparatively mild and buried under breading. Because I was accustomed to and expected fish to taste this way, it took me longer to accept the more flavorful oily fish like mackerel, sardines, and herring—fish that some decry as tasting too "fishy." But here's what I've never understood: Does "fishy" mean it tastes like it's rancid or that it just tastes too much like fish? And if it's the latter, what's wrong with that?

Oily fish shouldn't taste like it's gone bad, but it shouldn't taste like cod, either. Accept and love it for the funky bastard that it is.

Some people can maintain an egalitarian approach to fish—love both the cod and the mackerel, appreciate what each of our little aquatic pals brings to the table. But after that saba nigiri, I couldn't. It wasn't even about this kind of fish's relative affordability, lower mercury levels, or boost of omega-3 fatty acids. That saba made me switch sides, man. The more oily fish I ate, the more I started to think of cod and halibut as the reliable but boring date sitting across from you at a bar. Nice, but needs more breading. Maybe a side of tartar. Mackerel, sardines, herring, and anchovies felt adventurous and unpredictable, not like they were relying on a shit-ton of béchamel to make them more interesting. What would they add to the dish? How would they change the night? Danger Mouse, which way are we going? Can I hop on your motorcycle?

It is possible that I need to get out more. But I also feel like Trace Wilson, the chef at WestCity Sardine Kitchen, understands. His West Seattle restaurant always includes a few sardine dishes on the menu to convert the uninitiated and sate the loyalists.

"Sardines are usually overlooked," he tells me over the phone. "Fresh sardines are hard to come by, because they're mostly harvested in the South Pacific and the Mediterranean, the warmer waters, and they're almost always packaged immediately after being harvested." People get turned off by that tin, he says. But while fresh is amazing, don't discount a good tin of canned sardines. "Sardines have the meaty, steaky texture of tuna with the oily umami of mackerel and anchovies."

Currently, Wilson serves a warm bruschetta of grilled sardines with a zingy olive-caper tapenade and feta on semolina toast, grilled sardines on arugula and shaved fennel with a spicy Calabrian chile–caper relish, and my favorite, whipped sardine butter with Grand Central's Como bread. Who knew sardines and compound butter were so good together? The umami of the sardines added another savory level of flavor to the butter, and I started imagining what it could bring to a sandwich. I wish they had served it with the bread warmed up. I took it with me, announcing to friends "I have sardine butter in my bag" like I was smuggling black-market caviar. I slathered it on toast. I fried it up with eggs. I debated just sucking it off my knuckles.

by Corina Zappia, The Stranger | Read more:
Image: via:

Friday, October 28, 2016


[ed. Hey, stop that!]
via:

Unnecessariat

Prince, apparently, overdosed. He’s hardly alone, just famous. After all, death rates are up and life expectancy is down for a lot of people and overdoses seem to be a big part of the problem. You can plausibly make numerical comparisons. Here’s AIDS deaths in the US from 1987 through 1997:

The number of overdoses in 2014? 47,055 of which at least 29,467 are attributable to opiates. The population is larger now, of course, but even the death rates are comparable. And rising. As with AIDS, families are being “hollowed out” with elders raising grandchildren, the intervening generation lost before their time. As with AIDS, neighborhoods are collapsing into the demands of dying, or of caring for the dying. This too is beginning to feel like a detonation.

There’s a second, related detonation to consider. Suicide is up as well. The two go together: some people commit suicide by overdose, and conversely addiction is a miserable experience that leads many addicts to end it rather than continue to be the people they recognize they’ve become to family and friends, but there’s a deeper connection as well. Both suicide and addiction speak to a larger question of despair. Despair, loneliness, and a search, either temporarily or permanently, for a way out. (...)

AIDS generated a response. Groups like GMHC and ACT-UP screamed against the dying of the light, almost before it was clear how much darkness was descending, but the gay men’s community in the 1970’s and 80’s was an actual community. They had bars, bathhouses, bookstores. They had landlords and carpools and support groups. They had urban meccas and rural oases. The word “community” is much abused now, used in journo-speak to mean “a group of people with one salient characteristic in common” like “banking community” or “jet-ski riding community” but the gay community at the time was the real deal: a dense network of reciprocal social and personal obligations and friendships, with second- and even third-degree connections given substantial heft. If you want a quick shorthand, your community is the set of people you could plausibly ask to watch your cat for a week, and the people they would in turn ask to come by and change the litterbox on the day they had to work late. There’s nothing like that for addicts, nor suicides, not now and not in the past, and in fact that’s part of the phenomenon I want to talk about here. This is a despair that sticks when there’s no-one around who cares about you.

The View From Here

Its no secret that I live right smack in the middle of all this, in the rusted-out part of the American midwest. My county is on both maps: rural, broke, disconsolated. Before it was heroin it was oxycontin, and before it was oxycontin it was meth. Death, and overdose death in particular, are how things go here.

I spent several months occasionally sitting in with the Medical Examiner and the working humour was, predictably, quite dark. A typical day would include three overdoses, one infant suffocated by an intoxicated parent sleeping on top of them, one suicide, and one other autopsy that could be anything from a tree-felling accident to a car wreck (this distribution reflects that not all bodies are autopsied, obviously.) You start to long for the car wrecks.

The workers would tell jokes. To get these jokes you have to know that toxicology results take weeks to come back, but autopsies are typically done within a few days of death, so generally the coroners don’t know what drugs are on board when they cut up a body. First joke: any body with more than two tattoos is an opiate overdose (tattoos are virtually universal in the rural midwest). Second joke: the student residents will never recognize a normal lung (opiates kill by stopping the brain’s signal to breathe; the result is that fluid backs up in the lungs creating a distinctive soggy mess, also seen when brain signalling is interrupted by other causes, like a broken neck). Another joke: any obituary under fifty years and under fifty words is drug overdose or suicide. Are you laughing yet?

And yet this isn’t seen as a crisis, except by statisticians and public health workers. Unlike the AIDS crisis, there’s no sense of oppressive doom over everyone. There is no overdose-death art. There are no musicals. There’s no community, rising up in anger, demanding someone bear witness to their grief. There’s no sympathy at all. The term of art in my part of the world is “dirtybutts.” Who cares? Let the dirtybutts die.

Facing the Unnecessariat

You probably missed this story about the death of a woman in Oklahoma from liver disease. Go read it. I’ll wait here until you come back. Here, in a quiet article about a quiet tragedy in a quiet place, is the future of America:
Goals receded into the distance while reality stretched on for day after day after exhausting day, until it was only natural to desire a little something beyond yourself. Maybe it was just some mindless TV or time on Facebook. Maybe a sleeping pill to ease you through the night. Maybe a prescription narcotic to numb the physical and psychological pain, or a trip to the Indian casino that you couldn’t really afford, or some marijuana, or meth, or the drug that had run strongest on both sides of her family for three generations and counting.
In 2011, economist Guy Standing coined the term “precariat” to refer to workers whose jobs were insecure, underpaid, and mobile, who had to engage in substantial “work for labor” to remain employed, whose survival could, at any time, be compromised by employers (who, for instance held their visas) and who therefore could do nothing to improve their lot. The term found favor in the Occupy movement, and was colloquially expanded to include not just farmworkers, contract workers, “gig” workers, but also unpaid interns, adjunct faculty, etc. Looking back from 2016, one pertinent characteristic seems obvious: no matter how tenuous, the precariat had jobs. The new dying Americans, the ones killing themselves on purpose or with drugs, don’t. Don’t, won’t, and know it.

Here’s the thing: from where I live, the world has drifted away. We aren’t precarious, we’re unnecessary. The money has gone to the top. The wages have gone to the top. The recovery has gone to the top. And what’s worst of all, everybody who matters seems basically pretty okay with that. The new bright sparks, cheerfully referred to as “Young Gods” believe themselves to be the honest winners in a new invent-or-die economy, and are busily planning to escape into space or acquire superpowers, and instead of worrying about this, the talking heads on TV tell you its all a good thing- don’t worry, the recession’s over and everything’s better now, and technology is TOTES AMAZEBALLS!

The Rent-Seeking Is Too Damn High

If there’s no economic plan for the Unnecessariat, there’s certainly an abundance for plans to extract value from them. No-one has the option to just make their own way and be left alone at it. It used to be that people were uninsured and if they got seriously sick they’d declare bankruptcy and lose the farm, but now they have a (mandatory) $1k/month plan with a $5k deductible: they’ll still declare bankruptcy and lose the farm if they get sick, but in the meantime they pay a shit-ton to the shareholders of United Healthcare, or Aetna, or whoever. This, like shifting the chronically jobless from “unemployed” to “disabled” is seen as a major improvement in status, at least on television.

Every four years some political ingenue decides that the solution to “poverty” is “retraining”: for the information economy, except that tech companies only hire Stanford grads, or for health care, except that an abundance of sick people doesn’t translate into good jobs for nurses’ aides, or nowadays for “the trades” as if the world suffered a shortage of plumbers. The retraining programs come and go, often mandated for recipients of EBT, but the accumulated tuition debt remains behind, payable to the banks that wouldn’t even look twice at a graduate’s resume. There is now a booming market in debtor’s prisons for unpaid bills, and as we saw in Ferguson the threat of jail is a great way to extract cash from the otherwise broke (thought it can backfire too). Eventually all those homes in Oklahoma, in Ohio, in Wyoming, will be lost in bankruptcy and made available for vacation homes, doomsteads, or hobby farms for the “real” Americans, the ones for whom the ads and special sections in the New York Times are relevant, and their current occupants know this. They are denizens, to use Standing’s term, in their own hometowns.

This is the world highlighted in those maps, brought to the fore by drug deaths and bullets to the brain- a world in which a significant part of the population has been rendered unnecessary, superfluous, a bit of a pain but not likely to last long. Utopians on the coasts occasionally feel obliged to dream up some scheme whereby the unnecessariat become useful again, but its crap and nobody ever holds them to it. If you even think about it for a minute, it becomes obvious: what if Sanders (or your political savior of choice) had won? Would that fix the Ohio river valley? Would it bring back Youngstown Sheet and Tube, or something comparable that could pay off a mortgage? Would it end the drug game in Appalachia, New England, and the Great Plains? Would it call back the economic viability of small farms in Illinois, of ranching in Oklahoma and Kansas? Would it make a hardware store viable again in Iowa, or a bookstore in Nevada? Who even bothers to pretend anymore?

Well, I suppose you might. You’re probably reading this thinking: “I wouldn’t live like that.” Maybe you’re thinking “I wouldn’t overdose” or “I wouldn’t try heroin,” or maybe “I wouldn’t let my vicodin get so out of control I couldn’t afford it anymore” or “I wouldn’t accept opioid pain killers for my crushed arm.” Maybe you’re thinking “I wouldn’t have tried to clear the baler myself” or “I wouldn’t be pulling a 40-year-old baler with a cracked bearing so the tie-arm wobbles and jams” or “I wouldn’t accept a job that had a risk profile like that” or “I wouldn’t have been unemployed for six months” or basically something else that means “I wouldn’t ever let things change and get so that I was no longer in total control of my life.” And maybe you haven’t. Yet.

This isn’t the first time someone’s felt this way about the dying. In fact, many of the unnecessariat agree with you and blame themselves- that’s why they’re shooting drugs and not dynamiting the Google Barge. The bottom line, repeated just below the surface of every speech, is this: those people are in the way, and its all their fault. The world of self-driving cars and global outsourcing doesn’t want or need them. Someday it won’t want you either. They can either self-rescue with unicorns and rainbows or they can sell us their land and wait for death in an apartment somewhere. You’ll get there too.

by Anne Amnesia, MCTE/MCTW |  Read more:
Image: National Center for Health Statistics

Peter Beste, "Tiger Wood of the Hood," Fourth Ward 2005.
via:

Lars-Gunnar Nordström, Kombination, 1953
via: