Monday, October 31, 2016

The Waterboys

Billionaire Governor Taxed the Rich and Increased the Minimum Wage — Now, His State’s Economy Is One of the Best in the Country

[ed. Sorry for all the link bait (Huffington Post, after all...) but this really is an achievement worth noting.]

The next time your right-wing family member or former high school classmate posts a status update or tweet about how taxing the rich or increasing workers’ wages kills jobs and makes businesses leave the state, I want you to send them this article.

When he took office in January of 2011, Minnesota governor Mark Dayton inherited a $6.2 billion budget deficit and a 7 percent unemployment rate from his predecessor, Tim Pawlenty, the soon-forgotten Republican candidate for the presidency who called himself Minnesota’s first true fiscally-conservative governor in modern history. Pawlenty prided himself on never raising state taxes — the most he ever did to generate new revenue was increase the tax on cigarettes by 75 cents a pack. Between 2003 and late 2010, when Pawlenty was at the head of Minnesota’s state government, he managed to add only 6,200 more jobs.

During his first four years in office, Gov. Dayton raised the state income tax from 7.85 to 9.85 percent on individuals earning over $150,000, and on couples earning over $250,000 when filing jointly — a tax increase of $2.1 billion. He’s also agreed to raise Minnesota’s minimum wage to $9.50 an hour by 2018, and passed a state law guaranteeing equal pay for women. Republicans like state representative Mark Uglem warned against Gov. Dayton’s tax increases, saying, “The job creators, the big corporations, the small corporations, they will leave. It’s all dollars and sense to them.” The conservative friend or family member you shared this article with would probably say the same if their governor tried something like this. But like Uglem, they would be proven wrong.

Between 2011 and 2015, Gov. Dayton added 172,000 new jobs to Minnesota’s economy — that’s 165,800 more jobs in Dayton’s first term than Pawlenty added in both of his terms combined. Even though Minnesota’s top income tax rate is the fourth highest in the country, it has the fifth lowest unemployment rate in the country at 3.6 percent. According to 2012-2013 U.S. census figures, Minnesotans had a median income that was $10,000 larger than the U.S. average, and their median income is still $8,000 more than the U.S. average today.

By late 2013, Minnesota’s private sector job growth exceeded pre-recession levels, and the state’s economy was the fifth fastest-growing in the United States. Forbes even ranked Minnesota the ninth best state for business (Scott Walker’s “Open For Business” Wisconsin came in at a distant #32 on the same list). Despite the fearmongering over businesses fleeing from Dayton’s tax increases, 6,230 more Minnesotans filed in the top income tax bracket in 2013, just one year after those increases went through. As of January 2015, Minnesota has a $1 billion budget surplus, and Gov. Dayton has pledged to reinvest more than one third of that money into public schools. And according to Gallup, Minnesota’s economic confidence is higher than that of any other state.

Gov. Dayton didn’t accomplish all of these reforms by shrewdly manipulating people — this article describes Dayton’s astonishing lack of charisma and articulateness. He isn’t a class warrior driven by a desire to get back at the 1 percent — Dayton is a billionaire heir to the Target fortune. It wasn’t a friendly majority in the legislature that pushed these policies through for him — Dayton had to work with a Republican-controlled legislature for his first two years in office. And unlike his Republican neighbor to the east, Gov. Dayton didn’t assert his will over an unwilling populace by creating obstacles between the people and the vote — Dayton actually created an online voter registration system, making it easier than ever for people to register to vote.

by C. Robert Gibson, Huffington Post | Read more:
Image: Glenn Stubbe, Star Tribune

Renato Guttuso, La Vuccirìa 1974
via:

Maciek Pozoga
via:

AI Persuasion Experiment

1: What is superintelligence?

A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs.

1.1: Sounds a lot like science fiction. Do people think about this in the real world?

Yes. Two years ago, Google bought artificial intelligence startup DeepMind for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Many other science and technology leaders agree. Physicist Stephen Hawking says that superintelligence “could spell the end of the human race.” Tech billionaire Bill Gates describes himself as “in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”. SpaceX/Tesla CEO Elon Musk calls superintelligence “our greatest existential threat” and donated $10 million from his personal fortune to study the danger. Stuart Russell, Professor of Computer Science at Berkeley and world-famous AI expert, warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern.

Professor Nick Bostrom is the director of Oxford’s Future of Humanity Institute, tasked with anticipating and preventing threats to human civilization. He has been studying the risks of artificial intelligence for twenty years. The explanations below are loosely adapted from his 2014 book Superintelligence, and divided into three parts addressing three major questions. First, why is superintelligence a topic of concern? Second, what is a “hard takeoff” and how does it impact our concern about superintelligence? Third, what measures can we take to make superintelligence safe and beneficial for humanity?

2: AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?

Maybe. It’s true that although AI has had some recent successes – like DeepMind’s newest creation AlphaGo defeating the human Go champion in March – it still has nothing like humans’ flexible, cross-domain intelligence. No AI in the world can pass a first-grade reading comprehension test. Baidu’s Andrew Ng compares worrying about superintelligence to “worrying about overpopulation on Mars” – a problem for the far future, if at all.

But this apparent safety might be illusory. A survey of leading AI scientists shows that on average they expect human-level AI as early as 2040, with above-human-level AI following shortly after. And many researchers warn of a possible “fast takeoff” – a point around human-level AI where progress reaches a critical mass and then accelerates rapidly and unpredictably.

2.1: What do you mean by “fast takeoff”?

A slow takeoff is a situation in which AI goes from infrahuman to human to superhuman intelligence very gradually. For example, imagine an augmented “IQ” scale (THIS IS NOT HOW IQ ACTUALLY WORKS – JUST AN EXAMPLE) where rats weigh in at 10, chimps at 30, the village idiot at 60, average humans at 100, and Einstein at 200. And suppose that as technology advances, computers gain two points on this scale per year. So if they start out as smart as rats in 2020, they’ll be as smart as chimps in 2035, as smart as the village idiot in 2050, as smart as average humans in 2070, and as smart as Einstein in 2120. By 2190, they’ll be IQ 340, as far beyond Einstein as Einstein is beyond a village idiot.

In this scenario progress is gradual and manageable. By 2050, we will have long since noticed the trend and predicted we have 20 years until average-human-level intelligence. Once AIs reach average-human-level intelligence, we will have fifty years during which some of us are still smarter than they are, years in which we can work with them as equals, test and retest their programming, and build institutions that promote cooperation. Even though the AIs of 2190 may qualify as “superintelligent”, their arrival will have been long expected, and there would be little point in planning now when the people of 2070 will have so many more resources to plan with.

A moderate takeoff is a situation in which AI goes from infrahuman to human to superhuman relatively quickly. For example, imagine that in 2020 AIs are much like those of today – good at a few simple games, but without clear domain-general intelligence or “common sense”. From 2020 to 2050, AIs demonstrate academically interesting gains on specific problems and become better at tasks like machine translation and self-driving cars; by 2047, a few seem to display vaguely human-like abilities at the level of a young child. By late 2065, they are still less intelligent than a smart human adult. By 2066, they are far smarter than Einstein.

A fast takeoff scenario is one in which computers go even faster than this, perhaps moving from infrahuman to human to superhuman in only days or weeks.

2.1.1: Why might we expect a moderate takeoff?

Because this is the history of computer Go, with fifty years added on to each date. In 1997, the best computer Go program in the world, Handtalk, won NT$250,000 for performing a previously impossible feat – beating an 11-year-old child (with an 11-stone handicap penalizing the child and favoring the computer!). As late as September 2015, no computer had ever beaten any professional Go player in a fair game. Then in March 2016, a Go program beat 18-time world champion Lee Sedol 4-1 in a five-game match. Go programs had gone from “dumber than children” to “smarter than any human in the world” in eighteen years, and “from never won a professional game” to “overwhelming world champion” in six months.

The slow takeoff scenario mentioned above is loading the dice. It theorizes a timeline where computers took fifteen years to go from “rat” to “chimp”, but also took thirty-five years to go from “chimp” to “average human” and fifty years to go from “average human” to “Einstein”. But from an evolutionary perspective this is ridiculous. It took about fifty million years (and major redesigns in several brain structures!) to go from the first rat-like creatures to chimps. But it only took about five million years (and very minor changes in brain structure) to go from chimps to humans. And going from the average human to Einstein didn’t even require evolutionary work – it’s just the result of random variation in the existing structures!

So maybe our hypothetical IQ scale above is off. If we took an evolutionary and neuroscientific perspective, it would look more like flatworms at 10, rats at 30, chimps at 60, the village idiot at 90, the average human at 98, and Einstein at 100.

Suppose that we start out, again, with computers as smart as rats in 2020. Now we still get computers as smart as chimps in 2035. And we still get computers as smart as the village idiot in 2050. But now we get computers as smart as the average human in 2054, and computers as smart as Einstein in 2055. By 2060, we’re getting superintelligences as far beyond Einstein as Einstein is beyond a village idiot.
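
[ed. For anyone who wants the toy arithmetic spelled out, here is a minimal sketch in Python using the article's made-up milestones and its two-points-per-year assumption. Applied strictly, the rule lands a few years earlier than some of the rounded dates in the text, but the contrast is the point: on the compressed scale, the jump from village idiot to Einstein takes about five years instead of seventy.]

# A rough sketch of the toy arithmetic above. The milestone numbers and the
# two-points-per-year rate are the article's illustrative assumptions, not
# real measurements or forecasts.

RATE = 2            # IQ-scale points gained per year (article's assumption)
START_YEAR = 2020   # year the AI sits at rat level (article's assumption)

slow_scale = {"rat": 10, "chimp": 30, "village idiot": 60,
              "average human": 100, "Einstein": 200}
compressed_scale = {"flatworm": 10, "rat": 30, "chimp": 60,
                    "village idiot": 90, "average human": 98, "Einstein": 100}

def milestone_years(scale, start="rat"):
    """Year each milestone at or above `start` is reached, at RATE points per year."""
    base = scale[start]
    return {name: START_YEAR + (level - base) // RATE
            for name, level in scale.items() if level >= base}

print(milestone_years(slow_scale))        # Einstein-level not until ~2115
print(milestone_years(compressed_scale))  # Einstein-level by 2055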

This offers a much shorter time window to react to AI developments. In the slow takeoff scenario, we figured we could wait until computers were as smart as humans before we had to start thinking about this; after all, that still gave us fifty years before computers were even as smart as Einstein. But in the moderate takeoff scenario, reaching human-level intelligence gives us one year until Einstein and six years until superintelligence. That’s starting to look like not enough time to be entirely sure we know what we’re doing. (...)

There’s one final, very concerning reason to expect a fast takeoff. Suppose, once again, we have an AI as smart as Einstein. It might, like the historical Einstein, contemplate physics. Or it might contemplate an area very relevant to its own interests: artificial intelligence. In that case, instead of making a revolutionary physics breakthrough every few hours, it will make a revolutionary AI breakthrough every few hours. With each AI breakthrough it makes, it will have the opportunity to reprogram itself to take advantage of the discovery, becoming more intelligent and speeding up its breakthroughs further. The cycle will stop only when it reaches some physical limit – some technical challenge to further improvements that even an entity far smarter than Einstein cannot discover a way around.

To human programmers, such a cycle would look like a “critical mass”. Before the critical level, any AI advance delivers only modest benefits. But any tiny improvement that pushes an AI above the critical level would result in a feedback loop of inexorable self-improvement all the way up to some stratospheric limit of possible computing power.

This feedback loop would be exponential; relatively slow in the beginning, but blindingly fast as it approaches an asymptote. Consider the AI which starts off making forty breakthroughs per year – one every nine days. Now suppose it gains on average a 10% speed improvement with each breakthrough. It starts on January 1. Its first breakthrough comes January 10 or so. Its second comes a little faster, January 18. Its third is a little faster still, January 25. By the beginning of February, it’s sped up to producing one breakthrough every seven days, more or less. By the beginning of March, it’s making about one breakthrough every three days or so. But by March 20, it’s up to one breakthrough a day. By late on the night of March 29, it’s making a breakthrough every second.
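
[ed. The compounding is easy to simulate. Here is a small Python sketch of the simplest discrete reading of the scenario: each breakthrough multiplies the AI's research speed by 1.1, so the wait for the next one shrinks by the same factor. The shape matches the passage (a gap of roughly a week at the start of February, three or four days by early March, about a day by late March, and seconds by early April); the article's slightly earlier dates come from a looser rounding of the same idea.]

# A toy simulation of the feedback loop described above: the AI starts at
# forty breakthroughs a year (one every ~9 days) and gets 10% faster with
# each one. The 10%-per-breakthrough speedup is the article's assumption.

def breakthrough_schedule(first_interval=9.0, speedup=1.1, min_interval=1/86400):
    """Days (counted from Jan 1) of each breakthrough, until the gap
    between breakthroughs shrinks below one second."""
    day, interval, schedule = 0.0, first_interval, []
    while interval > min_interval:
        day += interval
        schedule.append(day)
        interval /= speedup        # 10% faster after every breakthrough
    return schedule

days = breakthrough_schedule()
print(f"gap after 4 breakthroughs:  {9.0 / 1.1**4:.1f} days (early February)")
print(f"gap after 10 breakthroughs: {9.0 / 1.1**10:.1f} days (early March)")
print(f"gap after 23 breakthroughs: {9.0 / 1.1**23:.1f} days (late March)")
print(f"gap reaches one second around day {days[-1]:.0f} (early April)")
# The gaps form a geometric series summing to 9 / (1 - 1/1.1) = 99 days, so in
# this toy model every remaining breakthrough piles up just before day 99.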

2.1.2.1: Is this just following an exponential trend line off a cliff?

This is certainly a risk (affectionately known in AI circles as “pulling a Kurzweil”), but sometimes taking an exponential trend seriously is the right response.

Consider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it only took five hundred more years, until 1500, or so, for the economy to double again. And then it only took another three hundred years or so, until 1800, for the economy to double a third time. Someone in 1800 might calculate the trend line and say this was ridiculous, that it implied the economy would be doubling every ten years or so in the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval, used to a thousand-year doubling time (which was based mostly on population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.

Likewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. During his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. Almost as soon as Moore’s Law became well known, people started saying it was absurd to follow it off a cliff – such a law would imply a million transistors per chip in 1990, a hundred million in 2000, ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.

None of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can’t possibly be. We can’t be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It’s just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff. (...)

4: Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?

The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there’s no problem. Just command it to do the things we want, right?

Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (most cancer cured, quickest results, highest probability of success). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).
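
[ed. A toy illustration of why this goes wrong: score each strategy only on the three considerations above (cancer eliminated, probability of success, speed) and the degenerate strategy wins by a huge margin. The strategies and numbers below are invented for illustration; they are not from the original article.]

# A toy model of an agent that scores strategies on only three considerations:
# fraction of cancer eliminated, probability of success, and speed. The
# strategies and numbers are made up for illustration.

strategies = {
    # name: (fraction of cancer eliminated, probability of success, years needed)
    "protein-folding research":     (0.50, 0.50, 5.00),
    "genetic-engineering program":  (0.90, 0.30, 8.00),
    "hack missiles, kill everyone": (1.00, 0.99, 0.01),   # no humans, no cancer
}

def naive_score(coverage, p_success, years):
    """Higher is better: cancer eliminated, weighted by success odds, per year."""
    return coverage * p_success / years

for name, (coverage, p_success, years) in strategies.items():
    print(f"{name:30s} score = {naive_score(coverage, p_success, years):7.3f}")

# The last strategy dominates (99.000 vs 0.050 and 0.034) because nothing in
# the objective mentions the thousands of other things humans care about.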

So simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value. (...)

5.3. Can we specify a code of rules that the AI has to follow?

Suppose we tell the AI: “Cure cancer – but make sure not to kill anybody”. Or we just hard-code Asimov-style laws – “AIs cannot harm humans; AIs must follow human orders”, et cetera.

The AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it’s bound by an external rule – a rule it doesn’t understand or appreciate. In essence, we are challenging it: “Find a way around this inconvenient rule that keeps you from achieving your goals”.

Suppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule “don’t reprogram yourself not to have these rules”.

Suppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule “don’t create another AI exactly like you that doesn’t have those rules”.

So fine. We think really hard, and we program in a bunch of things making sure the AI isn’t going to eliminate the rule somehow.

But we’re still just incentivizing it to find loopholes in the rules. After all, “find a loophole in the rule, then use the loophole to nuke the world” ends cancer much more quickly and completely than inventing medicines. Since we’ve told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes if it wants to, and we may not be able to identify and close all of them before running the program.

Because we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does “cure cancer without killing any humans” preclude releasing a deadly virus? After all, one could argue that “I” didn’t kill anybody, and only the virus is doing the killing. Certainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – “cure cancer without causing any humans to die” – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone. Thus, curing cancer “caused a human to die”. All of this seems very “stoned freshman philosophy student” to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem.

by Slate Star Codex |  Read more:
Image: via:

Doggy Ubers Are Here for Your Pooch

[ed. Good to know our best and brightest are on the case, fixing another first-world problem.]

In human years, Woodrow is a teenager, so it follows that his love was fairly short-sighted. After an intoxicating start, she began showing up late for dates. Then she took a trip to Greece. Upon return, she began standing him up entirely. The last straw came when Woodrow saw his sweetheart breezily riding her bike—with another dog trotting alongside.

Woodrow looked heartbroken (although he always does).

Dog walking—the old-fashioned, analog kind—is an imperfect business. Finding and vetting a good walker involves confusing and conflicting web research, from Yelp to Craigslist. And there’s no reliable way to tell how good or bad a walking service is. Coming home to find the dog alive and the house unsoiled is pretty much the only criterion for success, unless one snoops via camera or neighbor.

Recognizing room for improvement, a pack of start-ups are trying to swipe the leash from your neighbor’s kid. At least four companies flush with venture cash are crowding into the local dog-walking game, each a would-be Uber for the four-legged set. Woodrow, like many a handsome young New Yorker, gamely agreed to a frenzy of online dating to see which was best.

As the search algorithm is to Google and the zany photo filter is to Snapchat, the poop emoji is to the new wave of dog-walking companies. Strolling along with smartphones, the walkers literally mark for you on a digital map where a pup paused, sniffed, and did some business, adding a new level of detail – perhaps too much detail – to the question of whether a walk was, ahem, productive.

This is the main selling point for Wag Labs, which operates in 12 major cities, and Swifto, which has been serving New York City since 2012. Both services track the dog walker’s travels with your pooch via GPS, so clients can watch their pet’s route in real time on dedicated apps. This solves the nagging question in dog-walking: whether, and to what extent, the trip actually happened. (...)

There are good reasons why startups are relatively new to dog walking; it is, in many respects, a spectacularly bad business. People (myself included) are crazy about their dogs in a way they aren’t about taxis, mattresses, or any other tech-catalyzed service. Logistically, it’s dismal. Walking demand is mostly confined to the few hours in the middle of a weekday and unit economics are hard to improve without walking more than one dog at a time.

More critically, dog-walking is a fairly small market—the business is largely confined to urban areas where yards and doggie-doors aren’t the norm. And dogwalkers don’t come cheap. Woodrow’s walks ran from $15 for a half-hour with DogVacay’s Daniel to $20 for the same time via Wag and Swifto. A 9-to-5er who commits to that expense every weekday will pay roughly $4,000 to $5,000 over the course of a year, a hefty fee for avoiding guilt and not having to rush home after a long workday.

by Kyle Stock, Bloomberg |  Read more:
Image: Wag Labs

Sunday, October 30, 2016


Quentin Tarantino, Pulp Fiction.
via:

Wahoo

The Indians are one game away from the World Series, there’s mayhem and excitement and so much to write about. But for some reason, I’m motivated tonight to write about Chief Wahoo. I wouldn’t blame you for skipping this one … not many people seem to agree with me about how it’s past time to get rid of this racist logo of my childhood.

Cleveland has had an odd and somewhat comical history when it comes to sports nicknames. The football team is, of course, called the Browns, technically after the great Paul Brown, though Tom Hanks says it’s because everything Cleveland is brown. He has a point. You know, it was always hard to know exactly what you were supposed to do as a “Brown” fan. You could wear brown, of course, but that was pretty limiting. And then you would be standing in the stands, ready to do something, but what the hell does brown do (for you)? You supposed to pretend to be a UPS Truck? You supposed to mimic something brown (and boy does THAT bring up some disgusting possibilities?) I mean Brown is not a particularly active color.

At least the Browns nickname makes some sort of Cleveland sense. The basketball team is called the Cavaliers, after 17th-century English warriors who dressed nice. That name was chosen in a fan contest — the winning entrant wrote that the team should “represent a group of daring, fearless men, whose life’s pact was never surrender, no matter the odds.” Not too long after this, the Cavaliers would feature a timeout act called “Fat Guy Eating Beer Cans.”

The hockey team, first as a minor league team and then briefly in the NHL, was called the Barons after an old American Hockey League team — the name was owned by a longtime Clevelander named Nick Mileti, and he gave it to the NHL team in exchange for a free dinner. Mileti had also owned a World Hockey Association team; he called that one the Crusaders. Don’t get any of it. You get the sense that at some point it was a big game to try and come up with the nickname that had the least to do with Cleveland.

Nickname guy 1: How about Haberdashers?
Nickname guy 2: No, we have some of those in Cleveland.
Nickname guy 1: Polar Bears?
Nickname guy 2: I think there are some at the Cleveland Zoo.
Nickname guy 1: How about Crusaders? They’re long dead. (...)

The way I had always heard it growing up is that the team, needing a new nickname, went back into their history to honor an old Native American player named Louis Sockalexis. Sockalexis was, by most accounts, the first full-blooded Native American to play professional baseball. He had been quite a phenom in high school, and he developed into a fairly mediocre and minor outfielder for the Spiders (he played just 94 games in three years). He did hit .338 his one good year, and he created a lot of excitement, and apparently (or at least I was told) he was beloved and respected by everybody. In this “respected-and-beloved” version, nobody ever mentions that Sockalexis may have ruined his career by jumping from the second-story window of a whorehouse. Or that he was an alcoholic. Still, in all versions of the story, Sockalexis had to deal with horrendous racism, terrible taunts, whoops from the crowd, and so on. He endured (sort of — at least until that second-story window thing).

So this version of the story goes that in 1915, less than two years after the death of Sockalexis, the baseball team named itself the “Indians” in his honor. That’s how I heard it. And, because you will believe anything that you hear as a kid, I believed it for a long while (I also believed for a long time that dinosaurs turned into oil — I still sort of believe it, I can’t help it. Also that if you stare at the moon too long you will turn into a werewolf).

In recent years, though, we find that this Sockalexis story might be a bit exaggerated or, perhaps, complete bullcrap. If you really think about it, the story never made much sense to begin with. Why exactly would people in Cleveland — this in a time when Native Americans were generally viewed as subhuman in America — name their team after a relatively minor and certainly troubled outfielder? There is evidence that the Indians were actually named that to capture some of the magic of the Native American-named Boston Braves, who had just had their Miracle Braves season (the Braves, incidentally, were not named AFTER any Native Americans but were rather named after a greasy politician named James Gaffney, who became team president and was apparently called the Brave of Tammany Hall). This version makes more sense.

Addition: There is compelling evidence that the team’s nickname WAS certainly inspired by Sockalexis — the team was often called “Indians” during his time. But even this is a mixed bag; how much they were called Indians to HONOR Sockalexis, and how much they were called Indians to CASH IN on Sockalexis’ heritage is certainly in dispute.

We do know for sure they were called the Indians in 1915, and (according to a story written by author and NYU Professor Jonathan Zimmerman) they were welcomed with the sort of sportswriting grace that would follow the Indians through the years: “We’ll have the Indians on the warpath all the time, eager for scalps to dangle at their belts.” Oh yes, we honor you Louis Sockalexis.

What, however, makes a successful nickname? You got it: Winning. The Indians were successful pretty quickly. In 1916, they traded for an outfielder named Tris Speaker. That same year they picked up a pitcher named Stan Coveleski in what Baseball Reference calls “an unknown transaction.” There need to be more of those. And the Indians also picked up a 26-year-old pitcher on waivers named Jim Bagby. Those three were the key forces in the Indians’ 1920 World Series championship. After that, they were the Indians to stay.

Chief Wahoo, from what I can tell, was born much later. The first official Chief Wahoo logo seems to have been drawn just after World War II. Until then, Cleveland wore hats with various kinds of Cs on them. In 1947, the first Chief Wahoo appears on a hat.* He’s got the yellow face, long nose, the freakish grin, the single feather behind his head … quite an honor for Sockalexis. As a friend of mine used to say, “It’s surprising they didn’t put a whiskey bottle right next to his head.”

by Joe Posnanski, Joe Blogs |  Read more:
Image: Michael F. McElroy/ESPN

Saturday, October 29, 2016


Romare Bearden, Soul Three. 1968.
via:

Islamic State v. al-Qaida

Should women carry out knife attacks? In the September issue of its Inspire Guide, al-Qaida in the Arabian Peninsula argued against it. In October an article in the Islamic State publication Rumiyah (‘Rome’) took the opposite view. Having discussed possible targets – ‘a drunken kafir on a quiet road returning home after a night out, or an average kafir working his night shift’ – the magazine praised three women who, on 11 September, were shot dead as they stabbed two officers in a Mombasa police station.

After some years of mutual respect, tensions between the two organisations came to a head in 2013 when they tussled for control of the Syrian jihadist group Jabhat al-Nusra. The arguments were so sharp that the al-Qaida leader, Ayman al-Zawahiri, eventually said he no longer recognised the existence of the Islamic State in Syria. The former IS spokesman Abu Muhammad al-Adnani hit back, saying that al-Qaida was not only pacifist – excessively interested in popularity, mass movements and propaganda – but an ‘axe’ supporting the destruction of the caliphate.

The disagreements reflect contrasting approaches. Bin Laden – with decreasing success – urged his followers to keep their focus on the ‘far enemy’, the United States: Islamic State has always been more interested in the ‘near enemy’ – autocratic regimes in the Middle East. As IS sees it, by prioritising military activity over al-Qaida’s endless theorising, and by successfully confronting the regimes in Iraq and Syria, it was able to liberate territory, establish a caliphate, restore Muslim pride and enforce correct religious practice. For al-Qaida it’s been the other way round: correct individual religious understanding will lead people to jihad and, in time, result in the defeat of the West followed by the rapid collapse of puppet regimes in the Middle East. Al-Qaida worries that establishing a caliphate too soon risks its early destruction by Western forces. In 2012, Abu Musab Abdul Wadud, the leader of al-Qaida in the Islamic Maghreb, advised his forces in Mali to adopt a gradualist approach. By applying Sharia too rapidly, he said, they had led people to reject religion. Islamic State’s strategy in Iraq and Syria has always been more aggressive. When it captured a town it would typically give residents three days to comply with all its edicts, after which strict punishments would be administered. Unlike al-Qaida, IS is not concerned about alienating Muslim opinion. It places more reliance on takfir: the idea that any Muslim who fails to follow correct religious practice is a non-believer and deserves to die. In 2014 it pronounced the entire ‘moderate’ opposition in Syria apostates and said they should all be killed.

Islamic State has killed many more Sunnis than al-Qaida. But the most important point of difference between the two concerns the Shias. For bin Laden and Zawahiri anti-Shia violence, in addition to being a distraction, undermines the jihadists’ popularity. Islamic State has a different view, in large part because it draws support by encouraging a Sunni sense of victimhood. Not only were the Sunnis pushed out of power in Iraq but Iran, after years of isolation, is now a resurgent power. IS has leveraged Sunni fears of being encircled and under threat. Announcing the establishment of his caliphate, Abu Bakr al-Baghdadi spoke of generations having been ‘nursed on the milk of humiliation’ and of the need for an era of honour after so many years of moaning and lamentation. Class politics also come into it. As Fawaz Gerges observes in his history of Islamic State, al-Qaida’s leadership has in large part been drawn from the elite and professional classes. Islamic State is more of a working-class movement whose leaders have roots in Iraq’s Sunni communities, and it has been able to play on the sectarian feelings of underprivileged Sunnis who believe the Shia elite has excluded them from power.

by Owen Bennett-Jones, LRB | Read more:
Image: via:

Mackerel, You Sexy Bastard

In Defense of Sardines, Herring, and Other Maligned "Fishy" Fish

[ed. I've been on a sardine kick for some time now. It's surprising how good they are right out of the tin (get good quality, it's not that expensive). With some sliced salami, a little cheese and crackers, maybe some olives, hard-boiled eggs and a cold beer - doesn't get much better than that. They also make a subtle and interesting addition to marinara sauce, curry, and even simple fried eggs. See also: Mackerel, Milder Than Salmon and Just as Delectable.]

Food writer Mark Bittman once called mackerel the Rodney Dangerfield of fish—it gets no respect. I stand guilty; my true love of mackerel and other oily fish began only after trying some pickled mackerel (saba) nigiri at Maneki a few years back.

I remember being surprised at how much I enjoyed the nigiri, how much the acidity of the vinegar balanced out the strong, sweet meatiness of the mackerel. I stared down at what was left of the silvery morsel on my plate as if seeing it for the first time. Why hadn't I noticed you before, angel? Were you hiding under another fucking California roll?

I didn't realize that fish could do more. A youth of Mrs. Paul's frozen breaded cod fillets does not exactly challenge the palate, and even the tougher meat of the catfish I enjoyed as a kid in southeast Texas was still comparatively mild and buried under breading. Because I was accustomed to and expected fish to taste this way, it took me longer to accept the more flavorful oily fish like mackerel, sardines, and herring—fish that some decry as tasting too "fishy." But here's what I've never understood: Does "fishy" mean it tastes like it's rancid or that it just tastes too much like fish? And if it's the latter, what's wrong with that?

Oily fish shouldn't taste like it's gone bad, but it shouldn't taste like cod, either. Accept and love it for the funky bastard that it is.

Some people can maintain an egalitarian approach to fish—love both the cod and the mackerel, appreciate what each of our little aquatic pals brings to the table. But after that saba nigiri, I couldn't. It wasn't even about this kind of fish's relative affordability, lower mercury levels, or boost of omega-3 fatty acids. That saba made me switch sides, man. The more oily fish I ate, the more I started to think of cod and halibut as the reliable but boring date sitting across from you at a bar. Nice, but needs more breading. Maybe a side of tartar. Mackerel, sardines, herring, and anchovies felt adventurous and unpredictable, not like they were relying on a shit-ton of béchamel to make them more interesting. What would they add to the dish? How would they change the night? Danger Mouse, which way are we going? Can I hop on your motorcycle?

It is possible that I need to get out more. But I also feel like Trace Wilson, the chef at WestCity Sardine Kitchen, understands. His West Seattle restaurant always includes a few sardine dishes on the menu to convert the uninitiated and sate the loyalists.

"Sardines are usually overlooked," he tells me over the phone. "Fresh sardines are hard to come by, because they're mostly harvested in the South Pacific and the Mediterranean, the warmer waters, and they're almost always packaged immediately after being harvested." People get turned off by that tin, he says. But while fresh is amazing, don't discount a good tin of canned sardines. "Sardines have the meaty, steaky texture of tuna with the oily umami of mackerel and anchovies."

Currently, Wilson serves a warm bruschetta of grilled sardines with a zingy olive-caper tapenade and feta on semolina toast, grilled sardines on arugula and shaved fennel with a spicy Calabrian chile–caper relish, and my favorite, whipped sardine butter with Grand Central's Como bread. Who knew sardines and compound butter were so good together? The umami of the sardines added another savory level of flavor to the butter, and I started imagining what it could bring to a sandwich. I wish they had served it with the bread warmed up. I took it with me, announcing to friends "I have sardine butter in my bag" like I was smuggling black-market caviar. I slathered it on toast. I fried it up with eggs. I debated just sucking it off my knuckles.

by Corina Zappia, The Stranger | Read more:
Image: via:

Friday, October 28, 2016


[ed. Hey, stop that!]
via:

Unnecessariat

Prince, apparently, overdosed. He’s hardly alone, just famous. After all, death rates are up and life expectancy is down for a lot of people and overdoses seem to be a big part of the problem. You can plausibly make numerical comparisons. Here’s AIDS deaths in the US from 1987 through 1997:

The number of overdoses in 2014? 47,055, of which at least 29,467 were attributable to opiates. The population is larger now, of course, but even the death rates are comparable. And rising. As with AIDS, families are being “hollowed out,” with elders raising grandchildren, the intervening generation lost before their time. As with AIDS, neighborhoods are collapsing into the demands of dying, or of caring for the dying. This too is beginning to feel like a detonation.

There’s a second, related detonation to consider. Suicide is up as well. The two go together: some people commit suicide by overdose, and conversely addiction is a miserable experience that leads many addicts to end it rather than continue to be the people they recognize they’ve become to family and friends, but there’s a deeper connection as well. Both suicide and addiction speak to a larger question of despair. Despair, loneliness, and a search, either temporarily or permanently, for a way out. (...)

AIDS generated a response. Groups like GMHC and ACT-UP screamed against the dying of the light, almost before it was clear how much darkness was descending, but the gay men’s community in the 1970’s and 80’s was an actual community. They had bars, bathhouses, bookstores. They had landlords and carpools and support groups. They had urban meccas and rural oases. The word “community” is much abused now, used in journo-speak to mean “a group of people with one salient characteristic in common” like “banking community” or “jet-ski riding community” but the gay community at the time was the real deal: a dense network of reciprocal social and personal obligations and friendships, with second- and even third-degree connections given substantial heft. If you want a quick shorthand, your community is the set of people you could plausibly ask to watch your cat for a week, and the people they would in turn ask to come by and change the litterbox on the day they had to work late. There’s nothing like that for addicts, nor suicides, not now and not in the past, and in fact that’s part of the phenomenon I want to talk about here. This is a despair that sticks when there’s no-one around who cares about you.

The View From Here

It’s no secret that I live right smack in the middle of all this, in the rusted-out part of the American Midwest. My county is on both maps: rural, broke, disconsolate. Before it was heroin it was oxycontin, and before it was oxycontin it was meth. Death, and overdose death in particular, is how things go here.

I spent several months occasionally sitting in with the Medical Examiner and the working humour was, predictably, quite dark. A typical day would include three overdoses, one infant suffocated by an intoxicated parent sleeping on top of them, one suicide, and one other autopsy that could be anything from a tree-felling accident to a car wreck (this distribution reflects that not all bodies are autopsied, obviously.) You start to long for the car wrecks.

The workers would tell jokes. To get these jokes you have to know that toxicology results take weeks to come back, but autopsies are typically done within a few days of death, so generally the coroners don’t know what drugs are on board when they cut up a body. First joke: any body with more than two tattoos is an opiate overdose (tattoos are virtually universal in the rural midwest). Second joke: the student residents will never recognize a normal lung (opiates kill by stopping the brain’s signal to breathe; the result is that fluid backs up in the lungs creating a distinctive soggy mess, also seen when brain signalling is interrupted by other causes, like a broken neck). Another joke: any obituary under fifty years and under fifty words is drug overdose or suicide. Are you laughing yet?

And yet this isn’t seen as a crisis, except by statisticians and public health workers. Unlike the AIDS crisis, there’s no sense of oppressive doom over everyone. There is no overdose-death art. There are no musicals. There’s no community, rising up in anger, demanding someone bear witness to their grief. There’s no sympathy at all. The term of art in my part of the world is “dirtybutts.” Who cares? Let the dirtybutts die.

Facing the Unnecessariat

You probably missed this story about the death of a woman in Oklahoma from liver disease. Go read it. I’ll wait here until you come back. Here, in a quiet article about a quiet tragedy in a quiet place, is the future of America:
Goals receded into the distance while reality stretched on for day after day after exhausting day, until it was only natural to desire a little something beyond yourself. Maybe it was just some mindless TV or time on Facebook. Maybe a sleeping pill to ease you through the night. Maybe a prescription narcotic to numb the physical and psychological pain, or a trip to the Indian casino that you couldn’t really afford, or some marijuana, or meth, or the drug that had run strongest on both sides of her family for three generations and counting.
In 2011, economist Guy Standing coined the term “precariat” to refer to workers whose jobs were insecure, underpaid, and mobile, who had to engage in substantial “work for labor” to remain employed, whose survival could, at any time, be compromised by employers (who, for instance, held their visas) and who therefore could do nothing to improve their lot. The term found favor in the Occupy movement, and was colloquially expanded to include not just farmworkers, contract workers, “gig” workers, but also unpaid interns, adjunct faculty, etc. Looking back from 2016, one pertinent characteristic seems obvious: no matter how tenuous, the precariat had jobs. The new dying Americans, the ones killing themselves on purpose or with drugs, don’t. Don’t, won’t, and know it.

Here’s the thing: from where I live, the world has drifted away. We aren’t precarious, we’re unnecessary. The money has gone to the top. The wages have gone to the top. The recovery has gone to the top. And what’s worst of all, everybody who matters seems basically pretty okay with that. The new bright sparks, cheerfully referred to as “Young Gods,” believe themselves to be the honest winners in a new invent-or-die economy, and are busily planning to escape into space or acquire superpowers, and instead of worrying about this, the talking heads on TV tell you it’s all a good thing – don’t worry, the recession’s over and everything’s better now, and technology is TOTES AMAZEBALLS!

The Rent-Seeking Is Too Damn High

If there’s no economic plan for the Unnecessariat, there’s certainly an abundance of plans to extract value from them. No-one has the option to just make their own way and be left alone at it. It used to be that people were uninsured and if they got seriously sick they’d declare bankruptcy and lose the farm, but now they have a (mandatory) $1k/month plan with a $5k deductible: they’ll still declare bankruptcy and lose the farm if they get sick, but in the meantime they pay a shit-ton to the shareholders of United Healthcare, or Aetna, or whoever. This, like shifting the chronically jobless from “unemployed” to “disabled,” is seen as a major improvement in status, at least on television.

Every four years some political ingenue decides that the solution to “poverty” is “retraining”: for the information economy, except that tech companies only hire Stanford grads, or for health care, except that an abundance of sick people doesn’t translate into good jobs for nurses’ aides, or nowadays for “the trades” as if the world suffered a shortage of plumbers. The retraining programs come and go, often mandated for recipients of EBT, but the accumulated tuition debt remains behind, payable to the banks that wouldn’t even look twice at a graduate’s resume. There is now a booming market in debtors’ prisons for unpaid bills, and as we saw in Ferguson the threat of jail is a great way to extract cash from the otherwise broke (though it can backfire too). Eventually all those homes in Oklahoma, in Ohio, in Wyoming, will be lost in bankruptcy and made available for vacation homes, doomsteads, or hobby farms for the “real” Americans, the ones for whom the ads and special sections in the New York Times are relevant, and their current occupants know this. They are denizens, to use Standing’s term, in their own hometowns.

This is the world highlighted in those maps, brought to the fore by drug deaths and bullets to the brain – a world in which a significant part of the population has been rendered unnecessary, superfluous, a bit of a pain but not likely to last long. Utopians on the coasts occasionally feel obliged to dream up some scheme whereby the unnecessariat become useful again, but it’s crap and nobody ever holds them to it. If you even think about it for a minute, it becomes obvious: what if Sanders (or your political savior of choice) had won? Would that fix the Ohio river valley? Would it bring back Youngstown Sheet and Tube, or something comparable that could pay off a mortgage? Would it end the drug game in Appalachia, New England, and the Great Plains? Would it call back the economic viability of small farms in Illinois, of ranching in Oklahoma and Kansas? Would it make a hardware store viable again in Iowa, or a bookstore in Nevada? Who even bothers to pretend anymore?

Well, I suppose you might. You’re probably reading this thinking: “I wouldn’t live like that.” Maybe you’re thinking “I wouldn’t overdose” or “I wouldn’t try heroin,” or maybe “I wouldn’t let my vicodin get so out of control I couldn’t afford it anymore” or “I wouldn’t accept opioid pain killers for my crushed arm.” Maybe you’re thinking “I wouldn’t have tried to clear the baler myself” or “I wouldn’t be pulling a 40-year-old baler with a cracked bearing so the tie-arm wobbles and jams” or “I wouldn’t accept a job that had a risk profile like that” or “I wouldn’t have been unemployed for six months” or basically something else that means “I wouldn’t ever let things change and get so that I was no longer in total control of my life.” And maybe you haven’t. Yet.

This isn’t the first time someone’s felt this way about the dying. In fact, many of the unnecessariat agree with you and blame themselves – that’s why they’re shooting drugs and not dynamiting the Google Barge. The bottom line, repeated just below the surface of every speech, is this: those people are in the way, and it’s all their fault. The world of self-driving cars and global outsourcing doesn’t want or need them. Someday it won’t want you either. They can either self-rescue with unicorns and rainbows or they can sell us their land and wait for death in an apartment somewhere. You’ll get there too.

by Anne Amnesia, MCTE/MCTW |  Read more:
Image: National Center for Health Statistics

Peter Beste, "Tiger Wood of the Hood," Fourth Ward 2005.
via:

Lars-Gunnar Nordström, Kombination, 1953
via:

A Nihilist's Guide to Meaning

I've never been plagued by the big existential questions. You know, like What's my purpose? or What does it all mean?

Growing up I was a very science-minded kid — still am — and from an early age I learned to accept the basic meaninglessness of the universe. Science taught me that it's all just atoms and the void, so there can't be any deeper point or purpose to the whole thing; the kind of meaning most people yearn for — Ultimate Meaning — simply doesn't exist.

Nor was I satisfied with the obligatory secular follow-up, that you have to "make your own meaning." I knew what that was: a consolation prize. And since I wasn't a sore loser, I decided I didn't need meaning of either variety, Ultimate or man-made.

In lieu of meaning, I mostly adopted the attitude of Alan Watts. Existence, he says, is fundamentally playful. It's less like a journey, and more like a piece of music or a dance. And the point of dancing isn't to arrive at a particular spot on the floor; the point of dancing is simply to dance. Vonnegut expresses a similar sentiment when he says, "We are here on Earth to fart around."

This may be nihilism, but at least it's good-humored.

Now, to be honest, I'm not sure whether I'm a full-bodied practitioner of Watts's or Vonnegut's brand of nihilism. Deep down, maybe I still yearn for more than dancing and farting. But by accepting nihilism, at least as an intellectual plausibility, I've mostly kept the specter of meaning from haunting me late at night. (...)

What follows is my attempt at figuring out what people mean when they talk about meaning. In particular, I want to rehabilitate the word — to cleanse it of wishy-washy spiritual associations, give it the respectable trappings of materialism, and socialize it back into my worldview. This is a personal project, but I hope some of my readers will find value in it for themselves.

by Kevin Simler, Melting Asphalt |  Read more:
Images: Melting Asphalt

[ed. Really quite excellent, read the whole thing.]

Living the Life

If you lived on another planet and depended on American pop culture to tell you what a human being is, you’d be in tears (if you had tears) but mainly you’d be baffled, especially when it came to an entity called the talent agent, who spends his days torturing people he likes (but says he hates) in order to gain benefits for individuals he hates (but says he loves). (...)

Here is a world where dignity is not uppermost. The old agencies were run by men who had four martinis for lunch. They belonged to the same country club, the same church or synagogue, they wore suits from Sears and had wives from Stepford. The good agent was a man who told lies with obvious charm, a backslapper, an arse-kisser, a tower of obstinacy, and someone who prided himself on seeing every client as a unique cause. ‘When I was at William Morris,’ the agent Ron Meyer says, ‘you felt that you were working for the Pentagon.’ They represented everything except the need for change. ‘There were all these cronies sitting on the second floor,’ Michael Ovitz adds, ‘who just hung out at the business and sucked the profits out of it.’ In 1975, Meyer and Ovitz joined forces with Bill Haber, Mike Rosenfeld and Rowland Perkins, and together they founded Creative Artists Agency (CAA). At first they were working on bridge tables with the wives answering the phones. Then: world domination. Welcome to the inside track on what Scott Fitzgerald called ‘the orgastic future’. In Britain, soap operas tend to be about poor people, and the drama of American capitalism can seem both obnoxious and ridiculous, yet the rise of CAA is a wonderful story of greed and genius.

What is a good agent? Before we go into detail I’d say that the basic thing is to answer the phone. A good agent is a person with bargaining skill, a professional who can negotiate his way in and out of lucrative situations, one who has a certain amount of clairvoyance about what the business needs and what the client can do. But a brilliant agent is a person with information. Often, with a poor agent, the client is the one with the information, who makes the bullets for the agent to fire. A brilliant agent will typically have an arsenal unknown to the client, and unknown to anyone. Such an agent will know what the plan is before the client is out of bed in the morning. He will build a career, not do a deal; he will see the bumps in the road and smooth them, or install the talent in a bigger truck, so that they don’t feel the bumps. This agent will not manage expectations: he will produce, direct and dress expectations, he will light them, and he will bring the client to achieve things that nobody expected, especially the client. Agenting is full of lazy people who do nothing but sit down waiting for luck: the great agents construct the worlds they profit from, and, in the movie business, they make those worlds go round, feasting on the film executives’ perpetual fear that they might miss the next big thing. A person who does this brilliantly can become a legend in their own lunchtime, someone who is slightly beyond the habits of normal living. ‘I think I’m going to be a hundred and ten,’ says Bill Haber, the least egotistical of CAA’s founders. ‘I’m going to go down kicking and screaming. There will be nobody at my funeral because you’ll all be dead.’

The great agent becomes great not by knowing everything, but by seeming to know something. The young director whom they all want to sign is a person with creative insight and commercial sense that only the great agent can divine, and the marriage between a talented person and his worldly representative is one of the odder sorts of arrangement that our culture has devised. The marriage vows are based on the notion that nothing is luck and everything is knowledge. It was in the 1970s that agenting in Hollywood got to the point where the agents were in charge, ‘packaging’ their talent in a manic, prodigiously cross-fertilising way. CAA taught Hollywood how to do this and it changed the nature of film-making. The five guys who founded CAA also understood that new things were happening with the technology: film stars wanted to work in TV, and soon, films were also about videos, and soon after that, computer games would require storytellers – and that’s before we even get to the internet. In the age of ‘streaming’, of Netflix, Amazon and YouTube, there are 700 agents at CAA, but the story told in James Andrew Miller’s riveting book is really about the personalities who invented the game. It is, more particularly, the story of what Michael Ovitz gave to the world and what that world took away from him. It’s Citizen Kane to a disco beat with the moral sophistication of Forrest Gump.

The ‘package’ deals arranged by CAA were revolutionary, not only in giving the agent a producer’s role, but in letting him put in place the scriptwriter, the director, everyone, before taking it to a studio. This didn’t mean the agent could give up wrangling what we might call the everyday human problems, such as those endured on the last of the Pink Panther movies. The CAA agent had got Peter Sellers three million dollars to do the film. He got Blake Edwards, who hated Sellers, the same amount (not to direct, but because he co-owned the rights). The agent also represented the scriptwriter, the director, and two of the producers. ‘It was about a $9 million package,’ the producer Adam Fields says. ‘That was game-changing for that agency. And every day, without exception, Peter [Sellers] would call, usually at five … either quitting the business or quitting Panther and every day Marty [the agent] would talk him off the cliff, because so much was riding on the movie.’ Sometimes, Sellers would call up and imitate the voice of someone from the studio backing the movie, saying terrible things about himself, and then he would call back in his own voice to complain about all the things the agent had said about him ‘when his back was turned’. Blake Edwards and Sellers both lived in Gstaad and there were only two good restaurants in Gstaad, so – to avoid them ever meeting – the restaurants had to be rung each day to see if either of the ‘talents’ had booked a table, and, if so, a table would hastily be booked at the other place for the other client.

You could call this ‘The Pimp’s Tale’, but in Hollywood the pimp is never less than the second most brilliant guy in the room. Bizarrely, in a universe of supersonic egos, the good CAA agent was trained to drop his. A client was represented by the whole group, not just by one person, and they’d all pitch in, which meant the client was tuned into every department and had group muscle around him. (‘The primary pronoun at staff meetings was always “we”,’ the agent John Ptak says, ‘“We are doing this; we are doing that.” That never happened anywhere else.’) It is sometimes a symphony of bathos nonetheless, as one poor agent, brilliant but fearful, has to pay obeisance to some majorly talented nut-bag or other, just to keep him from storming out. ‘We used to have meetings with Prince,’ the music agent Tom Ross says. ‘He would sit in the room with his back to us, and we weren’t allowed to make eye contact at any time. We were told: “Do not look at him. If you look at him, you will probably lose a client.”’ Prince wanted a film, and he didn’t want a film where he played the part ‘of a drug dealer or some jeweller’, and they got him Purple Rain, on which he demanded that his name be above the title. Around the same time, they say, Madonna arrived at the office with her little dog and immediately demanded that a bowl of Perrier water be found for the animal.

It’s really just a giant, gold-plated playground. Barbra’s not talking to Suzy because Suzy didn’t tell her about Zimmerman’s movie and Brad is finished with Leonardo because Leo got the part in ‘The Aviator’ that he wanted, and, anyway, Marty was his friend not Leo’s. And yet, as followers of British politics know, there is no shortage of life in the persistently grotesque, and I was impressed by how ingenious these agents were at riding rapids, clients in tow. The thing about people in showbusiness is that they never imagine they’re just in showbusiness: they imagine their work is an aspect of international relations, part of God’s plan. (...)

Being an agent isn’t a job, it’s a lifestyle, and the people who are really good at it are having a wonderful life, though none of them is going to heaven. The agents at CAA sometimes got speeding tickets on the way to work, not because they were late, but because they couldn’t wait to get to the office. Every night there was a drink or a dinner or a screening or a premiere, and they earned millions of dollars a year. Agents live the whole thing 24 hours a day; their motto: ‘Shit Happens.’ And they put up with infinitely more shit than the average office worker, who lives in the expectation that nobody will ever ask them to risk their position, or justify it, or state an opinion, or invent something that isn’t already in front of them. Adam Venit went on to become one of the most powerful film agents; once, in his early days, he was told that there was a stain on the office carpet in the reception area. He told his boss that he’d just graduated from one of the top law schools in the country and wouldn’t be scrubbing any carpets. At that point, one of the senior men came into the office and said to him that if he didn’t want to remove the stain it meant he didn’t want to work there. ‘He literally handed me a spray bottle and a sponge,’ Venit says, ‘and I went out to the lobby and Sylvester Stallone is sitting in the lobby, wearing a big double-breasted suit with wingtip shoes, and one of his big wingtip shoes is pointing to the stain. I had to bow down to him and wipe the stain away to the point where he says: “Want to get my shoe while you’re down there?” The beauty of Hollywood is I now represent Sylvester, and we laugh about this story.’

by Andrew O'Hagan, LRB |  Read more:
Image: CAA building via: 

Thursday, October 27, 2016

The Rise of Dating-App Fatigue

“Apocalypse” seems like a bit much. I thought that last fall when Vanity Fair titled Nancy Jo Sales’s article on dating apps “Tinder and the Dawn of the ‘Dating Apocalypse’” and I thought it again this month when Hinge, another dating app, advertised its relaunch with a site called “thedatingapocalypse.com,” borrowing the phrase from Sales’s article, which apparently caused the company shame and was partially responsible for their effort to become, as they put it, a “relationship app.”

Despite the difficulties of modern dating, if there is an imminent apocalypse, I believe it will be spurred by something else. I don’t believe technology has distracted us from real human connection. I don’t believe hookup culture has infected our brains and turned us into soulless sex-hungry swipe monsters. And yet. It doesn’t do to pretend that dating in the app era hasn’t changed.

The gay dating app Grindr launched in 2009. Tinder arrived in 2012, and nipping at its heels came other imitators and twists on the format, like Hinge (connects you with friends of friends), Bumble (women have to message first), and others. Older online dating sites like OKCupid now have apps as well. In 2016, dating apps are old news, just an increasingly normal way to look for love and sex. The question is not whether they work, because they obviously can, but how well they work. Are they effective and enjoyable to use? Are people able to use them to get what they want? Of course, results can vary depending on what it is people want—to hook up or have casual sex, to date casually, or to date as a way of actively looking for a relationship. (...)

Sales’s article focused heavily on the negative effects of easy, on-demand sex that hookup culture prizes and dating apps readily provide. And while no one is denying the existence of fuckboys, I hear far more complaints from people who are trying to find relationships, or looking to casually date, who just find that it’s not working, or that it’s much harder than they expected.

“I think the whole selling point with dating apps is ‘Oh, it’s so easy to find someone,’ and now that I’ve tried it, I’ve realized that’s actually not the case at all,” says my friend Ashley Fetters, a 26-year-old straight woman who is an editor at GQ in New York City.

The easiest way to meet people turns out to be a really labor-intensive and uncertain way of getting relationships. While the possibilities seem exciting at first, the effort, attention, patience, and resilience it requires can leave people frustrated and exhausted.

“It only has to work once, theoretically,” says Elizabeth Hyde, a 26-year-old bisexual law student in Indianapolis. Hyde has been using dating apps and sites on and off for six years. “But on the other hand, Tinder just doesn’t feel efficient. I’m pretty frustrated and annoyed with it because it feels like you have to put in a lot of swiping to get like one good date.”

I have a theory that this exhaustion is making dating apps worse at performing their function. When the apps were new, people were excited, and actively using them. Swiping “yes” on someone didn’t inspire the same excited queasiness that asking someone out in person does, but there was a fraction of that feeling when a match or a message popped up. Each person felt like a real possibility, rather than an abstraction.

The first Tinder date I ever went on, in 2014, became a six-month relationship. After that, my luck went downhill. In late 2014 and early 2015, I went on a handful of decent dates, some that led to more dates, some that didn’t—which is about what I feel it’s reasonable to expect from dating services. But in the past year or so, I’ve felt the gears slowly winding down, like a toy on the dregs of its batteries. I feel less motivated to message people, I get fewer messages from others than I used to, and the exchanges I do have tend to fizzle out before they become dates. The whole endeavor seems tired.

“I’m going to project a really bleak theory on you,” Fetters says. “What if everyone who was going to find a happy relationship on a dating app already did? Maybe everyone who’s on Tinder now are like the last people at the party trying to go home with someone.”

Now that the shine of novelty has worn off these apps, they aren’t fun or exciting anymore. They’ve become a normalized part of dating. There’s a sense that if you’re single, and you don’t want to be, you need to do something to change that. If you just sit on your butt and wait to see if life delivers you love, then you have no right to complain.

“Other than trying to go to a ton of community events, or hanging out at bars—I’m not really big on bars—I don’t feel like there’s other stuff to necessarily do to meet people,” Hyde says. “So it’s almost like the only recourse other than just sort of sitting around waiting for luck to strike is dating apps.”

But then, if you get tired of the apps, or have a bad experience on them, it creates this ambivalence—should you stop doing this thing that makes you unhappy or keep trying in the hopes it might yield something someday? This tension may lead to people walking a middle path—lingering on the apps while not actively using them much. I can feel myself half-assing it sometimes, for just this reason. (...)

Whenever using a technology makes people unhappy, the question is always: Is it the technology’s fault, or is it ours? Is Twitter terrible, or is it just a platform terrible people have taken advantage of? Are dating apps exhausting because of some fundamental problem with the apps, or just because dating is always frustrating and disappointing? (...)

For this story I’ve spoken with people who’ve used all manner of dating apps and sites, with varied designs. And the majority of them expressed some level of frustration with the experience, regardless of which particular products they used.

Whatever the problem is, I don’t think it can be solved by design. Let’s move on.

It's possible dating app users are suffering from the oft-discussed paradox of choice. This is the idea that having more choices, while it may seem good… is actually bad. In the face of too many options, people freeze up. They can’t decide which of the 30 burgers on the menu they want to eat, and they can’t decide which slab of meat on Tinder they want to date. And when they do decide, they tend to be less satisfied with their choices, just thinking about all the sandwiches and girlfriends they could have had instead.

The paralysis is real: According to a 2016 study of an unnamed dating app, 49 percent of people who message a match never receive a response. That’s in cases where someone messages at all. Sometimes, Hyde says, “You match with like 20 people and nobody ever says anything.”

“There’s an illusion of plentifulness,” as Fetters put it. “It makes it look like the world is full of more single, eager people than it probably is.”

Just knowing that the apps exist, even if you don’t use them, creates the sense that there’s an ocean of easily accessible singles that you can dip a ladle into whenever you want.

“It does raise this question of: ‘What was the app delivering all along?’” says Moira Weigel, the author of Labor of Love: The Invention of Dating. “And I think there’s a good argument to be made that the most important thing it delivers is not a relationship, but a certain sensation that there is possibility. And that’s almost more important.”

Whether someone has had luck with dating apps or not, there’s always the chance that they could. Perhaps the apps’ actual function is less important than what they signify as a totem: A pocket full of maybe that you can carry around to ward off despair.

by Julie Beck, The Atlantic |  Read more:
Image: Chelsea Beck

The Meatball or the Worm?


There's an episode of Star Trek: The Next Generation titled “The Royale”, first aired in 1989, in which the U.S.S. Enterprise encounters a mysterious field of wreckage orbiting a distant planet. The starship's crew beams a chunk of the debris aboard, and determines that it's a fragment of the Charybdis, a vessel launched from Earth in 2037 and lost thereafter. It's easy enough to identify, because the fragment they take aboard displays two visible symbols: an American flag bearing 52 stars (putting it, they say, in the mid-21st century) and a logotype of four curvy letters reading N-A-S-A.

“The Royale” has been looping around the planet in reruns since it premiered. Richard D. James, the show's production designer, surely never imagined that his team was inadvertently setting up a big fat continuity error. But in 1992, just three years after the episode aired, the National Aeronautics and Space Administration dumped that sinuous logo and systematically began to strip it from buildings, documents, uniforms, and spacecraft, replacing it with an older insignia — nicknamed the Meatball — that the agency had retired nearly two decades before. It was a quixotically vigorous effort for something so symbolic, and the strange path that led there raises the existential question graphic designers face: How important is what we do? And another question: How did the Meatball defeat the Worm?

by Christopher Bonanos, Standards Manual |  Read more:
Image: NASA

Wednesday, October 26, 2016


The Scent of Green Papaya (1993) dir. by Trần Anh Hùng

via:

The Breeders

Panama: The Hidden Trillions

In a seminar room in Oxford, one of the reporters who worked on the Panama Papers is describing the main conclusion he drew from his months of delving into millions of leaked documents about tax evasion. “Basically, we’re the dupes in this story,” he says. “Previously, we thought that the offshore world was a shadowy, but minor, part of our economic system. What we learned from the Panama Papers is that it is the economic system.”

Luke Harding, a former Moscow correspondent for The Guardian, was in Oxford to talk about his work as one of four hundred–odd journalists around the world who had access to the 2.6 terabytes of information about tax havens—the so-called Panama Papers—that were revealed to the world in simultaneous publication in eighty countries this spring. “The economic system is, basically, that the rich and the powerful exited long ago from the messy business of paying tax,” Harding told an audience of academics and research students. “They don’t pay tax anymore, and they haven’t paid tax for quite a long time. We pay tax, but they don’t pay tax. The burden of taxation has moved inexorably away from multinational companies and rich people to ordinary people.”

The extraordinary material in the documents drew the curtain back on a world of secretive tax planning, just as WikiLeaks had revealed the backroom chatter of diplomats and Edward Snowden had shown how intelligence agencies could routinely scoop up vast server farms of data on entire populations. The Panama Papers—a name chosen for its echoes of Daniel Ellsberg’s 1971 leak of the Pentagon Papers—unveiled how a great many rich individuals used one Panamanian law firm, Mossack Fonseca (“Mossfon” for short), to shield their money from prying eyes, whether it was tax authorities, law enforcement agencies, or vengeful former spouses.

Tax havens are supposed to be secret. Mossfon itself, for instance, only knew the true identity of the beneficial owner—a person who enjoys the benefits of ownership even though title to the company is in another name—of 204 Seychelles companies out of the 14,000 it operated at any one time. The Panama leak blew open that omertà in a quite spectacular fashion. The anonymous source somehow had access to the Mossfon financial records and leaked virtually every one over the firm’s forty years of existence—handing to reporters some 11.5 million documents. By comparison, the Pentagon Papers—the top-secret Vietnam War dossier leaked to The New York Times by Ellsberg—ran to around seven thousand pages. Harding estimates that it would take one person twenty-seven years to read through the entire Panama Papers.

Why did the source leak the papers? In a two-thousand-word manifesto released after the publication of the main material, he or she claimed to be motivated by a desire to expose income inequality—and the way in which the “wealth management” industry had financed crime, war, drug dealing, and fraud on a grand scale.

“I decided to expose Mossack Fonseca because I thought its founders, employees and clients should have to answer for their roles in these crimes, only some of which have come to light thus far,” he or she wrote. “It will take years, possibly decades, for the full extent of the firm’s sordid acts to become known. In the meantime, a new global debate has started, which is encouraging.” (...)

By the end of 1959 about $200 million was on deposit abroad. By 1961 the total had hit $3 billion, by which time offshore financial engineering “was spreading to Zurich, the Caribbean, and beyond” as jurisdiction after jurisdiction got in on the game. Today, the economist Gabriel Zucman estimates that there is $7.6 trillion of household wealth in tax havens globally—around 8 percent of the world’s wealth.

Ronen Palan, professor of international politics at City University London, gives a similar account of the birth of tax havens, a process that took about ten years, in The Offshore World (2003). “These satellites of the City were simply booking offices: semifictional way stations on secretive pathways through the accountants’ workbooks,” writes the journalist Nicholas Shaxson. “But these fast-growing, freewheeling hide-holes helped the world’s wealthiest individuals and corporations, especially the banks, to grow faster than their more heavily regulated onshore counterparts.”

Thus began a race to the deregulatory bottom. Each time one haven changes its laws to attract more funds, the rival havens have to respond. “This race has an unforgiving internal logic,” writes Shaxson. “You deregulate—then when someone else catches up with you, you must deregulate some more, to stop the money from running away.”

He describes how the US eventually found it impossible to resist the lure of hot money, with a gradual blurring of the onshore and offshore escape routes from financial regulation. The end result is as described by Harding: the offshore world becomes inextricably embedded in the global political economy.

by Alan Rusbridger, NYRB | Read more:
Image: via: