Monday, December 28, 2015

The Year Media Started Doubting the Web

In 2015, news on the Internet no longer belonged to the web alone.

This was the year that Snapchat, previously best known as a messaging service for ephemeral photos, launched its own version of the news called Discover. It was the year that Facebook figured news articles on the web loaded too slowly, so it decided to make them instant. It was the year that Apple’s latest version of iOS came with its own app for reading news on iPhones. Meanwhile, Twitter got into the news game with Moments, its attempt to make the service easier for n00bs to understand by using real humans to curate the news tweet by tweet.

It’s no coincidence that the proliferation of platforms serving up articles and videos, spanning news, entertainment, and sports, happened in the same year. Publishers have grown steadily more dependent on Google and Facebook over the past decade for directing attention to their sites. But as audiences spend more and more of their time on mobile, that dependency has become more acute. The biggest tech companies are all vying for mobile users’ attention, which they’ve increasingly lured to apps and away from the web, publishers’ traditional online home. But the Facebooks, Twitters, and Snapchats of the world need interesting stuff for audiences to see once they’re there. And for that they need publishers.

In 2015, publishers cautiously sought to find out whether ceding some control to platforms could yield a beneficial symbiosis. In 2016, we’ll find out whether moving beyond the web helps the makers of news gain bigger, more interested audiences, or if they’re just small-time vassals who have no choice but to pay tribute to their attention-grabbing overlords.

Writers Blocked

One of the main factors contributing to a shift away from the web is that we’re spending less of our time there. Or, rather, we’re spending more of our time on our phones.

For publishers, that’s a problem. Most people spend the majority of their time on smartphones in a handful of apps like Facebook. They’re not on the web, and they’re also not likely to download and switch among apps from every news organization whose stories they may want to read. Native apps for publishers are not only costly to design and produce, but also unlikely to reach as wide an audience as, say, Facebook already does. Sure, The New York Times and BuzzFeed may find a loyal following with standalone apps. But to reach anyone beyond diehards, even the biggest publishers depend on social media.

But the increasing magnetism of mobile wasn’t the only important shift this year. Even as digital ad spending was poised to exceed ad dollars spent on TV, 2015 was also the year that blocking ads on the web went mainstream, as even Apple began supporting ad-blocking on its mobile devices.

For advertisers, the popularity of ad-blocking became a very real worry. The Interactive Advertising Bureau, an industry trade group, publicly apologized for the fact that digital advertising has gotten out of hand, stoking demand for software that could block the pervasive annoyance of online sales pitches. “The rise of ad blocking poses a threat to the internet and could potentially drive users to an enclosed platform world dominated by a few companies,” wrote Scott Cunningham, the senior vice president of tech and ad operations at IAB.

Anxiety around ad-blockers could mean that advertisers direct more of their dollars to platforms and less to web-dependent publishers directly. And if that’s where the dollars start to head, publishers see they need to head there as well. Not only do Facebook’s Instant Articles, say, or Apple News offer a more streamlined user experience for consuming news, but many also offer a significant portion of ad revenues—ads that advertisers know can’t be blocked.

by Julia Greenberg, Wired |  Read more:
Image: Bryan Derballa via:

Why Life Is Absurd

[ed. As 2015 comes to a close I'll be reposting a few favorites out of this year's archive.]  

In the 1870s, Leo Tolstoy became depressed about life’s futility. He had it all but so what? In “My Confession,” he wrote: “Sooner or later there will come diseases and death (they had come already) to my dear ones and to me, and there would be nothing left but stench and worms. All my affairs, no matter what they might be, would sooner or later be forgotten, and I myself should not exist. So why should I worry about these things?”

Life’s brevity bothered Tolstoy so much that he resolved to adopt religious faith to connect to the infinite afterlife, even though he considered religious belief “irrational” and “monstrous.” Was Tolstoy right? Is life so short as to make a mockery of people and their purposes and to render human life absurd?

In a famous 1971 paper, “The Absurd,” Thomas Nagel argues that life’s absurdity has nothing to do with its length. If a short life is absurd, he says, a longer life would be even more absurd: “Our lives are mere instants even on a geological time scale, let alone a cosmic one; we will all be dead any minute. But of course none of these evident facts can be what makes life absurd, if it is absurd. For suppose we lived forever; would not a life that is absurd if it lasts 70 years be infinitely absurd if it lasted through eternity?”

This line of reasoning has a nice ring to it but whether lengthening an absurd thing will relieve it of its absurdity depends on why the thing is absurd and how much you lengthen it. A longer life might be less absurd even if an infinite life would not be. A short poem that is absurd because it is written in gibberish would be even more absurd if it prattled on for longer. But, say I decided to wear a skirt so short it could be mistaken for a belt. On my way to teach my class, a colleague intercepts me:

“Your skirt,” she says, “is absurd.”

“Absurd? Why?” I ask.

“Because it is so short!” she replies.

“If a short skirt is absurd, a longer skirt would be even more absurd,” I retort.

Now who’s being absurd? The skirt is absurd because it is so short. A longer skirt would be less absurd. Why? Because it does not suffer from the feature that makes the short skirt absurd, namely, a ridiculously short length. The same goes for a one-hour hunger strike. The point of a hunger strike is to show that one feels so strongly about something that one is willing to suffer a lack of nourishment for a long time in order to make a point. If you only “starve” for an hour, you have not made your point. Your one-hour hunger strike is absurd because it is too short. If you lengthened it to one month or one year, you might be taken more seriously. If life is absurd because it’s short, it might be less absurd if it were suitably longer.

Absurdity occurs when things are so ill-fitting or ill-suited to their purpose or situation as to be ridiculous, like wearing a clown costume to a (non-circus) job interview or demanding that your dog tell you what time it is. Is the lifespan of a relatively healthy and well-preserved human, say somewhere between 75 and 85, so short as to render it absurd, ill-suited to reasonable human purposes? (...)

What if we lived for, say, 500 or 1,000 years? Would our ambition tend to grow to scale, making life seem absurdly short for human purposes, whatever its length? Is it human nature to adopt outsized ambitions, condemning ourselves to absurdity by having conceptions of reasonable achievement that we don’t have the time to realize? Why haven’t we scaled down our ambitions to fit the time we have? Is the problem our nature or our lifespan?

There may be no way to be sure but consider the fact that, although we have ambitions unsuited to our lifespan, we don’t seem to consistently adopt ambitions unsuited to our species in respects other than time. It’s not absurd to us that we cannot fly or hibernate. We don’t think the fact that we can hold our breath for minutes rather than hours or memorize a few pages rather than a tome makes human life meaningless. We don’t find that our inability to read each other’s minds, speak to animals, glow in the dark, run 60 miles an hour, solve complex equations in our heads simultaneously or lift thousand-pound weights makes a sad mockery of human existence. This makes it more likely that, given a longer lifespan, life might seem less absurdly short for our purposes.

Just as a lifespan can be too short, it can be too long. For many, it is far too long already. Many people are bored with life, irritated by the human condition, exhausted from suffering, tired of living. For those for whom life is too long, a longer life would be worse and, quite possibly, more absurd. For some, however, life seems too long because it’s too short, meaning life is rendered so absurd by being short that even a short absurd life feels too long because it is pointless. A life made absurd because it is too short would be rendered less absurd if it were significantly longer.

A million-year or infinite life might be too long for human nature and purposes too, though such a life would be so radically different that we can only speculate. An infinite life might become tedious, and people world-weary. Lifetime love commitments, a source of meaning now, would likely cease to exist. A million-year or infinite lifespan might be too long and slip into absurdity. To everything its time. Both a too short lifespan and a too long lifespan present absurdist challenges to a meaningful life.

by Rivka Weinberg, NY Times | Read more:
Image: Leif Parsons
[ed. Repost]

Sunday, December 27, 2015

The Return of the Harmonica

As the general manager of Don Wehr’s Music City in San Francisco in the late 1960s, Reese Marin sold guitars, drums, keyboards, and amps to the era’s biggest psychedelic rock bands. His customers ranged from Big Brother and the Holding Company and Quicksilver Messenger Service to Jefferson Airplane and the Grateful Dead. Guitarists as musically diverse as Carlos Santana and Steve Miller could find what they were looking for at Don Wehr’s; so did jazz virtuosos George Benson and Barney Kessel, who would walk down Columbus Avenue from Broadway in North Beach—where the jazz clubs competed with strip joints for tourists—whenever they were in town.

These legends were some of the most demanding and finicky musicians on the planet. So it should have been easy for Marin to sell a couple of $5 harmonicas to Lee Oskar, whose melodic riffs on hits like “Cisco Kid,” “The World is a Ghetto,” and “Low Rider” gave one of the biggest bands of the 1970s, WAR, its signature sound. Oskar, however, heard imperfections in his chosen instrument that Marin didn’t know existed. Oskar was not tentative in his quest for what he considered a “gig-worthy” harmonica. “I spent all my money on harmonicas,” Oskar told me recently, “just to find 1 out of 10 that was any good.”

Marin says Oskar was exaggerating, but not by much. He was actually behind the counter when Oskar made his first of many visits to Don Wehr’s and asked to play all of the harmonicas the store had in stock in C, A, F, G, and E—the keys where rock bands live and die. On any given day, Marin maintained an inventory of 10 to 20 harmonicas in each key for each model they sold. That was a lot of harmonicas for Oskar to put his mouth on, so Marin decided to be firm. “I said, ‘You can’t play ’em unless you buy ’em,’” Marin told me, “and he said, ‘I don’t mind.’”

Shrugging, Marin rang him up, then Oskar proceeded to play every single harmonica on the sales counter, which he then divided into two piles—one for the gig-worthy harmonicas and another for the rejects, which were 80 to 90 percent of the total. “When he was done, I said, ‘Lee, what do you want me to do with all these harmonicas?’ and he said, ‘I don’t really care. I can’t use them.’” Marin ended up giving away a lot of used Lee Oskar-played harmonicas. “Lee did this over and over, every time he was in town,” says Marin. “It was crazy.”

Until relatively recently, playing a harmonica was sort of crazy, too, since doing so was essentially the same thing as destroying it. For harmonicas like the Hohner Marine Bands Oskar road-tested that day at Don Wehr’s, a player’s saliva would soak into the wood inside the instrument, causing it to swell. At the end of a gig, the wood would dry out and shrink. This process would repeat itself over and over, until the wood had swelled and shrunk so many times it would split and splinter, often causing a player’s lips to bleed. “I used to hack off the ends of the combs on my harmonicas with a carpet knife,” recalls Steve Baker, a London-born harmonica player and an authority on the Marine Band. Most players would never do that, of course, content to just toss their worn-out wrecks in the trash.

For Hohner, this must have seemed like a very good business model. After all, the Marine Band had been Hohner’s most popular harmonica brand almost since 1896, the year it was introduced. In the United States, in the first half of the 20th century, American folk musicians and blues artists alike embraced the Marine Band as their own, giving the instrument originally designed to play traditional German folk tunes an aura of cool. With sales soaring after World War II, Hohner found itself making an instrument everybody wanted, even though it needed to be replaced regularly. How could a manufacturer’s product get any better than that?

Well, answered harmonica players and a small but influential community of harmonica customizers, how about an instrument that doesn’t wear out, is built to be serviced and tuned to a musician’s needs, and is made out of materials that don’t cause our lips to bleed? (...)

To understand why the Marine Band was such a favorite for musicians, it helps to know a little about how the instrument works, beginning with a mental picture of its guts. The Marine Band is what’s called a “diatonic” harmonica. It’s built out of five parts, which are stacked together like a sandwich (in fact, “tin sandwich” is just one of the instrument’s colorful aliases, “Mississippi saxophone” being another). In the center is the comb, on the top and bottom of which are two matching metal plates; those plates have been punched with rectangular holes, which align with the voids in the comb. Partially covering these holes are two rows of reeds, which vibrate in and out of the holes to produce a harmonica’s sound. Cover plates give the player something to grip, while openings at the back of the plates give the sound somewhere to go.
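
[ed. For a concrete picture of how those two rows of reeds line up hole by hole, here's a minimal Python sketch of a ten-hole diatonic's note layout. The article doesn't name a key or tuning, so the standard Richter tuning in C, the layout a stock Marine Band in C uses, is assumed for illustration.]

# Hole-by-hole note layout of a 10-hole diatonic harmonica in C
# (standard Richter tuning, assumed for illustration). Each hole is
# covered by one blow reed on the top plate and one draw reed on the
# bottom plate.
RICHTER_C = {
    #  hole: (blow note, draw note)
    1:  ("C4", "D4"),
    2:  ("E4", "G4"),
    3:  ("G4", "B4"),
    4:  ("C5", "D5"),
    5:  ("E5", "F5"),
    6:  ("G5", "A5"),
    7:  ("C6", "B5"),
    8:  ("E6", "D6"),
    9:  ("G6", "F6"),
    10: ("C7", "A6"),
}

for hole, (blow, draw) in RICHTER_C.items():
    print(f"hole {hole:2d}: blow {blow}, draw {draw}")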

No single component of the Marine Band can claim credit for its signature sound, but if any part of a harp’s composition could be deemed especially critical, it would be the reeds. Unlike the reeds in wind instruments like saxophones and clarinets, which are made of organic material like bamboo, harmonica reeds are made of metal, usually the same stuff as the reed plate in which they vibrate. “It’s a dreadfully complicated topic,” Baker says. There’s the reed’s composition, how it’s hardened, and also its final degree of hardness. Lots of metals will work, but the degree of hardness is different for each one. And the parameters for a given material—bronze, stainless steel, or the brass alloys that Hohner uses—are very fine. “In the end,” Baker says, “it means people are trying out lots of shit until it works.”

For some reason, Hohner got all of this right with the Marine Band, which may explain why the company viewed with suspicion anything that did not conform to its sense of harmonica perfection. “Bending” notes, for example, must have seemed an especially black art.

Bent notes are one of the most recognizable auditory tropes in the blues, and any harmonica player who cannot get the note he’s playing to drop in pitch, or bend, might as well take up German folk tunes. “Until I started working for Hohner, they didn’t even know what happened when you bent notes,” Baker told me. Once upon a time, someone at Hohner must have understood how it worked, but in the late 1980s, Baker was the guy who explained it to Hohner again, right down to the physics of what bending does to the reeds (you can read his explanation for yourself in “The Harp Handbook,” published in 1990).

From Hohner’s perspective, bending notes represented a malfunction of the instrument, because it’s not what a Marine Band harmonica was designed to do. That, of course, does not mean it cannot be done, as any blues player knows.

The secret is in the reeds, two of which block the air in every hole, or channel, of a diatonic harmonica like a Marine Band. For those reeds to work together, the player needs to go for the throat—literally. In order to bend a note, a harmonica player has to physically change the length of the air column in his throat, which forces the higher pitch of the two reeds downward. Meanwhile, the opposing reed, which normally would only begin vibrating due to a blow air stream, starts vibrating in the draw air stream. It’s the interaction of these two pitches that creates a bent note. “When I explained all this to the people at Hohner,” Baker says, “they regarded it as a malfunction because notes in-between the 12-tone scale aren’t common in European classical or folk music.”
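
[ed. The arithmetic behind those bends can be sketched too: on holes where the draw reed sits above the blow reed, the drawn note can be pulled down toward, but not quite to, the blow reed's pitch, one bend per semitone in between; on the top holes the relationship reverses and the bends become blow bends. This is standard diatonic bending theory rather than anything stated in the piece, and the Richter-in-C layout is again an assumption.]

# Count the bends available in each hole of a C diatonic (Richter tuning,
# assumed). A bend exists for each whole semitone between a hole's two reed
# pitches, minus one; which reed bends depends on which one sits higher.
BLOW = ["C4", "E4", "G4", "C5", "E5", "G5", "C6", "E6", "G6", "C7"]
DRAW = ["D4", "G4", "B4", "D5", "F5", "A5", "B5", "D6", "F6", "A6"]

PITCH_CLASS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
               "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def semitone(note):
    """Convert a note name like 'G4' into an absolute semitone number."""
    return int(note[-1]) * 12 + PITCH_CLASS[note[:-1]]

for hole, (blow, draw) in enumerate(zip(BLOW, DRAW), start=1):
    gap = semitone(draw) - semitone(blow)
    draw_bends = max(gap - 1, 0)    # draw reed sits higher: holes 1-6
    blow_bends = max(-gap - 1, 0)   # blow reed sits higher: holes 7-10
    print(f"hole {hole:2d}: {draw_bends} draw bend(s), {blow_bends} blow bend(s)")

[ed. Hole 3, where the two reeds sit a major third apart, comes out with the most room to bend.]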

That explanation occurred some time after 1987, when Baker began consulting for Hohner. By then, Baker had learned what turned the company’s best-selling instrument into a piece of junk. For one thing, the milling tools used to cut those all-important reeds and reed slots were not being sharpened or replaced, causing sloppy work. In addition, the company’s protocols for tuning, which required all Hohner harmonicas, including Marine Bands, to be tuned three times, with rest periods in-between so the material could settle, were scrapped. “They cut out all of that because it was an easy way to make more money,” says Baker.

By this time, Lee Oskar had become so fed up with the quality of Marine Bands that he started his own harmonica company. “I had never thought of going into business to manufacture harmonicas,” Oskar says, “but I needed tools that could live up to my expectations.”

by Ben Marks, Craftsman |  Read more:
Image: via: and InterstateMusic.com

Leo Kottke

Saturday, December 26, 2015


Physics dad
via:

Like a Prayer


Even secular people need time out to meditate, reflect, and give thanks. Is prayer the answer?

My soul – if I have one, which is still up for debate – is an angry misfit type of soul. It’s not a soul that likes cashew cheese or people who talk about their spirit animals. My soul likes a nice yoga class as much as the next soul, but it wishes the blankets there weren’t so scratchy, and that they’d play better music, and that the lady across the room wouldn’t chat nervously through the whole goddamn thing like her soul has been snorting crystal meth all morning. My soul would like for all the other souls to shut the fuck up once in a while.

My soul is not necessarily allergic to spirituality or to religion itself. It just feels suspicious towards bossy, patriarchal gods dreamt up by bossy patriarchs. Not that my soul doesn’t recognise that it’s a product of its environment! My soul is the first to admit that if my mother weren’t agnostic and I weren’t raised Catholic and I didn’t have a premature existential crisis after watching Horton Hears a Who! (1970) when I was eight, I could just go to church like all the other people who don’t like cashew cheese or wind chimes or men in linen pants. Then I could file into a pew and fold my hands in prayer and ask forgiveness for being such an irritable jackass. Unfortunately, my soul has spent lots of time with the Lord, and my soul is just not that into Him.

I’m not alone on that front. In Religion for Atheists (2012), the philosopher Alain de Botton writes that although religions have a lot to offer – they ‘deliver sermons, promote morality, engender a spirit of community, make use of art and architecture, inspire travels, train minds and encourage gratitude at the beauty of spring’ – it can be hard for atheists to reap those benefits.

We might not need to know why we’re here, but most of us want to feel like we’re in touch with something bigger than our own fluctuating moods and needs, and that we’re pointed in the right direction. But prayer isn’t just a spiritual version of Google Earth. Beyond asking for guidance or expressing gratitude, it can be a way of nudging our intentions toward action. As Philip and Carol Zaleski explain in Prayer: A History (2005), ‘Prayer is speech, but much richer than speech alone. It is a peculiar kind of speech that acts, and a peculiar kind of action that speaks to the depths and heights of being.’

That sounds like a pretty tall order, until you consider how fundamental prayer has been to humankind since prehistoric times. There’s some evidence that Neanderthals buried their dead surrounded by flowers, and scholars have suggested that engraved bones from the site at Laugerie Basse in southwestern France depict humans engaged in prayer. Prayer has been used to ask for protection or rainfall, for inspiration, answers or healing, as well as in thanks or celebration or mourning. Prayer can communicate adoration or devotion, ecstasy or ‘mystical union’ according to the Zaleskis, who must be Jeff Buckley fans. But however prayer is used, it makes simple sense that it should feel more received than invented. So where does that leave those of us intent on inventing a prayer for ourselves out of thin air?

by Heather Havrilesky, Aeon |  Read more:
Image: Vilhelm Hammershoi

Should AI Be Open?

All this likewise indubitably belonged to history, and would have to be historically assessed; like the Murder of the Innocents, or the Black Death, or the Battle of Paschendaele. But there was something else; a monumental death-wish, an immense destructive force loosed in the world which was going to sweep over everything and everyone, laying them flat, burning, killing, obliterating, until nothing was left…Nor have I from that time ever had the faintest expectation that, in earthly terms, anything could be salvaged; that any earthly battle could be won or earthly solution found. It has all just been sleep-walking to the end of the night.
   ~Malcolm Muggeridge

H.G. Wells’ 1914 sci-fi book The World Set Free did a pretty good job predicting nuclear weapons:
They did not see it until the atomic bombs burst in their fumbling hands…before the last war began it was a matter of common knowledge that a man could carry about in a handbag an amount of latent energy sufficient to wreck half a city
Wells’ thesis was that the coming atomic bombs would be so deadly that we would inevitably create a utopian one-world government to prevent them from ever being used. Sorry, Wells. It was a nice thought.

But imagine that in the 1910s and 1920s, the period’s intellectual and financial elites had started thinking really seriously along Wellsian lines. Imagine what might happen when the first nation – let’s say America – got the Bomb. It would be totally unstoppable in battle and could take over the entire world and be arbitrarily dictatorial. Such a situation would be the end of human freedom and progress.

So in 1920 they all pool their resources to create their own version of the Manhattan Project. Over the next decade their efforts bear fruit, and they learn a lot about nuclear fission. In particular, they learn that uranium is a necessary resource, and that the world’s uranium sources are few enough that a single nation or coalition of nations could obtain a monopoly upon them. The specter of atomic despotism is more worrying than ever.

They get their physicists working overtime, and they discover a variety of nuke that requires no uranium at all. In fact, once you understand the principles you can build one out of parts from a Model T engine. The only downside to this new kind of nuke is that if you don’t build it exactly right, its usual failure mode is to detonate on the workbench in an uncontrolled hyper-reaction that blows the entire hemisphere to smithereens. But it definitely doesn’t require any kind of easily controlled resource.

And so the intellectual and financial elites declare victory – no one country can monopolize atomic weapons now – and send step-by-step guides to building a Model T nuke to every household in the world. Within a week, both hemispheres are blown to very predictable smithereens.

II.

Some of the top names in Silicon Valley have just announced a new organization, OpenAI, dedicated to “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole…as broadly and evenly distributed as possible.” Co-chairs Elon Musk and Sam Altman talk to Steven Levy:
Levy: How did this come about? […] 
Musk: Philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought—do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good. […] 
Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity. 
Levy: Couldn’t your stuff in OpenAI surpass human intelligence? 
Altman: I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone. 
Levy: If I’m Dr. Evil and I use it, won’t you be empowering me? 
Musk: I think that’s an excellent question and it’s something that we debated quite a bit. 
Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
Both sides here keep talking about who is going to “use” the superhuman intelligence a billion times more powerful than humanity, as if it were a microwave or something. Far be it from me to claim to know more than Sam Altman about anything, but I propose that the correct answer to “what would you do if Dr. Evil used superintelligent AI” is “cry tears of joy and declare victory”, because anybody at all having a usable level of control over the first superintelligence is so much more than we have any right to expect that I’m prepared to accept the presence of a medical degree and ominous surname.

A more Bostromian view would forget about Dr. Evil, and model AI progress as a race between Dr. Good and Dr. Amoral. Dr. Good is anyone who understands that improperly-designed AI could get out of control and destroy the human race – and who is willing to test and fine-tune his AI however long it takes to be truly confident in its safety. Dr. Amoral is anybody who doesn’t worry about that and who just wants to go forward as quickly as possible in order to be the first one with a finished project. If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future.

Dr. Amoral has a clear advantage in this race: building an AI without worrying about its behavior beforehand is faster and easier than building an AI and spending years testing it and making sure its behavior is stable and beneficial. He will win any fair fight. The hope has always been that the fight won’t be fair, because all the smartest AI researchers will realize the stakes and join Dr. Good’s team.

Open-source AI crushes that hope. Suppose Dr. Good and her team discover all the basic principles of AI but wisely hold off on actually instantiating a superintelligence until they can do the necessary testing and safety work. But suppose they also release what they’ve got on the Internet. Dr. Amoral downloads the plans, sticks them in his supercomputer, flips the switch, and then – as Dr. Good himself put it back in 1963 – “the human race has become redundant.”

The decision to make AI findings open source is a tradeoff between risks and benefits. The risk is letting the most careless person in the world determine the speed of AI research – because everyone will always have the option to exploit the full power of existing AI designs, and the most careless person in the world will always be the first one to take it. The benefit is that in a world where intelligence progresses very slowly and AIs are easily controlled, nobody will be able to use their sole possession of the only existing AI to garner too much power.

Unfortunately, I think we live in a different world – one where AIs progress from infrahuman to superhuman intelligence very quickly, very dangerously, and in a way very difficult to control unless you’ve prepared beforehand.

by Scott Alexander, Slate Star Codex |  Read more:
Image: Ex Machina

Friday, December 25, 2015

Lynch Mob

[ed. See also: At Harvard, Feelings Trump Knowledge.]

A Depression-era Lebanon Valley College leader with the last name Lynch has found himself thrust into the middle of a roiling 21st-century debate on campus civil rights.

Students at the private college in Annville have demanded administrators remove or modify Dr. Clyde A. Lynch's last name, as it appears on a campus hall, due to the associated racial connotations.

The demand was made at a forum on campus equality issues held Friday, capping a week of demonstrations calling for changes at the predominantly white institution. (...)

In the days that followed, commenters on pennlive.com leapt to defend Lynch, who served as the college's president from 1932 to 1950, when he died in office, saying he's been unfairly dragged into the fray by this modern-day movement.

A commenter going by the screen name "10xchamps," who identified himself as a recent graduate of the college, said "Anyone with half a brain would know that the name has nothing to do with racial connotations. It's the last name of a very generous donor who probably helped fund many of these students."

According to its website, Lynch led the college through the Great Depression and World War II, helping to raise $550,000 for a new physical education building which was named for him following his death. (...)

In response, student activists who made the demand said they'd be willing to settle for adding his first name and middle initial to the building instead of removing it altogether. At Friday's forum they acknowledged no known links between Dr. Clyde A. Lynch and the practice of "Lynching" but said as is, the building and last name harken back to a period in American history when Blacks were widely and arbitrarily killed by public hangings and "Lynch Mobs."  (...)

“I will no longer watch NFL football when John Lynch announces. Or watch Jane Lynch on TV. Too upsetting,” a commenter named “gmaven” quipped.

by Colin Deppen, Penn Live |  Read more:
Image: via:

Applied Fashion

[ed. I don't know... is it just me, or is there a certain apprehension around party season this year?]

Have you ever worn something completely wrong for the occasion? I don’t mean social-death wrong: we don’t live in such judgmental, Victorian times. I mean embarrassing-wrong: so wrong you want to go home and change, so wrong you wish the floorboards would open and let you descend into darkness. It may seem a quaint question to ask when the Naked Rambler has appeared in Britain’s Court of Appeal wearing nothing at all. And certainly, getting it really wrong is increasingly rare at a time when presidents appear in open-necked shirts and Downing Street advisers wander about in socks. But it can still, on occasion, happen.

“Occasion” is the key word: there are invitations to certain events which, once accepted, mean you have to play your part just as much as if you were an actor on stage. That includes wearing, more or less, the right costume. Parties, like plays, need to create an atmosphere, to weave a touch of magic, in order to take flight. They are fragile, airy confections, like spun sugar or candy floss; they hold their shape if all the ingredients come together, but if not, they collapse into a gritty pile. That, more than the attempt to exclude socially, is why the dress code still exists.

Dress codes on invitations tend to give men clear instructions: “black tie”, “lounge suits”. Both are unambiguous. For women they’re just the broadest of clues. Hence the phone-a-friend call asking “What are you going to wear?”, a question which lays bare the need of social animals to fit in with their tribe. I’ve always thought it would make a feminist point to turn up in a black tie one day – just the tie, not the full tuxedo – but I’d never have the nerve. (And incidentally, the one time it doesn’t look chic for a woman to turn up in a well-cut tux is when the dress code actually is black tie: it reads as protest or parody, rather than stylishness and wit.) Other dress codes I’ve come across include “dress to party”, “summer chic” and “dress up”, as well as the familiar, oxymoronic “smart casual”. None of them is specific, not even for men. But when decoded they all mean the same: “Be comfortable. No need to go over the top. But please make an effort, because we have.”

by Rebecca Willis, More Intelligent Life |  Read more:
Image: via:

Thursday, December 24, 2015

Why America Is Moving Left

In the late ’60s and ’70s, amid left-wing militancy and racial strife, a liberal era ended. Today, amid left-wing militancy and racial strife, a liberal era is only just beginning.

Understanding why requires understanding why the Democratic Party—and more important, the country at large—is becoming more liberal.

The story of the Democratic Party’s journey leftward has two chapters. The first is about the presidency of George W. Bush. Before Bush, unapologetic liberalism was not the Democratic Party’s dominant creed. The party had a strong centrist wing, anchored in Congress by white southerners such as Tennessee Senator Al Gore, who had supported much of Ronald Reagan’s defense buildup, and Georgia Senator Sam Nunn, who had stymied Bill Clinton’s push for gays in the military. For intellectual guidance, centrist Democrats looked to the Democratic Leadership Council, which opposed raising the minimum wage; to The New Republic (a magazine I edited in the early 2000s), which attacked affirmative action and Roe v. Wade; and to the Washington Monthly, which proposed means-testing Social Security.

Centrist Democrats believed that Reagan, for all his faults, had gotten some big things right. The Soviet Union had been evil. Taxes had been too high. Excessive regulation had squelched economic growth. The courts had been too permissive of crime. Until Democrats acknowledged these things, the centrists believed, they would neither win the presidency nor deserve to. In the late 1980s and the 1990s, an influential community of Democratic-aligned politicians, strategists, journalists, and wonks believed that critiquing liberalism from the right was morally and politically necessary.

George W. Bush wiped this community out. Partly, he did so by rooting the GOP more firmly in the South—Reagan’s political base had been in the West—aiding the slow-motion extinction of white southern Democrats that had begun when the party embraced civil rights. But Bush also destroyed centrist Democrats intellectually, by making it impossible for them to credibly critique liberalism from the right.

In the late 1980s and the 1990s, centrist Democrats had argued that Reagan’s decisions to cut the top income-tax rate from 70 percent to 50 percent and to loosen government regulation had spurred economic growth. When Bush cut the top rate to 35 percent in 2001 and further weakened regulation, however, inequality and the deficit grew, but the economy barely did—and then the financial system crashed. In the late ’80s and the ’90s, centrist Democrats had also argued that Reagan’s decision to boost defense spending and aid the Afghan mujahideen had helped topple the Soviet empire. But in 2003, when Bush invaded Iraq, he sparked the greatest foreign-policy catastrophe since Vietnam.

If the lesson of the Reagan era had been that Democrats should give a Republican president his due, the lesson of the Bush era was that doing so brought disaster. In the Senate, Bush’s 2001 tax cut passed with 12 Democratic votes; the Iraq War was authorized with 29. As the calamitous consequences of these votes became clear, the revolt against them destroyed the Democratic Party’s centrist wing. “What I want to know,” declared an obscure Vermont governor named Howard Dean in February 2003, “is why in the world the Democratic Party leadership is supporting the president’s unilateral attack on Iraq. What I want to know is, why are Democratic Party leaders supporting tax cuts?” By year’s end, Dean—running for president against a host of Washington Democrats who had supported the war—was the clear front-runner for his party’s nomination.

With the Dean campaign came an intellectual revolution inside the Democratic Party. His insurgency helped propel Daily Kos, a group blog dedicated to stiffening the liberal spine. It energized the progressive activist group MoveOn. It also coincided with Paul Krugman’s emergence as America’s most influential liberal columnist and Jon Stewart’s emergence as America’s most influential liberal television personality. In 2003, MSNBC hired Keith Olbermann and soon became a passionately liberal network. In 2004, The New Republic apologized for having supported the Iraq War. In 2005, The Huffington Post was born as a liberal alternative to the Drudge Report. In 2006, Joe Lieberman, the Democratic Party’s most outspoken hawk, lost his Democratic Senate primary and became an Independent. In 2011, the Democratic Leadership Council—having lost its influence years earlier—closed its doors.

By the time Barack Obama defeated Hillary Clinton for the Democratic presidential nomination in 2008, in part because of her support for the Iraq War, the mood inside the party had fundamentally changed. Whereas the party’s most respected thinkers had once urged Democrats to critique liberal orthodoxy, they now criticized Democrats for not defending that orthodoxy fiercely enough. The presidency of George W. Bush had made Democrats unapologetically liberal, and the presidency of Barack Obama was the most tangible result.

But that’s only half the story. Because if George W. Bush’s failures pushed the Democratic Party to the left, Barack Obama’s have pushed it even further. If Bush was responsible for the liberal infrastructure that helped elect Obama, Obama has now inadvertently contributed to the creation of two movements—Occupy and Black Lives Matter—dedicated to the proposition that even the liberalism he espouses is not left-wing enough.

by Peter Beinart, The Atlantic |  Read more:
Image: uncredited

Binge Reading Disorder

In 2008, Nicholas Carr wrote an article in the Atlantic called “Is Google Making Us Stupid?”—one that became famous enough to merit its own Wikipedia page—in which he argues that the abundance of information that the internet provides is diminishing our ability to actually comprehend what we read. Every article written about the article that I found mentioned this particular quote: “My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”

Perhaps the reason Carr had to discard his flippers is because the sea just got too big and too populated for him to actually see anything. When you encounter so many sentences a day, even if they are well constructed, intelligent, and seemingly memorable, how do you actually remember one intelligent thought when a thousand others are clamoring for your attention?

A UC San Diego report published in 2009 suggests the average American’s eyes cross 100,500 words a day—text messages, emails, social media, subtitles, advertisements—and that was in 2008. Data collected by the marketing company Likehack tells us that the average social media user “reads”—or perhaps just clicks on—285 pieces of content daily, an estimated 54,000 words. If it is true, then we are reading a novel slightly longer than The Great Gatsby every day.
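
[ed. The back-of-envelope math checks out. A quick sketch, using the article's own figures plus an assumed word count of roughly 47,000 for The Great Gatsby (the piece doesn't give the novel's length):]

# Rough check of the claim above, using the article's figures plus an
# assumed length for The Great Gatsby (~47,000 words is commonly cited).
pieces_per_day = 285
words_per_day = 54_000
gatsby_words = 47_000

print(f"average words per piece: {words_per_day / pieces_per_day:.0f}")  # ~189
print(f"daily reading vs. Gatsby: {words_per_day / gatsby_words:.2f}x")  # ~1.15x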

Of course, the word “read” is rather diluted in this instance. You can peruse or you can skim, and it’s still reading. I spoke with writer and avid reader John Sherman about his online reading habits. “Sometimes, when I say I read an article,” said Sherman, “what I actually mean is I read a tweet about that article.” He is hardly alone in this. Using information collected from the data analysis firm Chartbeat, Farhad Manjoo writes at Slate that there is a very poor correlation between how far a reader scrolls down in an article and when he or she shares the article on Twitter. In fact, people are more likely to tweet a link to an article if they have not read it fully. “There is so much content out there, capital c, and a lot of it overlaps,” Sherman said. “It takes less time to respond to an idea than a complete argument.”

It takes even less time to respond to an idea or argument with somebody else’s article. Have you read this? No, but that’s like what I read in this other piece. Perhaps nothing depicts this exchange better than a particular Portlandia skit, in which Fred Armisen and Carrie Brownstein rat-a-tat back and forth about what they’ve read, begin tearing the pages out of a magazine and stuffing them in their mouths, and when they run across the street to lunge for a Yellow Pages, they get hit by a car. “Hey, can’t you read?” yells the driver.

Reading is a nuanced word, but the most common kind of reading is likely reading as consumption: where we read, especially on the Internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it “sticks.”

by Nikkitha Bakshani, TMN | Read more:
Image: Rok Hodej

Flying Business Class as a Millennial

I flew business class for the first time in my life last week. It was an overnight, 10-hour flight for a work trip.

STOP, do not click the comment button. I am not a luxurious person! I don’t own designer anything. I hail from a family of proud “Dr. Thunder” drinkers.

The thing is, going into this trip I was already exhausted. All the items on those Internet “self-care” guides—showering, going outside—had fallen right off my increasingly lengthy to-do list. I knew sitting upright with my elbow touching another human for longer than a standard workday would only wear me out further, that I would “wake up” from this flight more wrecked than possibly ever, and that I would immediately have to jump into several days of marathon interviews.

Also, I love sleep. I love it so much, in fact, that it's a wonder I’m not better at it. I can’t sleep in cars, or in most hotels, or even in my own bed whenever work is going badly, or when it’s going suspiciously well. I can’t sleep after I eat a big meal, or after I accidentally say “you too!” back to someone who wished me a happy birthday. And I definitely, without a doubt, can’t sleep on planes.

The prospect of all of these forces converging made my brain feel like it was going to liquefy and dribble out through my nostrils.

So when the one-word question—“Upgrade?”—popped up on the check-in computer at Washington Reagan, I thought I would honor the spirit of the airport’s namesake by at least looking into the best thing unfettered capitalism has ever visited on mankind: business class.

Don’t worry, my momma raised me right: When the ticket-checker told me the cost of upgrading, I played hardball.

“I dunnnooooo,” I said, “that’s a liiiiiiiiiittle pricey.”

“Let me know what you decide,” he said, turning back to his computer. I excused myself to Google Wall Street Journal stories about what constitutes a good deal when upgrading. The price he was quoting me was hundreds of dollars less.

“Okay fine I’ll take it.”

One credit-card swipe later (so easy!) the man's attitude toward me brightened considerably. “Okay, as a first-class passenger, you now have access to the Admiral lounge.”

“What’s that?”

“Just go in that little black elevator to a special room. It’s one of your perks.”

I did so. Inside the wood-paneled room are: old people, guys who look like they could be start-up founders, and women who look like they could be actresses. ‘Tis not an ordinary path that leads to the Admiral lounge.

People were having extremely quiet in-person conversations and extremely expletive-filled phone calls. My fellow Admirals gave me the side-eye, but I flashed my business-class boarding pass at them, Pretty Woman-style. (Except of course it looked just like a regular boarding pass so the effect was diminished somewhat.)

I spent my time sending decisive-sounding emails and chugging a free glass of wine. When they announced my flight, I got to wait in the “priority” line, rather than the clearly inferior “main cabin” line immediately to its right.

Below is a brief log from inside the aircraft:

by Olga Khazan, The Atlantic |  Read more:
Image: Mary Altaffer / AP

Magda Indigo, Poinsettia
via:

Wednesday, December 23, 2015

Children of the Yuan Percent: Everyone Hates China’s Rich Kids

[ed. They'll get ruthless soon enough. It's the nature of wealth.]

Emerging from a nightclub near Workers’ Stadium in Beijing at 1:30 a.m. on a Saturday in June, Mikael Hveem ordered an Uber. He selected the cheapest car option and was surprised when the vehicle that rolled up was a dark blue Maserati. The driver, a young, baby-faced Chinese man, introduced himself as Jason. Hveem asked him why he was driving an Uber—he obviously didn’t need the cash. Jason said he did it to meet people, especially girls. Driving around late at night in Beijing’s nightclub district, he figured he’d find the kind of woman who would be charmed by a clean-cut 22-year-old in a sports car.

When I heard this story from a friend who had also been in the car, I asked for the driver’s contact info. I introduced myself to Jason over WeChat, China’s popular mobile app, and asked for an interview. He replied immediately with a screen shot that included photos of women in various states of undress. “Best hookers in bj :),” he added. I explained there had been a misunderstanding, and we arranged to have coffee.

When we met at a cafe in Beijing’s business district, it was clear that Jason, whose surname is Zhang, was different from other young Chinese. He had a job, at a media company that produced reality TV shows, but didn’t seem especially busy. He’d studied in the U.S., but at a golf academy in Florida, and he’d dropped out after two years. His father was the head of a major HR company, and his mother was a government official. He wore a $5,500 IWC watch because, he said, he’d lost his expensive one. I asked him how much money he had. “I don’t know,” he said. “More than I can spend.” So this was it: I had found, in the wild, one of the elusive breed known in China as the fuerdai, or “second-generation rich.” (...)

It’s no surprise that most fuerdai, after summering in Bali and wintering in the Alps, reading philosophy at Oxford and getting MBAs from Stanford, are reluctant to take over the family toothpaste cap factory. Ping Fan, 36, who serves as executive deputy director of Relay, moved to Shanghai to start his own investment firm rather than work at his father’s real estate company in Liaoning province. He picked Shanghai, he said, “because it was far from my family.” After graduating from Columbia University, Even Jiang, 28, briefly considered joining her mother’s diamond import business, but they disagreed about the direction of the company. Instead, she went to work at Merrill Lynch, then returned to Shanghai to start a concierge service, inspired by the American Express service she used when living in Manhattan. Liu Jiawen, 32, whose parents own a successful clothing company in Hunan province, tried to start her own clothing line after graduating. “I wanted to show I could do it on my own,” she said. The company failed.

Along with riches, fuerdai often inherit a surplus of emotional trauma. The first generation of Chinese entrepreneurs came of age during a time that rewarded callousness. “They were the generation of the Cultural Revolution,” said Wang. “During that time, there was no humanity.” His grandfather, the principal of a middle school in Guizhou province, was humiliated by Red Guards. “They were raised cruelly—there was no mercy. It was survival of the fittest.” Many fuerdai have their parents’ same coldness, Wang said: “They’re really hard to be friends with.”

Zhang, the Uber driver, was sent to boarding school starting in kindergarten, even though his parents lived only a short distance from the school. Perhaps to compensate for their inattention, they gave him everything he wanted, including hundreds of toy cars. Last Christmas he bought himself the Maserati. “It’s like their childhood has not ended,” Wang said of his fellow rich kids. “Their childhood was not fully satisfied, so they always want to prolong the process of being children.” Thanks to China’s one-child policy, most fuerdai grew up without siblings. That’s why so many travel in packs on Saturday nights, Wang said. “They want to be taken care of. They want to be loved.”

For Zhang, partying is a way of staving off boredom. He used to go out clubbing five nights a week. “If I didn’t go, I couldn’t sleep,” he said. He doesn’t lack for companionship, he added. Two or three times a week, he’ll hire a high-end sex worker—a “booty call,” in his words—for $1,000 or more. Zhang prefers paying for sex to flirting with a girl under the pretense that he might date her. “This way is more direct,” he said. “I think this is a way of respecting women.” But some nights, sitting at home alone, he scrolls through the contacts on his phone only to reach the bottom without finding anyone he wants to call. When we first spoke, he said he had a girlfriend of three years who treated him well, but that he didn’t love her. “You’re the first person I’ve told that to,” he said.

Most fuerdai don’t talk about their problems so openly. “They have trust issues,” said Wayne Chen, 32, a second-generation investor from Shanghai. “They need a place to talk. They need a group.” Relay offers a setting in which they can speak honestly, without having to pretend. “It’s similar to a rehab center,” he said.

by Christopher Beam, Bloomberg | Read more:
Image: Ka Xiaoxi

Superman of Havana

The mayor’s son drew on his cigarette, thought back sixty years, paused, and made a chopping motion on his lower thigh—fifteen inches, give or take, from his groin to just above his knee. “The women said, ‘He has a machete.’”

The mayor’s son is in his seventies now, but he was a teenager back then, during the years of Havana’s original sin. He thought back to his father as a young man, a lotto numbers runner who rose to the mayoralty of the gritty Barrio de Los Sitios, in Centro Habana. His dad loved mingling with the stars that flocked to the capital, and he sometimes took his boy to meet them: Brando, Nat King Cole, and that old borrachón Hemingway. The mayor’s son once got blind drunk with Benny Moré, the famous Cuban crooner who had a regular gig at the Guadalajara.

But more revered than all the rest was the man of many names. El Toro. La Reina. The Man With the Sleepy Eyes. Outside Cuba, from Miami to New York to Hollywood, he was known simply as Superman. The mayor’s son never met the legendary performer, but everybody knew about him. The local boys talked about his gift. They gossiped about the women, the sex. “Like when you’re coming of age, reading your dad’s Playboys. That’s what the kids talked about,” he said. “The idea that this man was around in the neighborhood, it was mind-boggling in a way.”

Superman was the main attraction at the notorious Teatro Shanghai, in Barrio Chino—Chinatown. According to local lore, the Shanghai featured live sex shows. “If you’re a decent guy from Omaha, showing his best girl the sights of Havana, and you make the mistake of entering the Shanghai, you’ll curse Garcia and will want to wring his neck for corrupting the morals of your sweet baby,” Suppressed, a tabloid magazine, wrote in its 1957 review of the club.

After the revolution, the Shanghai shuttered. Many of the performers fled the country. Superman disappeared, like a ghost. No one knew his real name. There were no known photos of him. A man who was once famous well beyond Cuba’s shores—who was later fictionalized in The Godfather Part II and Graham Greene’s Our Man in Havana—was largely forgotten, a footnote in a sordid history.

In the difficult years that followed, people didn’t talk about those times, as if they never happened at all. “You didn’t want to make problems with the government,” the mayor’s son said. “People were afraid. People didn’t want to look back. Afterward, it was an entirely new story. It was like everything didn’t exist before. It was like Year Zero.”

And into that void, the story of Superman disappeared.

by Mitch Moxley, Roads and Kingdoms |  Read more:
Image: Michael Magers

Sting feat. Robert Downey Jr.