Tuesday, November 18, 2014


Edward Steichen, The Flatiron, 1904
via:

Can Climate Change Cure Capitalism?

Every fall, an international team of scientists announces how much carbon dioxide humanity has dumped into the atmosphere the previous year. This fall, the news wasn’t good. It almost never is. The only time the group reported a drop in emissions was 2009, when the global economy seemed on the verge of collapse. The following year, emissions jumped again, by almost 6 percent.

According to the team’s latest report, in 2013 global emissions rose by 2.3 percent. Contributing to this increase were countries like the United States, which has some of the world’s highest per capita emissions, and also countries like India, which has some of the lowest. “There is no more time,” one of the scientists who worked on the analysis, Glen P. Peters of the Center for International Climate and Environmental Research in Oslo, told The New York Times. “It needs to be all hands on deck now.”

A few days after the figures were released, world leaders met in New York to discuss how to deal with the results of this enormous carbon dump. Ban Ki-moon, the secretary-general of the United Nations, had convened the summit to “catalyze climate action” and had asked the leaders to “bring bold announcements.” Once again, the news wasn’t good. It almost never is.

“There is a huge mismatch between the magnitude of the challenge and the response we heard here today,” Graça Machel, Nelson Mandela’s widow, told the summit in the final speech of the gathering. “The scale is much more than we have achieved.” This mismatch, which grows ever more disproportionate year after year, summit after summit, raises questions both about our future and about our character. What explains our collective failure on climate change? Why is it that instead of dealing with the problem, all we seem to do is make it worse?

These questions lie at the center of Naomi Klein’s ambitious new polemic, This Changes Everything: Capitalism vs. the Climate. “What is wrong with us?” Klein asks near the start of the book. Her answer upends the narrative that the country’s largest environmental groups have been telling.

According to these groups, climate change is a problem that can be tackled without major disruption to the status quo. All that’s needed are some smart policy changes. These will create new job opportunities; the economy will continue to grow; and Americans will be, both ecologically and financially, better off. Standing in the way of progress, so this account continues, is a vociferous minority of Tea Party–backed, Koch brothers–financed climate change deniers. Former president Jimmy Carter recently summed up this line of thinking when he told an audience in Aspen: “I would say the biggest handicap we have right now is some nutcases in our country who don’t believe in global warming.”

Klein doesn’t just disagree with Carter; she sees this line of thinking as a big part of the problem. Climate change can’t be solved within the confines of the status quo, because it’s a product of the status quo. “Our economic system and our planetary system are now at war,” she writes. The only hope of avoiding catastrophic warming lies in radical economic and political change. And this—again, according to Klein—is the good news. Properly understood, the buildup of CO2 in the atmosphere represents an enormous opportunity—one that, well, changes everything. “The massive global investments required to respond to the climate change threat” could, she writes,
deliver the equitable redistribution of agricultural lands that was supposed to follow independence from colonial rule and dictatorship; it could bring the jobs and homes that Martin Luther King dreamed of; it could bring jobs and clean water to Native communities; it could at last turn on the lights and running water in every South African township…. Climate change is our chance to right those festering wrongs at last—the unfinished business of liberation.
Klein begins by presenting the grim math of climate change. The world is, at this point, supposedly committed to holding warming to no more than 2 degrees Celsius (3.6 degrees Fahrenheit), a goal enshrined in a document known as the Copenhagen Accord, which President Barack Obama helped negotiate in 2009. This goal, as Klein points out, “has always been a highly political choice,” chosen more because it is—in theory at least—still attainable than because it actually represents a “safe” level of climate change. (“We feel compelled to note,” a group of prominent climate scientists has observed, “that even a ‘moderate’ warming of 2°C stands a strong chance of provoking drought and storm responses that could challenge civilized society.”)

How much the planet warms, on average, will be determined by the total amount of CO2 added to the atmosphere. To have a reasonable shot at limiting warming to two degrees, the general consensus among scientists is that aggregate emissions since industrialization began in the mid-eighteenth century must be held to a trillion metric tons. Almost 600 billion of those tons have already been emitted, meaning that humanity has already blown through more than half of its “carbon budget.” If current trends continue, it will burn through the rest in the next twenty-five years. Thus, what is essential to preserving the possibility of 2 degrees is reversing these trends, and doing so immediately.
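
To make that budget arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The starting annual output and the growth rate are illustrative assumptions (roughly 10 billion tons of carbon a year, growing at the 2.3 percent reported for 2013), not figures taken from the review:

# Carbon-budget arithmetic: how long until the remaining budget is spent?
# Illustrative assumptions, not figures from the review.
TOTAL_BUDGET = 1_000e9     # cumulative cap since industrialization, tons of carbon
ALREADY_EMITTED = 600e9    # roughly 600 billion tons emitted so far
annual_emissions = 10e9    # assumed current output, tons of carbon per year
GROWTH_RATE = 0.023        # assumed annual growth (the 2.3% reported for 2013)

remaining = TOTAL_BUDGET - ALREADY_EMITTED
spent, years = 0.0, 0
while spent < remaining:
    spent += annual_emissions
    annual_emissions *= 1 + GROWTH_RATE
    years += 1

print(f"Remaining budget: {remaining / 1e9:.0f} billion tons of carbon")
print(f"Exhausted in roughly {years} years if emissions keep growing")

Under these assumptions the loop lands in the same range as the review’s “next twenty-five years”; hold emissions flat instead and the answer stretches to about forty years, which is why the trend itself, not just its current level, is the target.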

A simple way to start cutting global emissions would be for all nations to reduce their CO2 output by the same proportion—say, by half. The obvious downside to this strategy is that it would, in effect, reward those countries that have contributed the most to the problem, while punishing those that have contributed the least. (One reason—perhaps the reason—the West is wealthier than the rest of the world is that it figured out much earlier how to exploit fossil fuels.) A more equitable approach would be to ask historically high emitters—and here we are talking about the nations of Europe and especially the US—to cut their emissions more deeply. And indeed, it’s pretty much taken as a given in climate policy circles that if there’s to be any hope at all of hewing to 2 degrees, the EU and the US will have to cut their emissions drastically—by 80 percent or more over the coming decades.

This is a terrifying predicament to find ourselves in. Even warming of 2 degrees may result in “drought and storm responses that could challenge civilized society.” Meanwhile, avoiding still-greater warming (and greater dangers) will require precisely those who’ve enjoyed the richest benefits of burning fossil fuels suddenly to forswear the practice. The situation justifies Klein’s sense of urgency and also her sense that there’s a disconnect between the soothing rhetoric of “Big Green” environmentalists and the enormity of the challenge. Can it really be that all that’s preventing us from making the policy changes that would avert disaster is a bunch of “nutcases”?

Klein traces our inaction to a much deeper, structural problem. Our economy has been built on the promise of endless growth. But endless growth is incompatible with radically reduced emissions; it’s only at times when the global economy has gone into free fall that emissions have declined by more than marginal amounts. What’s needed, Klein argues, is “managed degrowth.” Individuals are going to have to consume less, corporate profits are going to have to be reduced (in some cases down to zero), and governments are going to have to engage in the kind of long-term planning that’s anathema to free marketeers.

The fact that major environmental groups continue to argue that systemic change isn’t needed makes them, by Klein’s account, just as dishonest as the global warming deniers they vilify. Indeed, perhaps more so, since one of the deniers’ favorite arguments is that reducing emissions by the amount environmentalists say is necessary would spell the doom of capitalism. “Here’s my inconvenient truth,” she writes.
I think these hard-core ideologues understand the real significance of climate change better than most of the “warmists” in the political center, the ones who are still insisting that the response can be gradual and painless and that we don’t need to go to war with anybody.
Klein goes so far as to argue that the environmental movement has itself become little more than an arm (or perhaps one should say a column) of the fossil fuel industry. Her proof here is that several major environmental groups have received sizable donations from fossil fuel companies or their affiliated foundations, and some, like the Nature Conservancy, have executives (or former executives) of utility companies on their boards. “A painful reality behind the environmental movement’s catastrophic failure to effectively battle the economic interests behind our soaring emissions,” she writes, is that “large parts of the movement aren’t actually fighting those interests—they have merged with them.”

by Elizabeth Kolbert, NY Review of Books |  Read more:
Image: James Ferguson

Hoda Afshar, Westoxicated #7 and #5
via:

Steve Albini on the Surprisingly Sturdy State of the Music Industry



Steve Albini is the producer (he prefers the term “recording engineer”) behind several thousand records. He is also a member of the band Shellac. In 1993, he published The Problem with Music, an essay expounding his belief that the major-label-dominated industry of the time was inefficient, exploited musicians and led to below-par music. On Saturday he gave the keynote address at Melbourne’s Face the Music conference, in which he celebrated the fact that the internet had both dismantled this system and addressed its inequalities:

I’m going to first explain a few things about myself. I’m 52 years old, I have been in bands continuously, and active in the music scene in one way or another since about 1978. At the moment I’m in a band, I also work as a recording engineer and I own a recording studio in Chicago. In the past I have also been a fanzine writer, radio DJ, club DJ, concert promoter and I ran a small record label. I was not terribly successful at any of those things, but I have done them, so they qualify as part of my CV.

I work every day with music and with bands and I have for more than 30 years. I’ve made a couple thousand records for independent bands and rock stars, for big labels and small ones. I made a record two days ago and I’ll be making one on Monday when I get off the plane. So I believe this puts me in a pretty good position to evaluate the state of the music scene today, as it relates to how it used to be and how it has been.

We’re all here to talk about the state of the music scene and the health of the music community. I’ll start by saying that I’m both satisfied and optimistic about the state of the music scene. And I welcome the social and technological changes that have influenced it. I hope my remarks today will start a conversation and through that conversation we can invoke an appreciation of how resilient the music community is, how supportive it can be and how welcoming it should be.

I hear from some of my colleagues that these are rough times: that the internet has cut the legs off the music scene and that pretty soon nobody will be making music anymore because there’s no money in it. Virtually every place where music is written about, there is some version of this troubling perspective. People who used to make a nice income from royalties, they’ve seen the royalties dry up. And people who used to make a living selling records are having trouble selling downloads as a substitute for records, and they no longer make records.

So there is a tacit assumption that this money, lost money, needs to be replaced and a lot of energy has been spent arguing from where that money will come. Bitchiness about this abounds, with everybody insisting that somebody else should be paying him, but that he shouldn’t have to pay for anybody else. I would like to see an end to this dissatisfaction. (...)

There’s a lot of shade thrown by people in the music industry about how terrible the free sharing of music is, how it’s the equivalent of theft, etc. That’s all bullshit and we’ll deal with that in a minute. But for a minute I want you to look at the experience of music from a fan’s perspective, post-internet. Music that was hard to find was now easy to find. Music to suit my specific tastes, as fucked up as they might be, was now accessible by a few clicks or maybe posting a query on a message board. In response I had more access to music than I had ever imagined. Curated by other enthusiasts, keen to turn me on to the good stuff; people, like me, who want other people to hear the best music ever.

This audience-driven music distribution has other benefits. Long-forgotten music has been given a second life. And bands whose music was ahead of its time have been allowed to reach a niche audience that the old mass distribution failed to find for them, as one enthusiast turns on the next and this forgotten music finally gets its due. There’s a terrific documentary about one such case, the Detroit band Death, whose sole album was released in a perfunctory edition in, I believe, 1975 and disappeared until a copy of it was digitised and made public on the internet. Gradually the band found an audience, their music got lovingly reissued, and the band has been resurrected, complete with tours playing to packed houses. And the band are now being allowed the career that the old star system had denied them. There are hundreds of such stories and there are speciality labels that do nothing but reissue lost classics like that once they surface.

Now look at the conditions from a band’s perspective. In contrast to back in the day, recording equipment and technology have become simpler and readily available. Computers now come pre-loaded with enough software to make a decent demo recording, and guitar stores sell inexpensive microphones and other equipment that was previously available only at a premium from arcane speciality sources. Essentially every band now has the opportunity to make recordings.

And they can do things with those recordings. They can post them online in any number of places: Bandcamp, YouTube, SoundCloud, their own websites. They can link to them on message boards, Reddit, Instagram, Twitter and even in the comment streams of other music. “LOL,” “this sucks,” “much better,” “death to false metal,” “LOL”. Instead of spending a fortune on international phone calls trying to find someone in each territory to listen to your music, every band on the planet now has free, instant access to the world at its fingertips.

I cannot overstate how important a development that is. Previously, the top-down paradigm allowed local industry to dictate what music was available in isolated or remote markets, markets isolated by location or language. It was inconceivable that a smaller or independent band could have market penetration into, say, Greece or Turkey, Japan or China, South America, Africa or the Balkans. Who would you ask to handle your music? How would you find him? And how would you justify the business and currency complications required to send four or five copies of a record there?

Now those places are as well-served as New York and London. Fans can find the music they like and develop direct relationships with the bands. It is absolutely possible – I’m sure it happens every day – that a kid in one of these far-flung places can find a new favourite band, send that band a message, and the singer of that band will read it and personally reply to it from his cell phone half a world away. How much better is that? I’ll tell you, it’s infinitely better than having a relationship to a band limited to reading the back of the record jacket. If such a thing were possible when I was a teenager I’m certain I would have become a right nuisance to the Ramones. (...)

In short, the internet has made it much easier to conduct the day-to-day business of being in a band and has increased its efficiency. Everything from scheduling rehearsals using online calendars, to booking tours by email, to selling merchandise and records from online stores, down to raising the funds to make a record has a simplicity that bands of the pre-internet era would salivate over. The old system was built by the industry to serve the players inside the industry. The new system where music is shared informally and the bands have a direct relationship to the fans was built by the bands and the fans in the manner of the old underground. It skips all the intermediary steps. (...)

Imagine a great hall of fetishes where whatever you felt like fucking or being fucked by, however often your tastes might change, no matter what hardware or harnesses were required, you could open the gates and have at it on a comfy mattress at any time of day. That’s what the internet has become for music fans. Plus bleacher seats for a cheering section.

by Steve Albini, The Guardian |  Read more:
Image: Jayden Ostwald

Monday, November 17, 2014


Tomas van Houtryve
via: Art in a Time of Surveillance

Extreme Wealth Is Bad for Everyone—Especially the Wealthy

It’s an obvious point: people’s behavior can be changed. But it’s largely absent from the growing and increasingly heated discussion about the widening gap between the very rich and everyone else. The grotesque inequality between the haves and the have-nots is seldom framed as a problem that the haves might privately help to resolve. Instead, it is a problem the have-nots must persuade their elected officials to do something about, presumably against the wishes of the haves. The latest contribution to the discussion comes from Darrell West, a scholar at the Brookings Institution. “Wealth—its uses and abuses—is a subject that has intrigued me since my youth in the rural Midwest,” West writes in the introduction to his study of billionaires. From his seat in Washington, D.C., he has grown concerned about the effects on democracy of a handful of citizens controlling more and more wealth.

Drawing on the work of Thomas Piketty and Emmanuel Saez, West notes that the concentration of wealth in the top 1 percent of American citizens has returned to levels not seen in a century. One percent of the population controls a third of the country’s wealth, and the problem is only getting worse: from 1979 to 2009 after-tax income for the top 1 percent rose by 155 percent while not changing all that much for everyone else. By another measure of inequality, which compares the income controlled by the top 10 percent with that of the bottom 40 percent, the United States is judged to come forty-fourth out of the eighty-six nations in the race, and last among developed nations. But the object of West’s interest is not the top 10 percent or even the top 1 percent, but the handful of the richest people on the planet—the 1,645 (according to Forbes) or 1,682 (the Knight Frank group) or 1,867 (China’s Start Property Group) or 2,170 (UBS Financial Services) people worth a billion dollars or more. (The inability to identify even the number of billionaires hints at a bigger problem: how little even those who claim an expertise about this class of people actually know about them.)

Billionaires seems to have been sparked by West’s belief that rich people, newly empowered to use their money in politics, are now more likely than usual to determine political outcomes. This may be true, but so far the evidence—and evidence here is really just a handful of anecdotes—suggests that rich people, when they seek to influence political outcomes, often are wasting their money. Michael Bloomberg was able to use his billions to make himself mayor of New York City (which seems to have worked out pretty well for New York City), but Meg Whitman piled $144 million of her own money in the streets of California and set it on fire in her failed attempt to become governor. Mitt Romney might actually have been a stronger candidate if he had less money, or at least had been less completely defined by his money. For all the angst caused by the Koch brothers and Sheldon Adelson and their efforts to unseat Barack Obama, they only demonstrated how much money could be spent on a political campaign while exerting no meaningful effect upon it.

As West points out, many rich people are more interested in having their way with specific issues than with candidates, but even here their record is spotty. Perhaps they are having their way in arguments about raising the federal estate tax; but the states with the most billionaires in them, California and New York, have among the highest tax rates on income and capital gains. If these billionaires are seeking, as a class, to minimize the sums they return to society, they are not doing a very good job of it. But of course they aren’t seeking anything, as a class: it’s not even clear they can agree on what their collective interests are. The second-richest American billionaire, Warren Buffett, has been quite vocal about his desire for higher tax rates on the rich. The single biggest donor to political campaigns just now is Tom Steyer, a Democrat with a passion for climate change. And for every rich person who sets off on a jag to carve California into seven states, or to defeat Barack Obama, there are many more who have no interest in politics at all except perhaps, in a general way, to keep it from touching their lives. Rich people, in my experience, don’t want to change the world. The world as it is suits them nicely. (...)

And it raises a bigger question: just how influential are the very rich? They are much in the news; often they own the news. But what are their deeper effects, as a class, on the rest of us? There was a time in America when a few rich people could elect a president (see McKinley), but they haven’t been very good at that lately. When was the last time a billionaire wrote a seminal book or achieved some dramatic scientific breakthrough or created some lasting work of art? Acts of the imagination are responses to needs and desires. The Knight Frank real estate agency’s report on billionaires describes them, nauseatingly, as people “driven by desire.” But desire, at least the profitable kind, is exactly what you lose when you have more of everything than you could possibly need—especially when you are born with it. Ditto the willingness to suffer in the pursuit of excellence. The American upper middle class has spent a fortune teaching its children to play soccer: how many great soccer players come from the upper middle class? The more you think about the very rich, the more tempting it is to take the other side of this argument. True, people occasionally become very rich by changing the world as we know it, but in these cases money is the effect, not the cause. Mark Zuckerberg wasn’t rich when he created Facebook, and neither were Sergey Brin and Larry Page when they created Google.

It’s just not as easy as it seems to use money to change the world, even when what you are doing with the money is giving it away. West has a good chapter on billionaires’ activist philanthropy but, as any billionaire will tell you, this is as much a story of frustration as of success. (Zuckerberg has discovered this in the Newark public schools.) The big surprise about money, in this age of grotesque and growing economic inequality, may be its limits. At any rate, it’s not at all clear how the swelling heap of money controlled by the extremely rich is changing us.

What is clear about rich people and their money—and becoming ever clearer—is how it changes them.

by Michael Lewis, TNR | Read more:
Image: Mike Lee

The Copyright Monopoly Wars Are About to Repeat, But Much Worse

People sometimes ask me when I started questioning if the copyright monopoly laws were just, proper, or indeed sane. I respond truthfully that it was about 1985, when we were sharing music on cassette tapes and the copyright industry called us thieves, murderers, rapists, arsonists, and genocidals for manufacturing our own copies without their permission.

Politicians didn’t care about the issue, but handwaved away the copyright industry by giving them private taxation rights on cassette tapes, a taxation right that would later infest anything with digital storage capacity, ranging from games consoles to digital cameras.

In 1990, I bought my first modem, connecting to FidoNet, an amateur precursor to the Internet that had similar addressing and routing. We were basically doing what the Internet is used for today: chatting, discussing, sharing music and other files, buying and selling stuff, and yes, dating and flirting. Today, we do basically the same things in prettier colors, faster, and more realtime, on considerably smaller devices. But the social mechanisms are the same.

The politicians were absolutely clueless.

The first signal that something was seriously wrong in the heads of politicians was when they created a DMCA-like law in Sweden in 1990, one that made a server owner legally liable for forum posts made by somebody else on that server, if the server operator didn’t delete the forum post on notice. For the first time in modern history, a messenger had been made formally responsible for somebody else’s uttered opinion. People who were taking part in creating the Internet at the time went to Parliament to try to explain the technology and the social contract of responsibilities, and walked away utterly disappointed and desperate. The politicians were even more clueless than imagined.

It hasn’t gotten better since. Cory Doctorow’s observation in his brilliant speech about the coming war on general computing was right: Politicians are clueless about the Internet because they don’t care about the Internet. They care about energy, healthcare, defense, education, and taxes, because they only understand the problems that defined the structures of the two previous generations – the structures now in power have simply retained their original definition, and those are the structures that put today’s politicians in power. Those structures are incapable of adapting to irrelevance.

Enter bitcoin.

The unlicensed manufacturing of movie and music copies was and is such small potatoes that the politicians just didn’t and don’t have time for it, because energy healthcare defense. Creating draconian laws that threaten the Internet wasn’t an “I think this is a good idea” behavior. It has been a “copyright industry, get out of my face” behavior. The copyright industry understands this perfectly, of course, and throws tantrums about every five years to get more police-like powers, taxpayer money, and rent from the public coffers. Only when the population has been more in the face of politicians than the copyright industry – think SOPA, ACTA – have the politicians backpedaled, usually with a confused look on their faces, and then absentmindedly happened to do the right thing before going back to energy healthcare defense.

However, cryptocurrency like bitcoin – essentially the same social mechanisms, same social protocols, same distributed principles as BitTorrent’s sharing culture and knowledge outside of the copyright industry’s monopolies – is not something that passes unnoticed. Like BitTorrent showed the obsolescence of the copyright monopoly, bitcoin demonstrates the obsolescence of central banks and today’s entire financial sector. Like BitTorrent didn’t go head-to-head with the copyright monopoly but just circumvented it as irrelevant, bitcoin circumvents every single financial regulation as irrelevant. And like BitTorrent saw uptake in the millions, so does bitcoin.

Cryptocurrency is politically where culture-sharing was in about 1985.

Politicians didn’t care about the copyright monopoly. They didn’t. Don’t. No, they don’t, not in the slightest. That’s why the copyright industry has been given everything they point at. Now for today’s million dollar question: do you think politicians care about the authority of the central bank and the core controllability of funds, finances, and taxation?

YES. VERY MUCH.

This is going to get seriously ugly. But this time, we have a blueprint from the copyright monopoly wars. Cory Doctorow was right when he said this isn’t the war, this is just the first skirmish over control of society as a whole. The Internet generation is claiming that control, and the old industrial generation is pushing back. Hard.

by Rick Falkvinge, TorrentFreak |  Read more:
Image: uncredited

Why the Selfie Is Here to Stay

Selfies: such a cute niblet of a word, and yet I curse the day it was coined—it’s like a decal that won’t come unpeeled. Taking a picture of yourself with outstretched arm seems so innocent and innocuous, but what a pushy, wall-tiling tableau it has become—a plague of “duckfaces” and gang signs and James Franco (the Prince of Pose) staredowns. In my precarious faith in humankind’s evolution, I had conned myself into hoping, wishing, yearning that taking and sharing selfies would be a viral phase in the Facebook millennium, burning itself out like so many fads before, or at least receding into a manageable niche in the Internet arcade after reaching its saturation point. When Ellen DeGeneres snapped the all-star group selfie during the live broadcast of the 2014 Academy Awards, a say-cheese image that was re-tweeted more than two million times, it seemed as if that might be the peak of the selfie craze—what could top it? Once something becomes that commercialized and institutionalized, it’s usually over, but nothing is truly over now—the traditional cycles of out-with-the-old-in-with-the-new have been repealed, flattened into a continuous present. Nothing can undo the crabgrass profusion of the selfie, not even its capacity as an instrument of auto-ruination.

It has proved itself again and again to be a tool of the Devil in the wrong, dumb hands, as then-congressman Anthony Weiner learned when he shared a selfie of his groin district, driving a stake through a once promising, power-hungry political career. A serial bank robber in Michigan was apprehended after posting a Facebook selfie featuring the gun presumably used in the holdups. A woman in Illinois was arrested after she modeled for a selfie wearing the outfit she had just nicked from a boutique. A pair of meth heads were busted for “abandonment of a corpse” after they partook of a selfie with a pal who had allegedly overdosed on Dilaudid, then uploaded the incriminating evidence to Facebook. Tweakers have never been known for lucid behavior, but one expects more propriety from professional men and women in white coats, which is why it was a shock-wave scandale when Joan Rivers’s personal ear-nose-and-throat doctor, Gwen Korovin, was accused of taking a selfie while Rivers was conked out on anesthesia. Korovin emphatically denies taking a sneaky self-peeky, and had the procedure been smooth sailing this story would have fluttered about as a one-day wonder, a momentary sideshow. But Rivers didn’t survive, she went down for the count, and Korovin’s name, fairly or not, was dragged through the immeasurable mire of the Internet. (...)

Times Square selfies, even those involving a shish kebab device, are an improvement over the more prevalent custom of visitors’ asking passersby such as myself, “Would you mind taking a picture of us?,” and offering me their camera. Selfies at least spare the rest of us on our vital rounds. But it is difficult to find any upside to the indulgence of selfies in public places intended as sites of remembrance and contemplation. There is a minor epidemic of visitors taking grinning selfies at the 9/11 Memorial pools. And it isn’t just students on school trips for whom social media is the only context they have; it’s also adults who treat the 9/11 Memorial as if it were just another sightseeing spot, holding their camera aloft and taking a selfie, indifferent or oblivious to the names of the dead victims of the 1993 and 2001 attacks inscribed on the bronze panels against which some of them are leaning. I consider myself fortunate that I was able to visit the Vietnam Veterans Memorial, in Washington, D.C., before the advent of the selfie: the reflective walls etched with the names didn’t serve as a backdrop for a personal photo op. Today no spot is safe from selfie antics. Outrage exploded over a teenage girl posting a grinning selfie in front of Auschwitz, outrage that was compounded when she reacted to the ruckus by chirping in response, “I’m famous yall.”

There are those who analyze and rationalize the taking of selfies at former concentration camps or some stretch of hallowed ground as being a more complex and dialectical phenomenon than idle, bovine narcissism—as being an exercise in transactional mediation between personal identity and historical legacy, “placing” oneself within a storied iconography. Sounds like heavy hooey to me, if only because the taking of selfies seems to be more of a self-perpetuating process whose true purpose is the production of other selfies—self-documentation for its own sake, a form of primping that accumulates into a mosaic that may become fascinating in retrospect or as boring as home movies. Turning yourself into a Flat Stanley in front of a landmark doesn’t seem like much of a quest route into a deeper interiority, just as the museum-goers who take selfies in front of famous paintings and sculptures are unlikely to be deepening their aesthetic appreciation. Consider the dope who, intending to nab an action selfie, reportedly climbed onto the lap of a 19th-century sculpture in an Italian museum, a copy of a Greek original, only to smash the figure, snapping off one of its legs above the knee. As if weary-on-their-feet museum guards didn’t have enough to deal with.

by James Wolcott, Vanity Fair |  Read more:
Image: Darrow/Arte & Imagini SRL/Corbis

Sunday, November 16, 2014

Retire Already: The Forever Professors


The 1994 law ending mandatory retirement at age 70 for university professors substantially mitigated the problem of age discrimination within universities. But out of this law a vexing new problem has emerged: a graying—yea, whitening—professoriate. The law, which allows tenured faculty members to teach as long as they want—well past 70, or until they’re carried out of the classroom on a gurney—means professors are increasingly delaying retirement past age 70 or even choosing not to retire at all.

Like so much else in American life, deciding when to retire from academe has evolved into a strictly private and personal matter, without any guiding rules, ethical context, or sense of obligation to do what’s best—for one’s students, department, or institution. Only the vaguest questions—and sometimes not even those—are legally permitted. An administrator’s asking, "When do you think you might retire?" can bring on an EEOC complaint or a lawsuit. Substantive departmental or faculty discussions about retirement simply do not occur.

University professors may be more educated than the average American, but now that there’s no mandatory retirement age, their decisions about when to leave prove that they are as self-interested as any of their countrymen. When professors continue to teach past 70, they behave exactly the way the rest of us do when we decide to drive a car on a national holiday. Who among us stops to connect the dots between our decision to drive and a traffic jam, or that traffic jam and global warming?

Despite the boomer claim that 70 is the new 50, and the actuarial fact that those who live in industrialized countries and make it to the age of 65 have a life expectancy reaching well into the 80s, 70 remains what it has always been—old. By the one measure that should count for college faculty—how college students perceive their professors—it is definitively old. Keeping physically fit, wearing Levi’s, posting pictures on Instagram, or continually sneaking peeks at one’s iPhone don’t count for squat with students, who, after all, have grandparents who are 70, if not younger.

To invoke Horace, professors can drive out Nature with a pitchfork, but she’ll come right back in. Aging is Nature’s domain, and cannot be kneaded into a relativist cultural construct. It’s her means of leading us onto the off-ramp of life.

Professors approaching 70 who are still enamored with hanging out with students and colleagues, or even fretting about money, have an ethical obligation to step back and think seriously about quitting. If they do remain on the job, they should at least openly acknowledge they’re doing it mostly for themselves.

Of course, there are exceptions. Some professors, especially in the humanities, become more brilliant as they grow older—coming up with their best ideas and delivering sagacity to their students. And some research scientists haul in the big bucks even when they’re old. But those cases are much rarer than older professors vainly like to think. (...)

The average age for all tenured professors nationwide is now approaching 55 and creeping upward; the number of professors 65 and older more than doubled between 2000 and 2011. In spite of those numbers, according to a Fidelity Investments study conducted about a year ago, three-quarters of professors between 49 and 67 say they will either delay retirement past age 65 or—gasp!—never retire at all. They ignore, or are oblivious to, the larger implications for their students, their departments, and their colleges.

And they delude themselves about their reasons for hanging on. In the Fidelity survey, 80 percent of those responding said they wanted to continue as faculty members not because they needed the money but for "personal or professional" reasons. A Fidelity spokesman offered what seemed to me a naïve interpretation of that answer: "Higher-education employees, especially faculty, are deeply committed to their students, education, and the institutions they serve."

Maybe. But "commitment to higher education" covers some selfish pleasures.

by Laurie Fendrich, Chronicle of Higher Education | Read more:
Image: Scott Seymour

Leaving Shame on a Lower Floor


Among the many vertiginous renderings for the penthouse apartments at 432 Park Avenue, the nearly 1,400-foot-high Cuisenaire rod that topped off last month, is one of its master (or mistress) of the universe bathrooms, a glittering, reflective container of glass and marble. The image shows a huge egg-shaped tub planted before a 10-foot-square window, 90 or more stories up. All of Lower Manhattan is spread out like the view from someone’s private plane.

Talk about power washing.

The dizzying aerial baths at 432 Park, while certainly the highest in the city, are not the only exposed throne rooms in New York. All across Manhattan, in glassy towers soon to be built or nearing completion, see-through chambers will flaunt their owners, naked, toweled or robed, like so many museum vitrines — although the audience for all this exposure is probably avian, not human.

It seems the former touchstones of bathroom luxury (Edwardian England, say, or ancient Rome) have been replaced by the glass cube of the Apple store on Fifth Avenue. In fact, Richard Dubrow, marketing director at Macklowe Properties, which built 432 and that Apple store, described the penthouse “wet rooms” (or shower rooms) in just those terms.

“Everyone wants a window,” said Vickey Barron, a broker at Douglas Elliman and director of sales at Walker Tower, a conversion of the old Verizon building on West 18th Street. “But now it has to be a Window.” She made air quotes around the word. “Now what most people wanted in their living rooms, they want in their bathrooms. They’ll say, ‘What? No View?’ ” (...)

From the corner bathrooms at 215 Chrystie Street, Ian Schrager’s upcoming Lower East Side entry designed by Herzog & de Meuron and with interior architecture by the English minimalist John Pawson, you can see the Chrysler Building and the 59th Street Bridge, if you don’t pass out from vertigo. The 19-foot-long bathrooms of the full-floor apartments are placed at the building’s seamless glass corners. It was Mr. Pawson who designed the poured concrete tub that oversees that sheer 90-degree angle.

Just looking at the renderings, this reporter had to stifle the urge to duck.

“Ian’s approach is always, If there’s a view, there should be glass,” Mr. Pawson said. “It’s not about putting yourself on show, it’s about enjoying what’s outside. Any exhibitionism is an unfortunate by-product. I think what’s really nice is that at this level you’re creating a gathering space. You can congregate in the bathroom, you can even share the bath or bring a chair in.”

by Penelope Green, NY Times |  Read more:
Image: DBOX for CIM Group & Macklowe Properties

Saturday, November 15, 2014


Mark Smith
via:

Taming the Wild Tuna

Kushimoto, Japan—Tokihiko Okada was on his boat one recent morning when his cellphone rang with an urgent order from a Tokyo department store. Its gourmet food section was running low on sashimi. Could he rustle up an extra tuna right away?

Mr. Okada, a researcher at Osaka’s Kinki University, was only too happy to oblige—and he didn’t need a fishing pole or a net. Instead, he relayed the message to a diver who plunged into a round pen with an electric harpoon and stunned an 88-pound Pacific bluefin tuna, raised from birth in captivity. It was pulled out and slaughtered immediately on the boat.

Not long ago, full farming of tuna was considered impossible. Now the business is beginning to take off, as part of a broader revolution in aquaculture that is radically changing the world’s food supply.

“We get so many orders these days that we have been catching them before we can give them enough time to grow,” said Mr. Okada, a tanned 57-year-old who is both academic and entrepreneur. “One more year in the water, and this fish would have been much fatter,” as much as 130 pounds, he added.

With a decadeslong global consumption boom depleting natural fish populations of all kinds, demand is increasingly being met by farm-grown seafood. In 2012, farmed fish accounted for a record 42.2% of global output, compared with 13.4% in 1990 and 25.7% in 2000. A full 56% of global shrimp consumption now comes from farms, mostly in Southeast Asia and China. Oysters are started in hatcheries and then seeded in ocean beds. Atlantic salmon farming, which only started in earnest in the mid-1980s, now accounts for 99% of world-wide production—so much so that it has drawn criticism for polluting local water systems and spreading diseases to wild fish.

Until recently, the Pacific bluefin tuna defied this sort of domestication. The bluefin can weigh as much as 900 pounds and barrels through the seas at up to 30 miles an hour. Over a month, it may roam thousands of miles of the Pacific. The massive creature is also moody, easily disturbed by light, noise or subtle changes in the water temperature. It hurtles through the water in a straight line, making it prone to fatal collisions in captivity.

The Japanese treasure the fish’s rich red meat so much that they call it “hon-maguro” or “true tuna.” Others call it the Porsche of the sea. At an auction in Tokyo, a single bluefin once sold for $1.5 million, or $3,000 a pound.

All this has put the wild Pacific bluefin tuna in a perilous state. Stocks today are less than one-fifth of their peak in the early 1960s, around the time Japanese industrial freezer ships began prowling the oceans, according to an estimate by an international governmental committee monitoring tuna fishing in the Pacific. The wild population is now estimated by that committee at 44,848 tons, or roughly nine million fish, down nearly 50% in the past decade.

The decline has been exacerbated by earlier efforts to cultivate tuna. Fishermen often catch juvenile fish in the wild that are then raised to adulthood in pens. The practice cuts short the breeding cycle by removing much of the next generation from the seas.

Scientists at Kinki University decided to take a different approach. Kinki began studying aquaculture after World War II in an effort to ease food shortages. Under the motto “Till the Ocean,” researchers built expertise in breeding fish popular in the Japanese diet such as flounder and amberjack.

In 1969, long before the world started craving fresh slices of fatty tuna, Kinki embarked on a quest to tame the bluefin. It sought to complete the reproduction cycle, with Pacific bluefin tuna eggs, babies, juveniles and adults all in the farming system.

Two scientists from Kinki went out to sea with local fishermen, seeking to capture juvenile tuna for raising in captivity. “We researchers always wanted to raise bluefin because it’s big and fast. It’s so special,” said one of the scientists, Hidemi Kumai, now 79 years old. “We knew from the beginning it was going to be a huge challenge.”

It was more than that. The moment the researchers grabbed a few juvenile fish out of a net, the skin started to disintegrate, killing them. It took four years just to perfect delicate fast-releasing hooks for capturing juveniles and moving them into pens.

“Local fishermen used to say to us, ‘Professors, you are crazy. Bluefin can’t live in confinement,’ ” Mr. Kumai recalled.

In 2011, Kinki lost more than 300 grown fish out of its stock of 2,600 after an earthquake-triggered tsunami hit a coastline 400 miles away. The tsunami caused a quick shift in tide and clouded the water, causing the fish to panic and smash into nets. Last year, a typhoon decimated its stock. Again this summer, frequent typhoons kept the researchers on their toes as they waited for the breeding season to start. “Oftentimes, all we can do is pray,” said Mr. Okada as he threw a mound of mackerel into the pen using a spade.

It took nearly 10 years for fish caught in the wild to lay eggs at Kinki’s research pens. Then, in 1983, they stopped laying, and for 11 years, researchers couldn’t figure out the problem. The Kinki scientists now attribute the hiatus to intraday drops in water temperature, a lesson learned only after successful breeding at a separate facility in southern Japan.

In the summer of 1994, the fish finally produced eggs again. The researchers celebrated and put nearly 2,000 baby fish in an offshore pen. The next morning, most of them were dead with their neck bones broken. The cause was a mystery until a clue came weeks later. Some of the babies in the lab panicked when the lights came on after a temporary blackout and killed themselves.

Mr. Kumai and colleagues realized that sudden bright light from a car, fireworks or lightning caused the fish to panic and bump into each other or into the walls. The solution was to keep the lights on at all times.

For nearly five decades, Mr. Kumai has lived along a quiet inlet, steps from the university’s research pens. He calls the fish “my family.”

“These fish can’t protest with their mouths so they protest by dying,” he says. “We must listen to them carefully so we catch the problems before they resort to dying.”

by Yuka Hayashi, WSJ |  Read more:
Image: Jeremie Souteyrat for the WSJ

Robert Longo
via:

The Ice-Bucket Racket

Ever since the ice-bucket challenge swept the Internet this summer, raising more than $115 million for A.L.S. research, a legion of imitators has sprung up to try to cash in themselves. In the approaching holiday season, as fund-raising appeals swell, we can now smash a pie in our faces, snap selfies first thing in the morning or take a photo of ourselves grabbing our crotches, among other tasteful gestures, to express solidarity with various worthy causes. But the failure of these newer gimmicks to enjoy anywhere near the same popularity as the frigid original demonstrates the peculiar and finicky nature of our altruism — a psychological puzzle that both scientists and economists are trying to decipher. (...)

Most charitable efforts elicit our sympathy by showing us photographs of the afflicted and telling us tales of suffering. But just as people avert their eyes from beggars, most of us can shift our attention from stuff that depresses us. One great curiosity, and advantage, of the ice-bucket challenge was that it did very little to remind us of the disease that was its supposed inspiration.

Fund-raising professionals hoping to decode the magic of the challenge, however, will be dispirited to learn that this master game plan wasn’t exactly intentional. According to Josh Levin, a writer at Slate, the challenge appears to have instead emerged spontaneously from similar dares, like “polar plunges” into ice-cold lakes. At first, it simply consisted of using social media to dare others to dump a pail of ice water over themselves. Later, participants began donating $100 to any of a wide variety of charities. It became linked to A.L.S. only later, when a couple of pro golfers took the challenge and chose that as their good cause.

Sander van der Linden, a social psychologist at Princeton University who has researched attitude change, thinks several factors allowed the ice-bucket challenge to become a viral and fund-raising sensation. Its public nature forced people to either accept the task or suffer damage to their reputations. Other stimulants to action included peer pressure from friends, the “helper’s high” that results from aiding others and the fortuitous participation of celebrities like Bill Gates and Katy Perry. Particularly crucial was the 24-hour deadline that the challenge gave to either drench oneself or shell out. “When you make people set specific goals, they become more likely to change behavior,” van der Linden told me. “People like setting goals, and they like achieving goals.”

Two additional features were particularly clever, according to van der Linden. One was the ingenious way that the challenge fed our collective narcissism by allowing us to celebrate with selfies or videos of our drenched faces and bodies on Facebook and Twitter. An even deeper motivation may have been the precisely calibrated amount of self-sacrifice involved. “If you’re going to elicit money from people, it helps to have some way of doing it that is at least slightly painful, since that makes the whole experience about more than just giving away what may be a relatively trivial amount of money,” van der Linden said.

by Ian McGugan, NY Times |  Read more:
Image: Joon Mo Kang

Friday, November 14, 2014

The Anxiety of the Forever Renter

What no economist has measured is this: There’s something fundamentally demeaning about being a renter, about having to ask permission to change the showerhead, about having to mentally deduct future losses from deposit checks for each nail hammered into the wall to hang family photos. There’s something degrading about the annual rent increase that comes with this implied taunt to its captive audience: What are you going to do, move out?

I’m not worried about what it would mean for us to be a Nation of Renters, whether that would fray the social fabric or unravel homeownership’s side effects on civic participation or crime rates. Some people are worried about this. “FDR mentioned that 'a nation of homeowners is unconquerable,'” I heard the chief economist for the National Association of Realtors a few months ago tell a room full of policymakers suspicious of the home mortgage interest deduction. “We have to think,” he pleaded, “that maybe there is something more than numbers to a homeownership society” – as if we might devolve into some kind of chaos if enough of us didn’t care enough about our property to own it.

What I am worried about is the dill plant on my second-floor windowsill. I rotate it a little bit every day because it only gets sun from the western exposure. It has been dying since the day I brought it home. I want to put it in the ground, or at least outside. For several weeks over the summer, I tried furtively growing oregano in a small pot on the communal front stoop of our 20-unit red-brick apartment building. I carried cups of water out to it late at night when I thought no one was looking.

Eventually, it disappeared.

Earlier this year, my husband and I took a deep breath, purchased a power tool and did something permanent about our kitchen-storage problem: We drilled metal Ikea pot racks into the wall. Today the room is happily lined with saucepans. But every time I see the property manager coming or going from the building, I worry that she’ll ask to enter our unit, where she’ll spy what we’ve done to drywall that doesn’t belong to us.

More recently, my husband called our property manager to announce a long-awaited addition to our household that we thought would be welcome.

“I just got a job,” he told her, literally on the day that he had just gotten a job. “And my wife said when I get a job, I can have a dog. So I’m calling to tell you I’m getting a dog.”

As it turns out, we will not be getting a dog.

“You can have a cat,” she offered. (...)

Now we have each been at this – renting – for about a decade, and we’re reaching that point, married, starting our 30s, when it starts to feel like time to live in a more dignified way. We want to grow herbs outdoors and shop in the heavy-duty hardware store aisles and change the color of our living room. We want to make irreversible choices about wall fixtures and rash decisions at the animal shelter.

I've been thinking about all of these things a lot lately, while reading about the convincing reasons why homeownership no longer makes as much sense as it used to. Workers are no longer tied to factories – and the bedroom communities that surround them – because no one works in factories anymore. Now people telecommute. They get transferred to Japan indefinitely. Companies no longer offer the implicit contract of lifetime employment for hard workers, and so hard workers think nothing of updating their résumés every day.

And I think about my own transience: I’ve lived in eight apartments in six cities over the past nine years. My husband and I like to pick up and move (most recently, just eight blocks down the street from our previous place) as if we were selecting a new grocery store. We have a motto as a couple, which applies equally to weekend and life plans: “We’ll see how we feel,” we say.

We have trouble thinking beyond the nearest horizon, not because we don’t like the idea of commitment, but because we want to be free to theoretically commit to anything that may come up tomorrow. What if an incredible job offer wants to relocate us to Riyadh? What if we wake up Saturday morning and decide that we’ve tired of Washington, D.C.? What if – as many of our friends have experienced – one of us loses a job?

We’re both afflicted with a dangerous daydreaming ability to envision ourselves living anywhere we step off a plane. We never take a trip and think, “It’s wonderful to visit friends in Seattle,” or “Chicago is a great place for tourists in the summertime.” We always think: What if we lived here? Maybe we should live here? We could live in Key West! My husband has never even been to Portland, but we still nurse a sneaking suspicion that we should probably be living there.

In this way, we are the quintessential young professionals of the new economy – restless knowledge workers who deal in “projects,” not “careers,” who can no sooner commit to a mortgage than we can a lifetime of desk work. Our attitude is a national epidemic. It’s harder to get a mortgage today than it was 10 years ago. But a lot of people also just don’t want one any more. At the height of the housing boom, 69 percent of American households owned their homes. Housing researcher Arthur Nelson predicted to me that number would fall to 62 percent by 2020, meaning every residence built between now and then will need to be a rental.

I haven’t been able to figure out in my own household, however, how this aversion to permanence can coexist with our rising ire about renting. And I don’t know how whole cities will accommodate this new demographic: the middle-class forever renter.

Both Nelson and Florida have floated the idea that we need some kind of hybrid rental/homeownership model, some system that decouples “renter” status from income class, while allowing professionals who would have been homeowners 20 years ago to live in a comparable setting without the millstone. Maybe we allow renters to customize their homes as if they owned them, or we enable condo owners to quickly unload property to rental agents.

Short of putting us all in houseboats, I don’t know what these hybrid homes would look like, how they’d be paid for or if anyone will be willing to build them. But I suspect the trick lies outside of the architectural and financial details, that it lies in removing that fear of the approaching property manager, that lack of control over a dying dill plant. It lies in creating a feeling of ownership without the actual deed.

by Emily Badger, CityLab |  Read more:
Image: Reuters