Friday, May 3, 2013

House of Un-Representatives

Not long ago, the congressman from northeast Texas, Louie Gohmert, was talking about how the trans-Alaska oil pipeline improved the sex lives of certain wild animals — in his mind, the big tube was an industrial-strength aphrodisiac. “When the caribou want to go on a date,” he told a House hearing, “they invite each other to head over to the pipeline.”

Gohmert, consistently on the short list for the most off-plumb member of Congress, has said so many crazy things that this assertion passed with little comment. Last year, he blamed a breakdown of Judeo-Christian values for the gun slaughter at a cinema in Colorado. Last week, he claimed the Muslim Brotherhood had deep influence in the Obama administration, and that the attorney general — the nation’s highest law enforcer — sympathized with terrorists.

You may wonder how he gets away with this. You may also wonder how Gohmert can run virtually unopposed in recent elections. The answer explains why we have an insular, aggressively ignorant House of Representatives that is not at all representative of the public will, let alone the makeup of the country.

Much has been said about how the great gerrymander of the people’s House — part of a brilliant, $30 million Republican action plan at the state level — has now produced a clot of retrograde politicians who are comically out of step with a majority of Americans. It’s not just that they oppose things like immigration reform and simple gun background checks for violent felons, while huge majorities support them.

Or that, in the aggregate, Democrats got 1.4 million more votes for all House positions in 2012 but Republicans still won control with a cushion of 33 seats.

Or that they won despite having the lowest approval rating in modern polling, around 10 percent in some surveys. Richard Nixon during Watergate and BP during its initial handling of a catastrophic oil spill had higher approval ratings.

But just look at how different this Republican House is from the country it is supposed to represent. It’s almost like a parallel government, sitting in for some fantasy nation created in talk-radio land.

Congress as a whole has never been more diverse, except for the House majority. There are 41 black members of the House, but all of them are Democrats. There are 10 Asian-Americans, but all of them are Democrats. There are 34 Latinos, a record — and all but 7 are Democrats. There are 7 openly gay, lesbian or bisexual members, all of them Democrats.

Only 63 percent of the United States population is white. But in the House Republican majority, it’s 96 percent white. Women are 51 percent of the nation, but among the ruling members of the House, they make up just 8 percent. (It’s 30 percent on the Democratic side.)

It’s a stretch, by any measure, to call the current House an example of representative democracy. Now let’s look at how the members govern:

by Timothy Egan, NY Times |  Read more:
Image: via

Wayne Bremser, eating lunch
via:

Guessing Game (unknown)
via:

Graphene Paint Could Generate Electricity

Scientists at the University of Manchester combined wafers of graphene, whose discovery won researchers a Nobel Prize, with thin layers of other materials to produce solar-powered surfaces.

The resulting surfaces, which were paper-thin and flexible, were able to absorb sunlight to produce electricity at a level that would rival existing solar panels.

These could be used to create a kind of “coat” on the outside of buildings to generate the power needed to run appliances inside, while also performing other functions, such as changing colour.

The researchers are now hoping to develop the technology further by producing a paint that can be put onto the outside of buildings.

But the scientists say the new material could also enable a new generation of super-thin hand-held devices, such as mobile phones powered by sunlight.

Professor Kostya Novoselov, one of the Nobel Laureates who discovered graphene, a type of carbon that forms sheets just one atom thick, said: “We have been trying to go beyond graphene by combining it with other one atom thick materials.

“What we have been doing is putting different layers of these materials one on top of the other and what you get is a new type of material with a unique set of properties.

“It is like a book – one page contains some information but together the book is so much more.”

by Richard Gray, The Telegraph |  Read more:
Photo: The University of Manchester

The Real Tragedy

In the belief system called economics, it is an article of faith that commons are inherently tragic. Almost by definition, they are tragic because they are prone to overuse. What belongs to all belongs to none, and only private or state ownership can rescue a commons from the sad fate that will otherwise befall it.

The standard reference for this belief is an article that appeared in Science in 1968 called “The Tragedy of the Commons.” Though the author, Garrett Hardin, was a biologist, his article was strangely lacking in scientific inquiry. It was more like economics—an extrapolation from assumptions rather than an investigation of reality.

Hardin assumed that all commons are free-for-alls. He bid his readers to “picture” a hypothetical pasture peopled with hypothetical herders. These herders existed outside of any social structure and lacked even a capacity to talk with one another. They all behaved according to what the economics texts call “rationality”: they let their herds loose in the pasture in a single-minded effort to maximize their own gain, with no thought for the future or for anybody else. Under those assumptions, tragedy is a foregone conclusion.

What Hardin overlooked is that people do not necessarily behave as economists assume they do. As historian E. P. Thompson observed, Hardin failed to grasp “that commoners themselves were not without common sense.” Thompson was referring specifically to the common-field agriculture of his own England. Households had their own plots but shared land for hunting, foraging, and grazing. They pooled their implements and labor for joint maintenance and combined their herds to fertilize their respective plots. The destruction Hardin declared to be inevitable simply did not happen. To the contrary, the system worked well for hundreds of years. (...)

Hardin’s essay won applause in environmental quarters mainly because it was not really about the commons. It was a case for population control, and the tragedy thesis served as a grim parable to that end. From the start, however, anthropologists and others who actually studied commons objected to Hardin’s fabricated thesis; indeed, Elinor Ostrom won a Nobel Prize in economics for explaining the longevity of commons. Eventually, Hardin modified his stance. He acknowledged that overuse is not due to common ownership per se, but to the absence of rules governing access and use.

Overused commons do exist, of course. Fisheries are an example; the atmosphere is another. When overuse occurs, there generally has been a breakdown in the social structures that once governed use, or the scale involved makes such structures difficult to establish.

Privatizing Commons

The real tragedy surrounding the commons has been the invasion by corporate, governmental, and other external forces. Native Americans did not eradicate the buffalo on the western plains; white hunters and soldiers did. Local people in Appalachia did not slice the tops off mountains; outside corporations did. It is therefore strange that the reigning ideology focuses on the self-destruction of commons when the scale of outside devastation is so much greater.

by Jonathan Rowe, Guernica |  Read more:
Image from Flickr via Jer Kunz

The Section: Knights of Soft Rock

By 1979, guitarist Waddy Wachtel thought he'd seen everything. He had shown up for morning studio sessions to find Warren Zevon already wasted; he'd seen California Gov. Jerry Brown, Linda Ronstadt's then-boyfriend, retreat from a room of stoned rockers after unexpectedly popping into one of Ronstadt's sessions; he'd walked offstage after playing with Carole King and into a brawl with her boyfriend. But he wasn't quite prepared for the strange, vexing behavior of James Taylor.

If anyone embodied the peaceful easy feeling of the decade, it was Taylor, whose inward-looking ballads and self-effacing stage presence hit the Seventies in its sweet spot. Women fell for the brooding guy on the cover of Sweet Baby James, men related to his reserved masculinity, and radio couldn't get enough of hits like "Handy Man" and "You've Got a Friend." But as Wachtel was learning on his first tour in Taylor's band, in 1979, another, far less relaxed Taylor lurked in the shadows. That Taylor was grappling with alcoholism and hard drugs and was in the midst of a troubled marriage to Carly Simon; their two-year-old son, Ben, had suffered from fevers in his infancy. Taylor had battled addiction before, and it was surfacing once again.

The hints of trouble began before the first show. Toasting the musicians and the tour at a local bar in Texas, Taylor downed two martinis in one gulp each. "I went, 'Uh-oh – that's not a good sign!'" recalls Wachtel, chilling in his home studio in the San Fernando Valley. At 65, he looks very much as he did in the 1970s: like a hippie librarian, with his round glasses and slight frame. On the bus the morning after gigs, Taylor would be seen nursing the same bottle from the night before. At one gig, Wachtel broke his pinky toe after tripping over stage cables and later asked Taylor for a painkiller from his stash. Taylor begrudgingly said yes – but Wachtel had to physically pry one out of Taylor's mouth when his boss wouldn't give it up. Then, one day when he was riding in the back seat of a car with Taylor, Wachtel watched as a female tollbooth clerk asked Taylor for an autograph. Looking groggy, Taylor scribbled something on a piece of paper, said, "Hi, darling, here you go," and handed it to her. Wachtel glanced over and saw what Taylor had scrawled: "You bitch, I'll kill you" – signed, sardonically, "James Taylor."

Wachtel still laughs at the memory: "He was hysterical!" But in a moment of seriousness, he says, "It was pretty intense. It was tough times."

For most of the Seventies, the singer-songwriter sound embodied by Taylor, Jackson Browne, King and Crosby, Stills and Nash dominated the charts and the radio, luring thousands of bell-bottomed fans to concert halls. Those acts – as well as Zevon, Ronstadt and many more – relied on a small, rarified group of backup musicians to shape that tight, gently rocking sound. Anyone who geeked out on liner notes back then will recognize the most prominent names: guitarist Danny Kortchmar, drummer Russell Kunkel, bassist Leland Sklar and keyboardist Craig Doerge – known collectively as "the Section" – plus Wachtel and stringed-instrument wizard David Lindley. One or more of them can be heard on seemingly every one of the era's defining tracks: King's "It's Too Late" and "Sweet Seasons"; Taylor's "You've Got a Friend" and his remake of "How Sweet It Is (to Be Loved by You)"; Browne's "Doctor My Eyes"; Zevon's "Werewolves of London"; Ronstadt's "Poor Poor Pitiful Me"; Joni Mitchell's "Carey"; and entire albums by Taylor (JT, Mud Slide Slim and the Blue Horizon) and Browne (Running on Empty, Hold Out).

"They were the best," says David Crosby, who hired all of them as part of his and Graham Nash's band. To Crosby, who worked with the previous generation of studio players in L.A., Kortchmar and crew were a different breed: truly sensitive musicians who knew how to get inside the emotion of a song. "They weren't just playing their instruments," he says. "That was a major change. It put them one up on all the session players before them. They took it way past 'That's a B flat.'"

Albums like Running on Empty set a high-water mark for skillful, soulful rock musicians. "It's one of my favorite records ever," says Dawes drummer Griffin Goldsmith, who had to learn the songs when his band backed Browne on tour. "It was so intimidating to play those songs because those tracks are incredible. It's a testament to what great players they are."

To critics, Taylor, Browne, and Crosby, Stills and Nash personified everything tame about Seventies rock, and the musicians who accompanied them were inevitably guilty by association. "We were the 'Mellow Mafia,'" says Kortchmar. He recalls a particularly nasty write-up of Taylor from the time: "We had [writer] Lester Bangs threatening to stab a bottle of Ripple into James. What the fuck is he talking about? James is doing 'Fire and Rain,' 'Country Road,' about Jesus and questions and deep shit."

"I can understand you have to put a label on something," says Kunkel, "but it wasn't mellow when we were playing with Warren Zevon or playing 'Running on Empty.'"

As much as the people who hired them, the Section were all strong, sometimes pugnacious characters: Kortchmar was the designated rocker, almost a Laurel Canyon version of Al Pacino; Sklar's mountain-man whiskers and Kunkel's balding pate, quasi-mullet and muscular upper arms were as totemic as the music. The records they helped craft may have been laid-back, but the scene backstage was often another matter. "When I think about the drunkenness and driving home from studios in the middle of the night, it's miraculous that we're here," says Wachtel. "You could get away with a lot back then."

by David Browne, Rolling Stone |  Read more: 
Photo: Michael Putland/Getty Images

Thursday, May 2, 2013


Michel Lefebvre, a sisyphean task

Streams of Consciousness

Social-media tools allow anyone with a Facebook or Twitter account to play a role in determining how many readers a story reaches. And online communities such as the heavily trafficked Reddit enable readers to submit links to their favorite content, and vote up or down the content submitted by others, thereby changing a given item’s prominence on the site. The result is that the mainstream-media oligopoly is now just one force deciding what “the news” is and how important a story or image might be.

“Over the last 100 years, you go from a point when a newspaper would be able to set the tone and the five top stories of the day, to what Walter Cronkite and his cohort would say on the evening news, and then to the explosion of cable news, and now the Internet,” says Gabriel Snyder, 36, the editor of The Atlantic Wire and former editor in chief of Gawker. “We’ve gone from having just a few handfuls of places that might set the agenda to this proliferation that is reaching a near infinite number of people who can define what the top story is today.”

Since many young people share on social media what they consume online, their notion of what makes an item good is tied to an outward, rather than inward-looking, set of priorities. “Media is now a way for readers to communicate, not just consume content,” says Jonah Peretti, 39, the founder of BuzzFeed who earlier helped to launch The Huffington Post. As he points out, people pause before sharing an article or video to ponder what it says about them that they are promoting it. “Social sharing is about your identity,” says Peretti. “You want to say, ‘Look, I’m smart, or charitable, or funny.’”

Callie Schweitzer, 24, director of marketing and communications for Vox Media, a fast-growing network of new online publications, agrees. “How we get and share news has become much more reflective of who we are,” she says. “People are proud to have gotten something first, and they want to be known for having found the cool piece of video first.” Also working to develop its editorial style with an eye toward shareability is Quartz, a business website launched last fall by Atlantic Media. Zach Seward, 27, a senior editor there, argues: “Putting the lede in the lede is burying the lede; get it in the headline! If there is a striking fact or statistic that tells the story, it should be the headline—the kind of thing you want to tweet.”

And what would you want to tweet? In essence, any factoid that a follower might find remarkable and therefore clickworthy. Pieces of content that pop on social media tend to have a certain “wow” factor. Editors routinely mention visuals—usually photographs, but sometimes charts or other graphics—as being enormously helpful in making something go viral in social media. Social-media companies agree. “Tumblr is a very visual medium,” says Mark Coatney, media outreach director for the image-friendly microblogging platform. “Twitter rewards words; Tumblr rewards visually presented info, whether great photography or graphics that grab your eye.”

Hard news—especially the depressing kind—is less popular than lighter lifestyle coverage on social media. “If you look at stories being shared, no one shares news,” observes Alex Leo, 30, head of Web products for Thomson Reuters Digital and a former senior editor at HuffPost. “No one ever emails ‘73 People Killed in Iraq.’ They email stories like ‘Sitting Kills You.’” Sure enough, on the day I spoke with Leo, The New York Times’s five most-emailed stories were a Style section feature called “The End of Courtship?”, a Travel section list of “46 Places to Go in 2013,” a column by Woody Allen riffing on hypochondria, and advice pieces on parenting and money management.

By posting observations and arguments on everything from personal blogs and discussion boards to Twitter feeds and comment threads, every young person is now, on some level, an amateur journalist. As bandwidth and connection speeds have increased, they are also publishing vast quantities of photos and videos with the help of services like Instagram, Flickr, and YouTube.

Increasingly, established news outlets are turning to these on-the-ground snippets of raw material to report on important social issues, from the Occupy protests to the presidential election. Twitter has famously been used for disseminating eyewitness accounts of events such as the Arab Spring uprising. Instagram, a swiftly growing service that is essentially Twitter for photographs instead of text, allows anyone to take a photo and effortlessly post it online. Instagram shots taken during Hurricane Sandy, for example, went viral on social-media outlets and were even published by mainstream news organizations. (...)

The New York Times has figured out at least one way to appeal to Tumblr’s photo-crazy users: “The Lively Morgue,” which posts several photographs from the Times’s vast archives every week. “That’s a way the Times can make a Times-y Tumblr blog, but fun and lively,” says Aron Pilhofer, 47, the newspaper’s editor of interactive news.

If you’re wondering why the Times cares about having a successful Tumblr presence, you’re clearly over 40. Tumblr, which the average middle-aged American has probably never heard of, is an Internet behemoth, heavily skewed toward the young. There are some 100 million Tumblr blogs, drawing 172 million monthly unique visitors. Roughly 60 percent of Tumblr’s audience is under the age of 34, and more than half of that group is under 24. “Go to where young people are; don’t expect them to come to you,” says Jessica Bennett, 31, executive editor of Tumblr until her department was eliminated in April.

In addition to being a forum for reaching younger readers, Tumblr is a launching pad for content throughout social media. When Starbucks announced that it was introducing a new larger cup size in 2011, graphic artist Andrew Barr of the National Post of Canada made an illustration showing that it was larger than the capacity of the average human stomach. A Web producer posted it to the National Post Art & Design Tumblr blog, and it was reblogged widely, picked up by the Huffington Post, Gizmodo, and BuzzFeed, and discussed by Anderson Cooper on CNN. It received thousands of retweets and Facebook likes.

So photos aren’t the only kind of image that goes viral. Rather it is content that makes the person you share it with feel something, whether shock, amazement, or delight. While that may mean random ephemera like the infamous video of a chain-smoking toddler in Indonesia, it can also describe serious enterprise reporting. Vice Media has broken through with short gonzo documentaries like The Vice Guide to Karachi. “We’re exploring the insanity of the modern condition,” says Jason Mojica, Vice’s lead video producer, adding that the Vice website tries to focus on “things that make you say, ‘Holy shit! I can’t believe this exists!’”

by Ben Adler, Columbia Journalism Review |  Read more:
Illustration: Daniel Chang

Gustav Vigeland (Vigeland Museum and Park, Oslo, Norway)
Photo by Tine Berge
via:

Led Zeppelin



Hilltown, limbolo
via:

All Our Little Lives

This past Friday, David Thorpe (@Arr) tweeted, referencing a hashtag he’d created back in 2011, “let’s bring back #followateen for 2013. Here’s how it works: find a teen, follow it, and report on its life.” By the middle of the day, the #followateen hashtag yielded hundreds of results. The tweeters were adults, for the most part in their twenties and thirties, each talking about “my” teen as though the teenage Twitter user were a virtual pet they’d adopted. “My teen hates school because you have to wear pants there. I love my teen.” “My teen doesn’t want a part-time job, but he does want a hoodie.” Many of the #followateen tweets are legitimately hilarious, and the mediating narration — not retweeting “your” teen but instead paraphrasing them — is part of the comedic effect. The Buzzfeed article explaining the phenomenon cautioned that if your teen interacts with you or follows you back, “the game is over, and you must start again with a new teen.” The teens function like exhibits under glass, or like the Tamagotchi pets of the late ’90s, to which many Twitter users compared the hashtag.

Besides the comments on proms and crushes and parents and school and #yolo, the most common theme on #followateen is people pointing out that #followateen is creepy. It’s a good point. Of course it’s creepy. It’s really creepy. If you haven’t yet noticed, Twitter is, itself, creepy. The language is creepy and the concept is creepy. The form is creepy and the content is creepy and the fact of all our relative habituation to it is very, very creepy. The word follow is creepy, evoking heavy-breathing stalkers. Cult leaders have followers, and hapless victims get followed down dark alleyways. Follow implies obsession, lack of autonomy, predators, and silent threats. (...)

Twitter is a self-curated world of choose-your-own-adventure voyeurism. It becomes interesting when you realize that you can just sit behind the scenes of someone’s life and listen to them talk to themselves, when you realize how many inner monologues — those of friends, celebrities, strangers — are waiting there naked-faced in a neat backward scroll. Voyeurism is not widely acknowledged as useful, and social media are constantly being asked to justify their efficacy. Although Twitter succeeds as a mechanism for self-promotion and offers a way to connect with strangers or friends of friends, its main utility is as entertainment. We have all wished at times that we could be there for someone else’s argument, gossip session, or first date: Twitter gets us pretty close. Twitter is where we go to be creepy, and #followateen demonstrates this: It is precisely what has made Twitter so popular, so successful, and so addictive.

Teens are always interesting. In a teen’s life, something is always going wrong. Very little actually happens, but all of it is of enormous consequence. Or at least that’s how we assume it feels, from our definitively creepy position of adult voyeur. Many tweets in the #followateen feed are extremely condescending, as is Thorpe’s original tweet. The description of a “little teen life” minimizes the teen. The appeal of #followateen as characterized is intrinsically connected to the smallness and inconsequence of the teen’s life. After all, we’re all sick of being grownups, sick of caring about large things like jobs and bills and marriage and aging. It’s probably no coincidence that #followateen caught on like wildfire right as all taxes were due in the U.S. If only our lives were smaller, and if only we still had so few big things to care about that the small things could feel big. In a teen’s experience, everything is a crisis — school, clothes, parents, cars, prom, shoes, backpacks, homework. Every tiny thing is crucial and worth crying about — or, in this case, worth tweeting about. Teens are the ideal tweeters because they are never happy and always interesting.

But none of this actually distinguishes the teens from their creepy audience, as much as those of us watching might like to believe it does. Teens don’t have “little” lives because they’re teens but because all our lives are small. We stumble through the pointless minutiae of the day to day. Tiny events that seem like crises are made large only in the telling. What #followateen admits is not that teenagers’ lives are smaller than our own, but that teenagers are the only ones who are doing the internet right.

The social internet is determined by teenagers. Our use of the medium, with all its memes and codes and approved and appropriated and habituated constructions and formal devices, is adapted from the language of teenagers using the internet. The Twitter account of a 16-year-old complaining about homework and boys can be seen simply as the true and correct use of Twitter.

by Helena Fitzgerald, TNI |  Read more:
Image via

Bob Brozman (March 1954 – April 2013)


Bob Brozman, a guitarist and self-described “roving guitar anthropologist” who collaborated with musicians from Northern Ireland to Guinea to India to Papua New Guinea, died on April 23 at his home in Santa Cruz, Calif. He was 59.

The cause was suicide, said Mike Pruger, the coroner’s deputy in Santa Cruz County.

Mr. Brozman’s music was rooted in the blues, but the open tunings, syncopations and microtonal inflections of the blues inspired him to soak up styles worldwide.

He was a traveler and collector who learned to play many other stringed instruments, from the Andean charango to the Greek baglama. He visited musicians around the world at their homes, studying with them and collaborating with them on recordings that brought new twists to traditional styles. He was especially fond of island cultures where, he told Songlines magazine, “musical instruments and ideas are left behind without much instruction and then left to percolate in isolation.”

His main instrument was the National steel guitar: a gleaming Art Deco-style instrument with a broad dynamic range, often played with a slide and associated with deep blues. He wrote a book, “The History and Artistry of National Resonator Instruments,” and designed a lower-pitched guitar, the Baritone Tricone (with three cone-shaped resonators), for the company, which is now called National Reso-Phonic Guitars.

He recorded dozens of albums, including solo projects and collaborations with musicians like the Hawaiian slack-key guitarist Ledward Kaapana, the Indian slide guitarist Debashish Bhattacharya, the Guinean kora player Djeli Moussa Diawara, the Okinawan sanshin player and singer Takashi Hirayasu and the accordionist René Lacaille from the Indian Ocean island of Réunion. He also made instructional videos about ukulele, bottleneck blues, Caribbean rhythms and Hawaiian guitar.

Mr. Brozman approached traditional styles with curiosity, respect and energy. “I don’t expect them to meet me halfway musically,” he said of his collaborators in an interview with the British magazine Guitar. “I try to meet up about three-quarters of the way towards them.”

by Jon Pareles, NY Times |  Read more:

Wednesday, May 1, 2013

Cultural Revolution


The cultural nature of politics, the political nature of culture: these have formed the main quandary debated by left intellectuals, mainly among themselves (and there lies much of the trouble), over the twenty-some years since the oldest of us went off to colleges where Theory and Cultural Studies were all the impotent rage. For two decades, our thinking has turned on this culture/politics axis, both when we were spinning our wheels and when it seemed like we were getting somewhere. There are always fresh phenomena for the familiar problematic: only recently, for example, have American intellectuals, “cultural producers,” and college grads with humanities degrees adopted a basically sociological understanding of culture, including their own, or have TV show-runners displayed a notable quotient of South Asian faces. Still, all new left-wing cultural-political analyses share an old question: is this or that cultural object shoring up an unjust society, or undermining it? The question applies not just to novels, TV shows, new diets, and social media platforms, but also, more uncomfortably, to the essays and books that we left intellectuals write about these things.

The best general formulation of the problem may still be Herbert Marcuse’s essay “The Affirmative Character of Culture” (1937). For Marcuse, even when art or entertainment didn’t flatter power outright, culture as such tended to affirm, rather than negate, the existing social order: the very foretaste of a happier life offered by one kind of art, or the commiseration over present-day reality offered by another kind, helped people to endure the way things were. A dialectician, Marcuse did allow that culture could also, sometimes, negate, and seduce or incite you toward revolution — but his emphasis fell on culture as accommodation to the status quo. And this dominant pessimism about the capacity of culture to do the work of politics, occasionally relieved by a hesitant optimism, could be said to characterize the whole tradition of so-called Western Marxism to which Marcuse and the rest of the Frankfurt School belonged, many of whose unfinished projects and unresolved questions came to be inherited, knowingly or not, by French critical sociology and American cultural studies. Western Marxism (not just Marcuse, Adorno, and Benjamin but Lukács, Sartre, Althusser, et cetera) paid special attention to culture and ideology and correspondingly neglected the issues of political strategy and economic analysis that so preoccupied earlier generations of Marxist thinkers. As Perry Anderson pointed out in Considerations on Western Marxism, this cultural turn, beginning in the ’20s and in full swing by the ’30s, took place amid political disappointment: the defeat of working-class revolt in Germany, the hardening of the Soviet Union into Stalinist deformity, fascist victory in the Spanish Civil War, and so on.  (...)

Logically, there seem to be three possible results of the mounting economic insecurity of intellectuals and “culture producers” amid a general population scoured by the same blast. The possibilities are hardly exclusive; all three are to some extent inevitable, and already taking place. It’s the proportions in which they’re realized that will answer for our own time a question about the relationship between intellectuals and the general populace classically formulated by Marxism in terms of “hegemony” and “cultural revolution.”

One possibility, and the worst, would be to see the next decades exacerbate the class character of culture. In this scenario, since very few people not already wealthy would risk careers as writers or artists, certain vital strains of culture would become, more exclusively than today, the expression of an upper-class stratum. A basic relegation of literature, art, and philosophy to pastimes of the idly rich (as, say, in prerevolutionary France) doesn’t seem impossible.

A second possibility, closer to realization today, would be the confinement of important varieties of culture not to a single socioeconomic stratum but to demographic archipelagos amid rising seas of mass corporate product. Young people might give up hopes of gainful employment through art or serious writing — without giving up the production or consumption of those things. Holding down uninspiring and ill-paid day-jobs, they would huddle together in select neighborhoods of big cities and devote their evenings and weekends to culture (and laundry, shopping, and cleaning). This doesn’t sound so bad; it sounds in fact like the cozily disappointed existence, streaked with fear of unemployment, of half the people we know.

But the confinement of much cultural production to the leisure hours of a few bohemian enclaves entails real costs for the resulting culture. Challenging art and radical thought, with no hope of a large audience truly susceptible to being challenged, slip easily into administering “provocativeness” to the jadedly unprovokable. The idea of an avant-garde leading a general charge becomes, as it has, impossible; the infantry of a would-be popular audience has deserted, and an officer corps with no troops merely redesigns its uniforms according to cycles of fashion. Squabbles over medals and rank take the place of what Gramsci called the war of position; cultural hegemony — a prevailing climate of opinion — is left, uncontested, to capitalism. (...)

We are witnessing and sometimes personally experiencing a sharp de-classing of intellectuals. Our precious credentials are increasingly useless for generating income and — let us hope — social prestige, too. This should mean that most intellectuals view ourselves as sinking, economically, into the lower-middle or working class, and that “meritocratic” markers — the contents of our bookshelves and iPods; our degrees — accord us less and less social status in our own and others’ eyes. Not to say there won’t remain a self-protective cultural elite hoarding its prestige: the hostility to criticism among mutually appreciative writers, artists, and academics — an aversion to meaningful disputes — is contemporary evidence of such a siege mentality. But we can also hope for something else: perhaps intellectuals’ increasing exposure to socioeconomic danger will give a new political dangerousness and reality to what some of us produce. Might the continuing commitment of de-classed left intellectuals and radical artists to their vocations, in spite of withered prospects and eroding prestige, give our work an antisystemic force, and credibility, it has lacked?

In recent decades, varieties of politics among intellectuals, hipsters, artists, and academics have seemed to outsiders, and increasingly to ourselves, like just so many types of functionally affirmative, system-stabilizing, content-neutral cultural capital. In the years ahead it may become easier, while much else becomes harder, for both left intellectuals and our intended audience to believe that we do what we do and say what we say for the sake of conviction, not capital. Artists and intellectuals, to go on existing in serious numbers without much help from universities, corporate publishers, wealthy families, and rich patrons, will be groups marked by some sacrifice. And if we want to work hard—“Il faut travailler, rien que travailler” (“One must work, nothing but work”), Cézanne wrote to Rilke: probably the one common motto of artists and thinkers — many of us may quit the demographic islands where our very concentration drives up the rent. Released, unprotected, into the dark fields of the republic, we would find new things to say and, with luck, new people to say them to.

by The Editors, N+1 |  Read more:
Image: Maya Lin, "Storm King Wavefield."

The Death Of Blogs? Or Of Magazines?


As part of his “eulogy for the blog”, Marc Tracy touches upon the evolution of the Dish – which he praises as “a soap opera pegged to the news cycle”:
[T]oday, Google Reader is dying, Media Decoder is dead, and Andrew Sullivan’s The Daily Dish is alive in new form. This year, Sullivan decided that he was a big enough brand, commanding enough attention and traffic, to strike out on his own. At the beginning of the last decade, the institutions didn’t need him. Today, he feels his best chance for survival is by becoming one of the institutions, complete with a staff and a variety of content. What wasn’t going to work was continuing to have, merely, a blog. 
We will still have blogs, of course, if only because the word is flexible enough to encompass a very wide range of publishing platforms: Basically, anything that contains a scrollable stream of posts is a “blog.” What we are losing is the personal blog and the themed blog. Less and less do readers have the patience for a certain writer or even certain subject matter.
I wish he had some solid data to back that point up. Of course, blogs have evolved – and this one clearly has from its early days. What began as one person being mean to Maureen Dowd around 12.30 am every night is now an organism in which my colleagues and I try to construct both a personal and yet also diverse conversation in real time. But that doesn’t mean the individual blogger – small or large – is disappearing. Our entire model requires, as it did from the get-go, links to other sites and blogs – and we have not detected a shortage.

One reason we have had to grow and evolve – and this started as far back as 2003 – is that the web conversation has grown exponentially since this blog started (when Bill Clinton was president). Yes, many bloggers now get employed by more general sites, or move on to more complex forms (think of Nate Silver, a lone blogger when the Dish first championed his work and now part of an informational eco-system). But every page on the web is as accessible as every other page. Blogs will never die – but they might form a smaller part of a much larger online eco-system of discourse.

My own view is that one particular form of journalism is actually dying because of this technological shift – and it’s magazines, not blogs. When every page in a magazine can be detached from the others, when readers rarely absorb a coherent assemblage of writers in a bound paper publication but instead pick and choose whom to read online, where individual stories and posts overwhelm any single collective form of content, the magazine as we have long known it is effectively over.

Without paper and staples, it doesn’t fall apart so much as explodes into many pieces hurtling into the broader web. Where these pieces come from doesn’t matter much to the reader. So what’s taking the place of magazines are blog-hubs or group-blogs with more links, bigger and bigger ambitions and lower costs. Or aggregated bloggers/writers/galley slave curators designed by “magazines” to be sold in themed chunks. That’s why the Atlantic.com began as a collection of bloggers and swiftly turned them all into chopped up advertising-geared “channels.” That form of online magazine has nothing to do with its writing as such or its writers; it’s a way to use writers to procure money from corporations. And those channels now include direct corporate-written ad copy, designed to look as much like the actual “magazine” as modesty allows.

by Andrew Sullivan, The Dish |  Read more:
Photo: uncredited

How Wall Street Defanged Dodd-Frank

The mood was triumphant on the morning of July 21, 2010, when Barack Obama, not quite two years into his presidency, strode to a podium inside the Ronald Reagan Building, a few blocks from the White House. As he prepared to sign the Dodd-Frank Wall Street Reform and Consumer Protection Act—the sweeping legislative package designed to prevent another spectacular financial collapse—into law, the president first acknowledged the miracle of having a bill to sign at all. “Passing this…was no easy task,” he told the crowd of hundreds. “We had to overcome the furious lobbying of an array of powerful interest groups and a partisan minority determined to block change.”

Indeed, some 3,000 lobbyists had swarmed the Capitol in hopes of killing off pieces of the proposed bill—nearly six lobbyists for every member of Congress. The trench warfare spurred by Dodd-Frank left Michael Barr, then an assistant secretary at the Treasury Department, shell-shocked. “You pick a page at random,” says Barr, now a law professor at the University of Michigan, “and I’ll tell you about all the issues on that page where the fighting was intense.” Remarkably, despite the onslaught, Dodd-Frank “got stronger rather than weaker the closer we got to passage, which is incredibly unusual,” says Lisa Donner, executive director of Americans for Financial Reform, one of a handful of advocacy groups that fought tenaciously for the bill.

That sense of victory barely lasted the morning. The same financial behemoths that had fought so ferociously to block Dodd-Frank were not going to let the mere fact of the bill’s passage ruin their plans. “Halftime,” shrugged Scott Talbott, chief lobbyist for the Financial Services Roundtable, a lobbying group representing 100 of the country’s largest financial institutions. It was 5:30 am on a Friday when a joint House-Senate conference committee approved the bill’s final language. By Sunday, an industry lawyer named Annette Nazareth—a former top official at the Securities and Exchange Commission whose firm counts JPMorgan Chase and Goldman Sachs among its clients—had already sent off a heavily annotated copy of the 848-page bill to colleagues at her old agency. According to a congressional staffer whose boss was a key architect of Dodd-Frank, Nazareth is one of two “generals” running the campaign to undo the bill. The other is Eugene Scalia, a fearsome litigator and son of the Supreme Court justice.

After Dodd-Frank’s passage, lobbyists for the big banks and industry trade groups divided themselves into eighteen working groups, each organized around a different element of the new law. “That’s when the real work began,” Talbott tells me. One working group focused on derivatives reform, including the requirement that these complex financial instruments now be sold on open exchanges in the fashion of stocks and bonds. Another focused on efforts to hammer out the so-called Volcker Rule, which would limit the ability of federally insured banks to wager on risky ventures. A third tackled the new Consumer Financial Protection Bureau (CFPB), created to protect ordinary consumers from Wall Street deceptions involving mortgages, credit cards and other major profit centers for the banks.

In the months leading up to Dodd-Frank’s passage, the big story was the staggering sums of money being spent by the industry to defeat the bill—more than $1 billion on lobbying alone, according to one estimate. Yet, incredibly, the financial sector dramatically increased its spending after Dodd-Frank was signed. Whereas commercial banks such as Wells Fargo, Citigroup and JPMorgan Chase, along with their trade groups, spent $55 million lobbying in 2010 (the year Dodd-Frank became law), they would collectively spend $61 million in 2011 and again in 2012, according to OpenSecrets.org. The twenty-eight lobbyists Talbott has on the payroll at the Financial Services Roundtable make it relatively small fry. The American Bankers Association has ninety-one lobbyists representing its interests, while the US Chamber of Commerce has 183. Goldman Sachs has fifty-one lobbyists, JPMorgan Chase sixty, and even the obscure-sounding Securities Industry and Financial Markets Association is armed to the teeth, hiring the services of forty-nine lobbyists.

Even so, those numbers don’t begin to capture the army of people being paid exorbitant sums to beat back reform. “The lobbyists are just the point of the spear,” said Ed Mierzwinski, director of consumer programs for the US Public Interest Research Group (PIRG). “There are also the regulatory lawyers, the research staffs, the PR people and all those loyal think tank supporters shilling for the banks.”

Dodd-Frank’s Achilles’ heel is that it leaves the tough work of writing the actual regulations to existing federal agencies like the Federal Reserve and the Securities and Exchange Commission, which had failed so miserably at protecting the public interest in the run-up to the 2008 crash, as well as to backwater independent agencies like the Commodity Futures Trading Commission (CFTC), which was tasked with regulating a derivatives market that played a central role in the collapse of the global economy.

The story of how Wall Street lobbyists worked the halls of Congress, blocking the appointment of Elizabeth Warren, Obama’s first choice to head the CFPB, or pushing bills aimed at defanging Dodd-Frank, is fairly well-known by now. But it was the stealthy work of battalions of regulatory lawyers, who descended on the private offices of regulators deep inside the bureaucracy, that has proven more crucial to the industry’s effort to pick off pieces of Dodd-Frank. There, a kind of ground war has been going on for almost three years, with the regulators waging hand-to-hand combat to defend every clause and comma in Dodd-Frank, and the lawyers fighting to insert any loophole they can to protect their clients’ extraordinary profits. This is how the miracle that was the making of Dodd-Frank—hailed as the most comprehensive financial reform since the 1930s—became a slow-moving horror movie called “The Unmaking of Dodd-Frank”: a perfect case study of the ways an industry with nearly unlimited resources can avoid a set of tough-minded reforms it doesn’t like.

by Gary Rivlin, The Nation |  Read more:
Photo: AP Photo/Mary Altaffer