Wednesday, March 19, 2014

John Hammond

Rage Against the Machines

Anybody who grew up in America can tell you it’s a pretty violent country, and every consumer knows that our mass culture was reflecting that fact long before it began spewing the stuff in videogames. So on the surface, it seems strange that special powers should be attributed to games. What gives?  (...)

But if there is something dangerous about videogames now, it’s not the specter of players transforming into drooling sociopaths by enacting depraved fantasies. Instead of forensically dissecting the content packaged in games, we should look closely at the system of design and distribution that’s led them out of teen bedrooms and into the hands of a broader audience via computers and smartphones. It’s not Doom or Mortal Kombat or Death Race we should fear, in other words; it’s Candy Crush Saga, Angry Birds, and FarmVille.

To understand what is really distinctive about videogames, it helps to see how their operation runs like a racket: how the experience is designed to offer players a potentially toxic brew of guilty pleasure spiced with a kind of extortion and how they profit by stoking addiction. We might remember why we looked sideways at machine-enabled gaming in the first place—because it was a mode of play that seemed to normalize corrupt business practices in the guise of entertainment. Because the industry often seems like just another medium for swindlers. (...)

The new model of videogame delivery is “free-to-play” (F2P). At first it was limited to massively multiplayer online games (MMOs) like Neopets and MapleStory, which primarily relied on kids pestering their parents to fund their accounts so that they could buy in-game goods. These games always offer the first taste for free, and then ratchet up the attraction of paying for a more robust or customized gaming environment. In 2007, Facebook released a platform for developers to make free-to-play apps and games run within the social network’s ecosystem. Then came the iPhone, the Apple App Store, and all the copycats and spinoffs that it inspired. By 2010, free-to-play had become the norm for new games, particularly those being released for play online, via downloads, on social networks, or on smartphones—a category that is now quickly overtaking disc-based games. The point is to sell, sell, sell; the games give users opportunities to purchase virtual items or add-ons like clothing, hairstyles, or pets for their in-game characters.

In 2009, Facebook gaming startup darling Zynga launched a free-to-play game called FarmVille that went on to reach more than 80 million players. It offered a core experience for free, with add-ons and features available to those with enough “farm cash” scrip. Players can purchase farm cash through real-money transactions, earn it through gameplay accomplishments, or receive it as a reward for watching video ads or signing up for unrelated services that pay referral fees to game operators. Former Zynga CEO Mark Pincus sought out every possible method for increasing revenues. “I knew I needed revenues, right fucking now,” Pincus told attendees of a Berkeley startup mixer in 2009. “I did every horrible thing in the book just to get revenues right away.”

Every horrible thing in the book included designing a highly manipulative gameplay environment, much like the ones doled out by slot machines and coin-ops. FarmVille users had to either stop after they expended their in-game “energy” or pay up, in which case they could immediately continue. The in-game activities were designed so that they took much longer than any single play session could reasonably last, requiring players to return at prescheduled intervals to complete those tasks or else risk losing work they’d previously done—and possibly spent cash money to pursue. Players were prodded to spread notices and demands among their Facebook friends in exchange for items or favors that were otherwise inaccessible. As with slots and coin-ops, the occasional calculated anomaly in a free-to-play game doesn’t alter the overall results of the system, but only recharges the desire for another surprise, another epiphany; meanwhile, the expert player and the jackpot winner are exceptions that prove the rule.
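
The loop Bogost describes is simple enough to sketch in code. Below is a minimal toy model (my own illustration, with invented names like EnergyGate; this is not Zynga's actual code) of the energy-and-timer mechanic: actions drain energy that regenerates only slowly for free, a purchase refills it instantly, and planted tasks are timed so that a player who fails to return on schedule forfeits the work.

    import time

    class EnergyGate:
        """Toy model of a free-to-play energy/timer loop (illustrative only)."""

        def __init__(self, max_energy=30, regen_seconds=300):
            self.max_energy = max_energy
            self.energy = max_energy
            self.regen_seconds = regen_seconds   # one point back every 5 minutes
            self.last_tick = time.time()
            self.pending = {}                    # crop -> (ready_at, expires_at)

        def _regenerate(self):
            # Waiting is always free -- just deliberately slower than paying.
            now = time.time()
            regained = int((now - self.last_tick) // self.regen_seconds)
            if regained:
                self.energy = min(self.max_energy, self.energy + regained)
                self.last_tick = now

        def act(self, cost=1):
            self._regenerate()
            if self.energy < cost:
                return "out of energy: wait for the timer, or buy a refill"
            self.energy -= cost
            return "ok"

        def buy_refill(self):
            # The monetization hook: cash converts directly into continued play.
            self.energy = self.max_energy

        def plant(self, crop, hours_to_ripen, hours_until_withered):
            # Tasks outlast any single session, forcing a scheduled return visit.
            now = time.time()
            self.pending[crop] = (now + hours_to_ripen * 3600,
                                  now + hours_until_withered * 3600)

        def harvest(self, crop):
            ready_at, expires_at = self.pending.pop(crop)
            now = time.time()
            if now < ready_at:
                return "not ready: come back later (or pay to speed it up)"
            if now > expires_at:
                return "withered: the work, and any cash spent on it, is lost"
            return "harvested"

The asymmetry is the whole design: the timer manufactures scarcity, loss aversion schedules the return visit, and the refill button sells relief from a shortage the game itself created.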

FarmVille’s mimicry of the economically obsolete production unit of the family farm, in short, proved all too apt—like the hordes of small farmers sucked into tenantry and debt peonage during the first wave of industrialization in America, the freeholders on FarmVille’s vast virtual acreage soon learned that the game’s largely concealed infrastructure was where all the real fee-gouging action was occurring. Even those who kept their wallets tucked away in their pockets and purses would pay in other ways—by spreading “viral” invitations to recruit new farmers, for example. FarmVille users might have been having fun in the moment, but before long, they would look up to discover they owed their souls to the company store.

by Ian Bogost, Baffler |  Read more:
Image: Michael Duffy

Tuesday, March 18, 2014

The Human Heart of Sacred Art


There is a passage in Marilynne Robinson's novel Gilead, in which the main character John Ames, a pastor, is walking to his church, and comes across a young couple ahead of him in the street:
The sun had come up brilliantly after a heavy rain, and the trees were glistening and very wet. On some impulse, plain exuberance, I suppose, the fellow jumped up and caught hold of a branch, and a storm of luminous water came pouring down on the two of them, and they laughed and took off running, the girl sweeping water off her hair and her dress as if she were a little bit disgusted, but she wasn't. It was a beautiful thing to see, like something from a myth. I don't know why I thought of that now, except perhaps because it is easy to believe in such moments that water was made primarily for blessing, and only secondarily for growing vegetables or doing the wash. I wish I had paid more attention to it.
It is a wonderful, luminous passage, typical of Robinson's ability to discover the poetic even in the most mundane. Robinson is a Christian, indeed a Calvinist (though, improbably, she tends to see John Calvin more as a kind of Erasmus-like humanist than as the firebrand preacher who railed against the human race as constituting a "teeming horde of infamies"), whose life and writing are suffused with religious faith. Robinson's fiction possesses an austere beauty, "a Protestant bareness" as the critic James Wood has put it,[1] that recalls both the English poet George Herbert and "the American religious spirit that produced Congregationalism and nineteenth-century Transcendentalism and those bareback religious riders Emerson, Thoreau and Melville".

There is in Robinson's writing a spiritual force that clearly springs from her religious faith. It is nevertheless a spiritual force that transcends the merely religious. "There is a grandeur in this vision of life", Darwin wrote in The Origin of Species, expressing his awe at nature's creation of "endless forms most beautiful and most wonderful". The springs of Robinson's awe are different from those of Darwin's. And yet she too finds grandeur in all that she touches, whether in the simple details of everyday life or in the great moral dilemmas of human existence. Robinson would probably describe it as the uncovering of a divine presence in the world. But it is also the uncovering of something very human, a celebration of our ability to find the poetic and the transcendent, not through invoking the divine, but as a replacement for the divine.

One does not, of course, have to be religious to appreciate religiously inspired art. One can, as a non-believer, listen to Mozart's Requiem or Nusrat Fateh Ali Khan's qawwali, look upon Michelangelo's Adam or the patterns of the Sheikh Lotfollah Mosque in Isfahan in Iran, read Dante's Divine Comedy or Lao Zi's Daode Jing, and be drawn into a world of awe and wonder. Many believers may question whether non-believers can truly comprehend the meaning of religiously inspired art. We can, however, turn this round and ask a different question. What is it that is "sacred" about sacred art? For religious believers, the sacred, whether in art or otherwise, is clearly that which is associated with the holy and the divine. The composer John Tavener, who died at the end of last year, was one of the great modern creators of sacred music. A profoundly religious man – he was a convert to Russian Orthodoxy – Tavener's faith and sense of mysticism suffused much of his music. Historically, and in the minds of most people today, the sacred in art is, as it was with Tavener, inextricably linked with religious faith.

There is, however, another sense in which we can think about the sacred in art. Not so much as an expression of the divine but, paradoxically perhaps, more an exploration of what it means to be human; what it is to be human not in the here and now, not in our immediacy, nor merely in our physicality, but in a more transcendental sense. It is a sense that is often difficult to capture in a purely propositional form, but which we seek to grasp through art or music or poetry. Transcendence does not, however, necessarily have to be understood in a religious fashion – that is, solely in relation to some concept of the divine. It is rather a recognition that our humanness is invested not simply in our existence as individuals or as physical beings, but also in our collective existence as social beings and in our ability, as social beings, to rise above our individual physical selves and to see ourselves as part of a larger project, to cast upon the world, and upon human life, a meaning or purpose that exists only because we as human beings create it.
by Kenan Malik, Eurozine | Read more:
Image: Richard Pluck. Source: Flickr

How "Revolution" Became an Adjective

    In case of rain, the revolution will take place in the hall.
    -- Erwin Chargaff

For the last several years, the word “revolution” has been hanging around backstage on the national television talk-show circuit waiting for somebody, anybody -- visionary poet, unemployed automobile worker, late-night comedian -- to cue its appearance on camera. I picture the word sitting alone in the green room with the bottled water and a banana, armed with press clippings of its once-upon-a-time star turns in America’s political theater (tie-dyed and brassiere-less on the barricades of the 1960s countercultural insurrection, short-haired and seersucker smug behind the desks of the 1980s Reagan Risorgimento), asking itself why it’s not being brought into the segment between the German and the Japanese car commercials.

Surely even the teleprompter must know that it is the beast in the belly of the news reports, more of them every day in print and en blog, about income inequality, class conflict, the American police state. Why then does nobody have any use for it except in the form of the adjective, revolutionary, unveiling a new cellphone app or a new shade of lipstick?

I can think of several reasons, among them the cautionary tale told by the round-the-clock media footage of dead revolutionaries in Syria, Egypt, and Tunisia, also the certain knowledge that anything anybody says (on camera or off, to a hotel clerk, a Facebook friend, or an ATM) will be monitored for security purposes. Even so, the stockpiling of so much careful silence among people who like to imagine themselves on the same page with Patrick Henry -- “Give me liberty, or give me death” -- raises the question as to what has become of the American spirit of rebellion. Where have all the flowers gone, and what, if anything, is anybody willing to risk in the struggle for “Freedom Now,” “Power to the People,” “Change We Can Believe In”?

My guess is next to nothing that can’t be written off as a business expense or qualified as a tax deduction. Not in America at least, but maybe, with a better publicist and 50% of the foreign rights, somewhere east of the sun or west of the moon. (...)

I inherited the instinct as a true-born American bred to the worship of both machinery and money; an appreciation of its force I acquired during a lifetime of reading newspaper reports of political uprisings in the provinces of the bourgeois world state -- in China, Israel, and Greece in the 1940s; in the 1950s those in Hungary, Cuba, Guatemala, Algeria, Egypt, Bolivia, and Iran; in the 1960s in Vietnam, France, America, Ethiopia, and the Congo; in the 1970s and 1980s in El Salvador, Poland, Nicaragua, Kenya, Argentina, Chile, Indonesia, Czechoslovakia, Turkey, Jordan, Cambodia, again in Iran; over the last 24 years in Russia, Venezuela, Lebanon, Croatia, Bosnia, Libya, Tunisia, Syria, Ukraine, Iraq, Somalia, South Africa, Romania, Sudan, again in Algeria and Egypt.

The plot line tends to repeat itself -- first the new flag on the roof of the palace, rapturous crowds in the streets waving banners; then searches, requisitions, massacres, severed heads raised on pikes; soon afterward the transfer of power from one police force to another police force, the latter more repressive than the former (darker uniforms, heavier motorcycles) because more frightened of the social and economic upheavals they can neither foresee nor control.

All the shiftings of political power produced changes within the committees managing regional budgets and social contracts on behalf of the bourgeois imperium. None of them dethroned or defenestrated Adams’ dynamo or threw off the chains of Marx’s cash nexus. That they could possibly do so is the “romantic idea” that Albert Camus, correspondent for the French Resistance newspaper Combat during and after World War II, sees in 1946 as having been “consigned to fantasy by advances in the technology of weaponry.”

The French philosopher Simone Weil draws a corollary lesson from her acquaintance with the Civil War in Spain, and from her study of the communist Sturm und Drang in Russia, Germany, and France subsequent to World War I. “One magic word today seems capable of compensating for all sufferings, resolving all anxieties, avenging the past, curing present ills, summing up all future possibilities: that word is revolution... This word has aroused such pure acts of devotion, has repeatedly caused such generous blood to be shed, has constituted for so many unfortunates the only source of courage for living, that it is almost a sacrilege to investigate it; all this, however, does not prevent it from possibly being meaningless.”

by Lewis Lapham, Tom Dispatch |  Read more:
Image: via:

On a Strange Roof, Thinking of Home

In 2009 The Oxford American polled 134 Southern writers and academics and put together a list of the greatest Southern novels of all time based on their responses. All save one, The Adventures of Huckleberry Finn, were published between 1929 and 1960. What we think of when we think of “Southern fiction” exists now almost entirely within the boundaries of the two generations of writers that occupied that space. Asked to name great American authors, we’ll give answers that span time from Hawthorne and Melville to Whitman to DeLillo. Ask for great Southern ones and you’ll more than likely get a name from the Southern Renaissance: William Faulkner, Harper Lee, Flannery O’Connor, Walker Percy, Eudora Welty, Thomas Wolfe—all of them sandwiched into the same couple of post-Agrarian decades.

The two waves of Southern writers that crested in the wake of the Agrarian-Mencken fight, first in the 1930s and ’40s, and then in the ’50s and ’60s, didn’t build upon the existing tradition of Southern letters. They weren’t conceived of as new additions to the canon, but as an entirely new canon unto themselves, supplanting the old. They remade the popular notion of Southern literary culture, obscuring predecessors who had, in their time, seemed immortal.

“Southern,” as a descriptor of literature, is immediately familiar, possessed of a thrilling, evocative, almost ontological power. It is a primary descriptor, and alone among American literary geographies in that respect. Faulkner’s work is essentially “Southern” in the same way that Thomas Pynchon’s is essentially “postmodern,” but not, you’ll note, “Northeastern.” To displace Faulkner from his South would be to remove an essential quality; he would functionally cease to exist in a recognizable way.

It applies to the rest of the list, too (with O'Connor the possible exception, being inoculated somewhat by her Catholicism). It is impossible to imagine these writers divorced from the South. This is unusual, and a product of the unusual circumstances that gave rise to them. Faulkner, Lee, Percy, and Welty were no more Southern than Edgar Allan Poe or Sidney Lanier or Kate Chopin, and yet their writing, in the context of the South at that time, definitively was. There's a universal appeal to their work, to be certain, but it's also very much a regional literature, one grappling with a very specific set of circumstances in a fixed time, and correspondingly, one with very specific interests: the wearing away of the old Southern social structures, the economic uncertainty inherent in family farming, and overt, systematized racism (which, while undoubtedly still present in the South today, is very much changed from what it was).  (...)

Put a character in a tobacco field and give them a shotgun and an accent and it will evoke, without fail, a sense of the South; this is true. If they pop off with a “Hey there, y’all,” it will sound fitting, correct, like the accordion bleats that mark transitions between stories in a public radio program; useful in pushing you toward a desired emotional state, and fun to listen to when done well. But, on the other hand, it doesn’t mean anything. If this is, in fact, “Southern fiction,” then it is becoming as stale as it was a century ago—updated only in that, instead of regurgitating the Lost Cause ethos, it is now Faulkner’s South that’s subjected to the regional nostalgic impulse, a double reverberation.

There is nothing wrong with these writers because of this. It’s not that they’ve failed somehow to keep up, or are stupefying readers, or anything of the sort. It’s that this kind of writing is no longer reflective of the South—or, it reflects a South that is no longer. We wouldn’t think of someone writing whaling novels as quintessentially “New England” anymore, either. The South isn’t so homogenous a culture as it once was, and the societal tropes that Faulkner and Welty and even Barry Hannah grew up with and explored in their fiction are, in large part, gone. The rise of industrial-scale agribusiness, rapid suburbanization, the death of traditional industries like textiles, the corresponding growth of high-tech industries, a major increase in the Hispanic population: all these things and many more have contributed to a wildly different South than the one summoned in what we casually call “Southern writing.”

by Ed Winstead, Guernica | Read more:
Image: Alec Soth

Jonathan Curry
via:

DHS Wants to Track You Everywhere You Drive

[ed. See also: this.]

Immigration and Customs Enforcement wants to firm up its relationship with Vigilant Solutions, the most dominant actor in the increasingly powerful license plate reader industry, to enable agents to more efficiently track down people they want to deport. Vigilant maintains a national database, called the National Vehicle Location Service, containing information revealing the sensitive driving histories of millions of law-abiding people. According to the company, the database currently contains nearly 2 billion discrete records of our movements, and grows by almost 100 million records per month.

In a widely reported but largely misunderstood solicitation for bids, DHS announced that it wants access to a nationwide license plate reader database, along with technology enabling agents to capture and view data from the field, using their smartphones. Reading the solicitation, I was struck by the fact that it almost perfectly describes Vigilant’s system. It’s almost as if the solicitation was written by Vigilant, it so comprehensively sketches out the contours of the corporation’s offerings.

Lots of news reports are misinterpreting DHS’ solicitation, implying that the agency wants to either build its own database or ask a contractor to build one. The department doesn’t intend to build its own license plate reader database, and it isn’t asking corporations to build one. Instead, it is seeking bids from private companies that already maintain national license plate reader databases. And because it’s the only company in the country that offers precisely the kind of services that DHS wants, there’s about a 99.9 percent chance that this contract will be awarded to Vigilant Solutions. (Mark my words.)

According to documents obtained by the ACLU, ICE agents and other branches of DHS have already been tapping into Vigilant’s data sets for years. So why did the agency decide to go public with this solicitation now? Your guess is as good as mine, but it may simply be a formality so that the agency can pretend as if there was actually robust competition in the bidding process. (As recent reporting about the FBI’s secretive surveillance acquisitions has shown, no-bid contracts for spy gear tend to raise eyebrows when they’re finally discovered.)

What's the problem with a nationwide license plate tracking database, anyway? If you aren't the subject of a criminal investigation, the government shouldn't be keeping tabs on when you go to the grocery store, your friend's house, the abortion clinic, the antiwar protest, or the mosque. In a democratic society, we should know almost everything about what the government's doing, and it should know very little to nothing about us, unless it has a good reason to believe we're up to no good and shows that evidence to a judge. Unfortunately, that basic framework for an open, democratic society has been turned on its head. Now the government routinely collects vast troves of data about hundreds of millions of innocent people, casting everyone as a potential suspect until proven innocent. That's unacceptable.

by Admin, SOS |  Read more:
Image: uncredited

Fast Fashion


Over the past 15 years, the fashion industry has undergone a profound and baffling transformation. What used to be a stable three-month production cycle—the time it takes to design, manufacture, and distribute clothing to stores, in an extraordinary globe-spanning process—has collapsed, across much of the industry, to just two weeks. The "on-trend" clothes that were, until recently, only accessible to well-heeled, slender urban fashionistas are now available to a dramatically broader audience, at bargain prices. A design idea for a blouse, cribbed from a runway show in Paris, can make it onto the racks in Wichita in a wide range of sizes within the space of a month.

Popularly known as “fast fashion,” this trend has inspired a great deal of media attention, but not many satisfying explanations as to how this huge shift came about, especially in the United States, and why it happened when it did. Some accounts attribute the new normal to top-down “process innovations” at big companies like Inditex, the parent company of Zara and the world’s largest—but hardly most typical—fast-fashion retailer. And at times, popular writing has simply lumped fast fashion in with the generally sped-up pace of life in the digital age, as if complex industrial systems were as fluid as our social media habits.

So the questions remain: Who is designing and manufacturing these garments in the U.S.? How are so many different suppliers producing such large volumes of clothes so quickly, executing coordinated feats of design, production, and logistics in a matter of days?

For my own part, I went looking for the answers in church.

Specifically, I paid a visit this past summer to the Ttokamsa Home Mission Church, a large, gray, industrial box of a building near a highway on the edge of Echo Park, a residential neighborhood in East Los Angeles. A well-known local institution among Korean Americans, the church is the spiritual home of the Chang family—the owners of Forever 21, the largest fast-fashion retailer based in the U.S. (Look on the bottom of any canary-yellow Forever 21 shopping bag and you’ll find the words “John 3:16.”)

With more than 630 locations worldwide, the Changs’ retail empire employs more than 35,000 people and made $3.7 billion in revenue in 2012. But in the pews at Ttokamsa, the Changs are in good company: The vast majority of their fellow parishioners are Korean families that also make their livelihoods in fast fashion.

As an anthropologist, I have been coming to Los Angeles with the photographer Lauren Lancaster for the past two years to study the hundreds of Korean families who have, over the last decade, transformed the city’s garment district into a central hub for fast fashion in the Americas. These families make their living by designing clothes, organizing the factory labor that will cut and sew them in places like China and Vietnam, and selling them wholesale to many of the most famous retailers in the U.S.—including Forever 21, Urban Outfitters, T.J. Maxx, Anthropologie, and Nordstrom.

I first became curious about the garment sector in Los Angeles after noticing that an increasingly large proportion of students at Parsons, the New York design school where I teach, were second-generation children of Korean immigrants from Southern California. Many of them were studying fashion marketing and design so they could return to Los Angeles to help scale up their parents’ businesses. These students and their contemporaries were, I came to understand, the driving force behind U.S. fast fashion—a phenomenon whose rise is less a story about corporate innovation than one about an immigrant subculture coming of age.

by Christina Moon, Pacific Standard |  Read more:
Image: Lauren Lancaster

Arthur Meyerson, Color of Light
via:

Monday, March 17, 2014

A Scientific Breakthrough Lets Us See to the Beginning of Time

At rare moments in scientific history, a new window on the universe opens up that changes everything. Today was quite possibly such a day. At a press conference on Monday morning at the Harvard-Smithsonian Center for Astrophysics, a team of scientists operating a sensitive microwave telescope at the South Pole announced the discovery of polarization distortions in the Cosmic Microwave Background Radiation, which is the observable afterglow of the Big Bang. The distortions appear to be due to the presence of gravitational waves, which would date back to almost the beginning of time.

This observation, made possible by the fact that gravitational waves can travel unimpeded through the universe, takes us to 10⁻³⁵ seconds after the Big Bang. By comparison, the Cosmic Microwave Background—which, until today, was the earliest direct signal we had of the Big Bang—was created when the universe was already three hundred thousand years old.
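
As a back-of-envelope conversion (my own arithmetic, offered only to put the two timescales on one scale; figures rounded to the nearest order of magnitude):

    \[
    t_{\mathrm{CMB}} \approx 3\times10^{5}\ \mathrm{yr} \times 3.15\times10^{7}\ \mathrm{s/yr} \approx 10^{13}\ \mathrm{s},
    \qquad
    \frac{t_{\mathrm{CMB}}}{t_{\mathrm{GW}}} \approx \frac{10^{13}\ \mathrm{s}}{10^{-35}\ \mathrm{s}} = 10^{48}.
    \]

That is, the gravitational-wave signal would reach back roughly forty-eight orders of magnitude closer to the Big Bang than the microwave background does, give or take a few powers of ten depending on the exact epoch assumed for the CMB.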

If the discovery announced this morning holds up, it will allow us to peer back to the very beginning of time—a million billion billion billion billion billion times closer to the Big Bang than any previous direct observation—and will allow us to explore the fundamental forces of nature on a scale ten thousand billion times smaller than can be probed at the Large Hadron Collider, the world’s largest particle accelerator. Moreover, it will allow us to test some of the most ambitious theoretical speculations about the origin of our observed universe that have ever been made by humans—speculations that may first appear to verge on metaphysics. It might seem like an esoteric finding, so far removed from everyday life as to be of almost no interest. But, if confirmed, it will have increased our empirical window on the origins of the universe by a margin comparable to the amount it has grown in all of the rest of human history. Where this may lead, no one knows, but it should be cause for great excitement.

Even for someone who has been thinking about these possibilities for the past thirty-five years, the truth can sometimes seem stranger than fiction. In 1979, a young particle physicist named Alan Guth proposed what seemed like an outrageous possibility, which he called Inflation: that new physics, involving a large extrapolation from what could then be observed, might imply that the universe expanded in size by over thirty orders of magnitude in a tiny fraction of a second after the Big Bang, increasing in size by a greater amount in that instant than it has in the fourteen billion years since.

by Lawrence Krauss, New Yorker |  Read more:
Image: Steffen Richter/Harvard University

Why the Long Face?

I was listening to Paul Simon’s Hearts and Bones album recently, for the first time in many years – the first time, really, since I was a young teenager. I bought it when it came out in 1983 and listened to it over and over. But hearing it again, and particularly listening to the title track, I was struck by a question: how did I take this back then? What did it mean to me, and why did it mean so much?

So: the title song is a beautifully worn-down response to a relationship at its end, a mix of nostalgic glimpses of happier times and a weary, bruised sense of life in the aftermath of some cathartic break-up. Listening to it as a young teenager, still a virgin and almost wholly inexperienced in such emotions, I wonder if I didn’t think this is how I want to feel. I wanted the happiness, but in a retrospective way (because then it’s done and dusted and safe); and I wanted the melancholy because it just seemed so grown-up and sophisticated and suave. I wanted, as an old joke has it, to skip the marriage and go straight to the divorce. After all – and I am hardly the first person to point this out – there is a complex sort of joy in sadness.

But can this be right? Surely what people want is to be happy. Whole philosophies (I’m looking at you, utilitarianism) rest on the premise that more happiness is always and everywhere a good thing. There is a Global Happiness Index, measuring how happy people are (Denmark tops the league). Bhutan even has a Gross National Happiness Commission, with the power to review government policy decisions and allocate resources.

It’s good to be happy sometimes, of course. Yet the strange truth is that we don’t wish to be happy all the time. If we did, more of us would be happy – it’s not as if we in the affluent West lack tools or means to gratify ourselves. Sometimes we are sad because we have cause, and sometimes we are sad because – consciously or unconsciously – we want to be. Perhaps there’s a sense in which emotional variety is better than monotony, even if the monotone is a happy one. But there’s more to it than that, I think. We value sadness in ways that make happiness look a bit simple-minded. (...)

It was Charles Darwin, in The Expression of the Emotions in Man and Animals (1872), who noted that sadness manifested the same way in all cultures. For something so ubiquitous, it is tempting to venture an evolutionary explanation. Alas, the anthropological and evolutionary work in this area has focused almost entirely upon depression, which is not quite what we are talking about here. I can tell you with rather grim authority that the difference between elegant ennui and the black dog is like the difference between pleasant intoxication and typhus. Many evolutionary theories have been proposed for depression’s adaptive value, but no one has, so far as I am aware, tried to claim that it is enjoyable.

If depression is a foul miasma wreathing the brain, elegant sadness is more like a peacock's tail, coloured in blue-gentian and rich marine greens. Is it also universal? To this question, anthropology offers no definitive answer. Yet the condition certainly manifests itself in a suggestive array of cultures. It is the sadness to which the Japanese phrase mono no aware gestures (物の哀れ, literally 'the beautiful sorrow of things'). It is the haunted simplicity of those musical traditions that spread from Africa into the New World as the Blues. It's the mixture of strength, energy, pity and melancholy that Claude Lévi-Strauss found in Brazil, encapsulated in the title of his book about his travels there, Tristes Tropiques (1955). It's the insight of Vergil's Aeneas, as he looks back over his troubled life and forward to troubles yet to come: sunt lacrimae rerum; there are tears in everything, said not mournfully nor hopelessly but as a paradoxical statement about the beauty of the world (Aeneid 1:462).

by Adam Roberts, Aeon |  Read more:
Image via: 

Sunday, March 16, 2014

Wild Darkness

For twenty-six Septembers I’ve hiked up streams littered with corpses of dying humpbacked salmon. It is nothing new, nothing surprising, not the stench, not the gore, not the thrashing of black humpies plowing past their dead brethren to spawn and die. It is familiar; still, it is terrible and wild. Winged and furred predators gather at the mouths of streams to pounce, pluck, tear, rip, and plunder the living, dying hordes. This September, it is just as terrible and wild as ever, but I gather in the scene with different eyes, the eyes of someone whose own demise is no longer an abstraction, the eyes of someone who has experienced the tears, rips, and plunder of cancer treatment. In spring, I learned my breast cancer had come back, had metastasized to the pleura of my right lung. Metastatic breast cancer is incurable. Through its prism I now see this world.

I’m not a salmon biologist. I don’t hike salmon streams as part of my job. I hike up streams and bear trails and muskegs and mountains for pleasure. The work my husband, Craig, and I do each field season in Prince William Sound is sedentary. We study whales. For weeks at a stretch, we live on a thirty-four-foot boat far from any town, often out of cell-phone and internet range. We sit for hours on the flying bridge with binoculars or a camera pressed to our eyes. Periodically, we climb down the ladder and walk a few paces to the cabin to retrieve the orca or humpback catalogue, to drop the hydrophone, or to grab fresh batteries, mugs of hot soup or tea, or granola bars. We climb back up. We get wet; we get cold; we get bored; sometimes we even get sunburned. We eat, sleep, and work on the boat. Hikes are our sanity, our maintenance. We hike because we love this rainy, lush, turbulent, breathing, expiring, windy place as much as we love our work with whales. It’s a good thing, because in autumn weather thwarts our research half the time and sends us ashore, swaddled in heavy rain gear, paddling against williwaw gusts and sideways rain in our red plastic kayaks. What we find there is not always pretty.

Normally, September is the beginning of the end of our field season, which starts most years in April or May. But for me, this year it’s just the beginning, and conversely, like everything else in my life since I learned cancer had come back, it’s tinged with the prescience of ending. The median survival for a person with metastatic breast cancer is twenty-six months. Some people live much longer. An oncologist told me he could give me a prognosis if I demanded one, but it would most likely be wrong. I changed the subject. No one can tell me how long I will live. Will this be my last field season? Will the chemo drug I’m taking subdue the cancer into a long-term remission? Will I be well enough to work on the boat next summer? Will I be alive?

A summer of tests and procedures and doctor appointments kept me off the boat until now. A surgery and six-day hospitalization in early August to prevent fluid from building up in my pleural space taught me that certain experiences cut us off entirely from nature—or seem to; I know that as long as we inhabit bodies of flesh, blood, and bone, we are wholly inside nature. But under medical duress, we forget this. Flesh, blood, and bone notwithstanding, a body hooked by way of tubes to suction devices, by way of an IV to a synthetic morphine pump, forgets its organic, animal self. In the hospital, I learned to fear something more than death: existence dependent upon technology, machines, sterile procedures, hoses, pumps, chemicals easing one kind of pain only to feed a psychic other. Existence apart from dirt, mud, muck, wind gust, crow caw, fishy orca breath, bog musk, deer track, rain squall, bear scat. The whole ordeal was a necessary palliation, a stint of suffering to grant me long-term physical freedom. And yet it smacked of the way people too often spend their last days alive, and it really scared me.

Ultimately, what I faced those hospital nights, what I face every day, is death impending—the other side, the passing over into, the big unknown—what poet Joseph Brodsky called his "wild darkness," what poet Christian Wiman calls his "bright abyss." Death may be the wildest thing of all, the least tamed or known phenomenon our consciousness has to reckon with. I don't understand how to meet it, not yet—maybe never. Perhaps (I tell myself), though we deny and abhor and battle death in our society, though we hide it away, it is something so natural, so innate, that when the time comes, our bodies—our whole selves—know exactly how it's done. All I know right now is that something has stepped toward me, some invisible presence in the woods, one I've always sensed and feared and backed away from, called out to in a tentative voice (hello?), trying to scare it off, but which I now must approach. I stumble toward it in dusky conifer light: my own predatory, furred, toothed, clawed angel. (...)

Can I take comfort in the countless births and deaths this earth enacts each moment, the jellyfish, the barnacles, the orcas, the salmon, the fungi, the trees, much less the humans? I woke this morning to the screech of gulls at the stream mouth. We’d anchored in Sleepy Bay for the night, a cove wide open to the strait where we often find orcas. The humpbacked salmon—millions returned this summer, a record run—are all up the creeks now. Before starting our daily search, Craig and I kayaked to shore. As we approached, I watched the gulls, dozens of them, launching from the sloping beach where the stream branches into rivulets and pours into the bay. They wheeled and dipped over our heads, then quickly settled again to their grim task, plucking at faded salmon carcasses scattered all over the stones. The stench of a salmon stream in September is a cloying muck of rot, waste, ammonia. Rocks are smeared with black bear shit, white gull shit. This is in-your-face death, death without palliation or mercy or intervention. At the same time, it is enlivening, feeding energy to gulls, bears, river otters, eagles, and the invisible decomposers who break the carcasses down to just bones and scales, which winter then erases. In spring, I kneel and drink from the same stream’s clear cold water, or plunge my head into it. It is snowmelt and rain filtered through alpine tundra, avalanche chute, muskeg, fen, and bog. It is water newly born, fresh, alive, and oxygenated, rushing over clean stones, numbing my skin.

by Eva Saulitis, Orion |  Read more:
Image: NOAA

[ed. I know Sleepy Bay well. The stream she mentions was reconstructed after being buried under a foot of oil during the Exxon Valdez spill. It's heartening to hear that it's still productive, and that people still enjoy it. It took many battles.]

How Finance Gutted Manufacturing

In May 2013 shareholders voted to break up the Timken Company—a $5 billion Ohio manufacturer of tapered bearings, power transmissions, gears, and specialty steel—into two separate businesses. Their goal was to raise stock prices. The company, which makes complex and difficult products that cannot be easily outsourced, employs 20,000 people in the United States, China, and Romania. Ward “Tim” Timken, Jr., the Timken chairman whose family founded the business more than a hundred years ago, and James Griffith, Timken’s CEO, opposed the move.

The shareholders who supported the breakup hardly looked like the “barbarians at the gate” who forced the 1988 leveraged buyout of RJR Nabisco. This time the attack came from the California State Teachers Retirement System pension fund, the second-largest public pension fund in the United States, together with Relational Investors LLC, an asset management firm. And Tim Timken was not, like the RJR Nabisco CEO, eagerly pursuing the breakup to raise his own take. But beneath these differences are the same financial pressures that have shaped corporate structure for thirty years.  (...)

In the radical downsizing of American manufacturing, changes in corporate structures since the 1980s have been a powerful driver, though not one that is generally recognized. Over the first decade of the twenty-first century, about 5.8 million U.S. manufacturing jobs disappeared. The most frequent explanations for this decline are productivity gains and increased trade with low-wage economies. Both of these factors have been important, but they explain far less of the picture than is usually claimed.  (...)

To better understand the decline of American manufacturing, we need to go back well before the last decade to see how changes in corporate structures made it more difficult to scale up innovation through production to market.

In the 1980s about two dozen large, vertically integrated companies such as Motorola, DuPont, and IBM dominated the American scene. With some notable exceptions (for example, GE), large vertically integrated companies today have pared off activities and become not only smaller but also more narrowly focused on core competencies. Under pressure from financial markets, they have shed activities that investors deemed peripheral—such as Timken's steel.

This process has been fostered by great technological advances in digitization, which have allowed companies to outsource and offshore many of the functions they previously had to carry out themselves. In the 1970s a Hewlett-Packard engineer who designed circuits for a new semiconductor chip had to work together with a technician with a razor blade to cut a mask to place on silicon. Now the engineer can send a complete file of digital instructions over the Internet to a cutting machine. The mask and the chip fabrication can take place in different companies, anywhere in the world. A senior executive of Cisco told MIT researchers:
The separation of R&D and manufacturing has today become possible at a level not even conceivable five years ago. Progress in technology allows us to have people working anywhere collaborating. We no longer need to have them located in clusters or centers of excellence. We now have the ability to sense and monitor what’s going on in our suppliers at any place and any time. Most of this is based on sensors deployed locally, distributed control systems, and new middleware and encryption schemes that allow this to be done securely over the open Internet. . . . In other words, not only do we monitor and control what’s happening inside a factory, but we’re also deeply into the supply chain feeding in and feeding out of the factory.
Digitization and the Internet continue in multiple ways to enable the fragmentation of corporate structures that financial markets demand.

The breakup of vertically integrated corporations and their recomposition into globally linked value chains of designers, researchers, manufacturers, and distributors has had some enormous benefits both for the United States and for developing economies. It has meant lower costs for consumers, new pathways for building businesses, and a chance for poor countries to create new industries and raise incomes.

But the changes in corporate structures that brought about these new opportunities also left big holes in the American industrial ecosystem. These holes are market failures. Functions once performed by big companies are now carried out by no one.

by Suzanne Berger, Boston Review |  Read more:
Image: Timken Company

Saturday, March 15, 2014

Amy Winehouse


[ed. Just wasted 15 minutes trying to change the intro thumbnail to this video. It can be done, just not by me. Sorry.]