Thursday, March 20, 2014

What I Want to Know Is Why You Hate Porn Stars

Here's what I want to know.

It's an open question to everyone, to my ex-boyfriends, neuroscientists, radical feminists, politicians, people on Twitter, my friends, myself.

What is it about porn stars that bothers you so much?

Why do you hate us?

What is it about us that you don't like? (...)

"Food porn" is pictures of food you love eating.

"Wedding porn" is pictures of lavish dresses, table settings, cakes.

"Science porn" is pictures of the natural world or how-things-work charts.

There's skater porn (videos of skateboarders doing daring tricks on stairways and in parking lots), book porn (images of huge libraries and bookstores), fashion porn (photos of outrageously ornamental outfits). There's even Christian missionary porn (pics of missionaries helping the poor).

People love using the word "porn" as long as there's a partner for it. Pair "porn" with something else and it's usually a good thing. A celebration of style and culture. But that word on its own? Well.

by Conner Habib, The Stranger |  Read more:
Image: Paccarik Orue

Parents, Leave Those Kids Alone


It’s still morning, but someone has already started a fire in the tin drum in the corner, perhaps because it’s late fall and wet-cold, or more likely because the kids here love to start fires. Three boys lounge in the only unbroken chairs around it; they are the oldest ones here, so no one complains. One of them turns on the radio—Shaggy is playing (Honey came in and she caught me red-handed, creeping with the girl next door)—as the others feel in their pockets to make sure the candy bars and soda cans are still there. Nearby, a couple of boys are doing mad flips on a stack of filthy mattresses, which makes a fine trampoline. At the other end of the playground, a dozen or so of the younger kids dart in and out of large structures made up of wooden pallets stacked on top of one another. Occasionally a group knocks down a few pallets—just for the fun of it, or to build some new kind of slide or fort or unnamed structure. Come tomorrow and the Land might have a whole new topography.

Other than some walls lit up with graffiti, there are no bright colors, or anything else that belongs to the usual playground landscape: no shiny metal slide topped by a red steering wheel or a tic-tac-toe board; no yellow seesaw with a central ballast to make sure no one falls off; no rubber bucket swing for babies. There is, however, a frayed rope swing that carries you over the creek and deposits you on the other side, if you can make it that far (otherwise it deposits you in the creek). The actual children’s toys (a tiny stuffed elephant, a soiled Winnie the Pooh) are ignored, one facedown in the mud, the other sitting behind a green plastic chair. On this day, the kids seem excited by a walker that was donated by one of the elderly neighbors and is repurposed, at different moments, as a scooter, a jail cell, and a gymnastics bar.

The Land is an “adventure playground,” although that term is maybe a little too reminiscent of theme parks to capture the vibe. In the U.K., such playgrounds arose and became popular in the 1940s, as a result of the efforts of Lady Marjory Allen of Hurtwood, a landscape architect and children’s advocate. Allen was disappointed by what she described in a documentary as “asphalt square” playgrounds with “a few pieces of mechanical equipment.” She wanted to design playgrounds with loose parts that kids could move around and manipulate, to create their own makeshift structures. But more important, she wanted to encourage a “free and permissive atmosphere” with as little adult supervision as possible. The idea was that kids should face what to them seem like “really dangerous risks” and then conquer them alone. That, she said, is what builds self-confidence and courage. (...)

If a 10-year-old lit a fire at an American playground, someone would call the police and the kid would be taken for counseling. At the Land, spontaneous fires are a frequent occurrence. The park is staffed by professionally trained “playworkers,” who keep a close eye on the kids but don’t intervene all that much. Claire Griffiths, the manager of the Land, describes her job as “loitering with intent.” Although the playworkers almost never stop the kids from what they’re doing, before the playground had even opened they’d filled binders with “risk benefits assessments” for nearly every activity. (In the two years since it opened, no one has been injured outside of the occasional scraped knee.) Here’s the list of benefits for fire: “It can be a social experience to sit around with friends, make friends, to sing songs to dance around, to stare at, it can be a co-operative experience where everyone has jobs. It can be something to experiment with, to take risks, to test its properties, its heat, its power, to re-live our evolutionary past.” The risks? “Burns from fire or fire pit” and “children accidentally burning each other with flaming cardboard or wood.” In this case, the benefits win, because a playworker is always nearby, watching for impending accidents but otherwise letting the children figure out lessons about fire on their own.  (...)

Like most parents my age, I have memories of childhood so different from the way my children are growing up that sometimes I think I might be making them up, or at least exaggerating them. I grew up on a block of nearly identical six-story apartment buildings in Queens, New York. In my elementary-school years, my friends and I spent a lot of afternoons playing cops and robbers in two interconnected apartment garages, after we discovered a door between them that we could pry open. Once, when I was about 9, my friend Kim and I “locked” a bunch of younger kids in an imaginary jail behind a low gate. Then Kim and I got hungry and walked over to Alba’s pizzeria a few blocks away and forgot all about them. When we got back an hour later, they were still standing in the same spot. They never hopped over the gate, even though they easily could have; their parents never came looking for them, and no one expected them to. A couple of them were pretty upset, but back then, the code between kids ruled. We’d told them they were in jail, so they stayed in jail until we let them out. A parent’s opinion on their term of incarceration would have been irrelevant.

I used to puzzle over a particular statistic that routinely comes up in articles about time use: even though women work vastly more hours now than they did in the 1970s, mothers—and fathers—of all income levels spend much more time with their children than they used to. This seemed impossible to me until recently, when I began to think about my own life. My mother didn’t work all that much when I was younger, but she didn’t spend vast amounts of time with me, either. She didn’t arrange my playdates or drive me to swimming lessons or introduce me to cool music she liked. On weekdays after school she just expected me to show up for dinner; on weekends I barely saw her at all. I, on the other hand, might easily spend every waking Saturday hour with one if not all three of my children, taking one to a soccer game, the second to a theater program, the third to a friend’s house, or just hanging out with them at home. When my daughter was about 10, my husband suddenly realized that in her whole life, she had probably not spent more than 10 minutes unsupervised by an adult. Not 10 minutes in 10 years.

It’s hard to absorb how much childhood norms have shifted in just one generation. Actions that would have been considered paranoid in the ’70s—walking third-graders to school, forbidding your kid to play ball in the street, going down the slide with your child in your lap—are now routine. In fact, they are the markers of good, responsible parenting. One very thorough study of “children’s independent mobility,” conducted in urban, suburban, and rural neighborhoods in the U.K., shows that in 1971, 80 percent of third-graders walked to school alone. By 1990, that measure had dropped to 9 percent, and now it’s even lower. When you ask parents why they are more protective than their parents were, they might answer that the world is more dangerous than it was when they were growing up. But this isn’t true, or at least not in the way that we think. For example, parents now routinely tell their children never to talk to strangers, even though all available evidence suggests that children have about the same (very slim) chance of being abducted by a stranger as they did a generation ago. Maybe the real question is, how did these fears come to have such a hold over us? And what have our children lost—and gained—as we’ve succumbed to them?

by Hanna Rosin, The Atlantic |  Read more:
Image: Hanna Rosin

Wednesday, March 19, 2014

Ban Tipping

As a person who writes about food and drink for a living, I couldn’t tell you the first thing about Bill Perry or whether the beers he sells are that great. But I can tell you that months before opening The Public Option, a brewpub in Washington DC, the man has already landed in my good graces. That’s because he plans to ban tipping in favor of paying his servers an actual living wage. Bill Perry might just be the most progressive thing going in Washington right now.

I hate tipping.

I hate it because it’s an obligation masquerading as an option, and a bizarre singling-out of one person’s compensation, just dangling there, clumsily, outside the cost of my meal. I hate it for the postprandial math it requires of me. But mostly, I hate tipping because I believe that I would be in a better place – as a diner, and as a human – if pay decisions regarding employees were simply left up to their employers, as is the custom in virtually every other industry, in pretty much every civilized corner of the earth.

Most of you think that you hate to tip, too. The research suggests otherwise. You actually love tipping! You like to feel that you have a voice in how much money your server makes. No matter how the math works out, you persistently view restaurants with voluntary tipping systems as being a better value.

This makes it extremely difficult for restaurants and bars to do away with the tipping system. Which is a shame, really, because tipping deserves to go the way of the zeppelin, pantaloons and Creationism. We should know better by now.

by Elizabeth Gunnison Dunn, The Guardian | Read more:
Image: Francois Lenoir / Reuters

[ed. Enjoy your day.]

Pixel and Dimed


The gig economy (a phrase which encompasses both the related collaborative economy and sharing economy) represents a theory of the future of work that's a viable alternative to laboring for corporate America. Instead of selling your soul to the Man, it goes, you are empowered to work for yourself on a project-by-project basis. One day it might be delivering milk, but the next it's building Ikea furniture, driving someone to the airport, hosting a stranger from out of town in your spare bedroom, or teaching a class on a topic in which you're an expert. The best part? The work will come to you, via apps on your smartphone, making the process of finding work as easy as checking your Twitter feed.

Whatever you do, it will be your choice. Because you are no longer just an employee with set hours and wages working to make someone else rich. In the future, you will be your very own mini-business. (...)

The only way to find out whether the tech world's solution for the poor job market and income inequality had the answer was to put it to the test. For four weeks this winter, spread out over a six-week period to avoid the holidays, I hustled for work in the gig economy. Technically I was undercover, but I used my real name and background, and whenever asked, I readily shared that I was a journalist. (Alas, people were all too willing to accept that a writer was a perfect candidate for alternative sources of income.) I have changed the names of anyone who did not know, when I was speaking to them, that I was working on this story.

I decided that I would accept any gigs I could get my hands on in pursuit of my goal: I would use the slick technology and shimmering promise of the Silicon Valley-created gig economy to beat Capitol Hill's $10.10 per hour proposal. How hard could it be?

by Sarah Kessler, Fast Company |  Read more:
Image: Fast Company

The Great Corporate Cash-Hoarding Crisis

A troubling change is taking place in American business, one that explains why nearly five years after the Great Recession officially ended so many people cannot find work and the economy remains frail.

The biggest American corporations are reporting record profits, official data shows. But the companies are not investing their windfalls in business expansion, which would mean jobs. Nor are they paying profits out to shareholders as dividends.

Instead, the biggest companies are putting profits into the corporate equivalent of a mattress. They are hoarding what just a few years ago would have been considered unimaginable pools of cash and buying risk-free securities that can be instantly converted to cash, which together are known in accounting parlance as liquid assets.

This is just one of many signs that America’s chief executive officers, chief financial officers and corporate boards are behaving fearfully. They are comparable to the slothful servant in the biblical parable of the talents who buries a fortune in the ground rather than invest it. Their caution, aided by government policy, costs all of us. (...)

My analysis of the latest data from the Federal Reserve, the IRS and corporate reports shows that American businesses last year held almost $7.9 trillion of liquid assets worldwide.

Those who follow the news may be surprised, because the figure that’s been mentioned lately has been just under $2 trillion. That figure, which comes from the Federal Reserve, is only for domestic cash. The Fed makes its calculations (from the latest Flow of Funds report) using IRS worldwide data after subtracting offshore money.

My estimate is conservative. I did not count cash due to American companies from their offshore subsidiaries as accounts receivable because the IRS does not provide fine details on these additional trillions of dollars.  (...)

Turning taxes into profit

These facts also demonstrate that America’s CEOs, chief financial officers and corporate boards fear the future because instead of investing their cash they hold onto it. But even if cash hoarding comforts weak-kneed executives, it makes no sense for investors, workers or taxpayers.

Investors do not need a company to hold their extra cash. That’s what savings accounts are for.

Workers need companies to invest in the future, replacing old factories, purchasing new equipment and engaging in other activities that employ people in pursuit of bigger future profits.

Taxpayers also get a terrible deal. When companies siphon cash out of the country it reduces their immediate federal income taxes. Congress spends the money anyway, which requires borrowing. Companies then loan Washington the money they did not pay in taxes, collecting interest.

This means companies that do this turn a profit on their taxes. Consider a company that defers a $1 billion tax for 30 years, using the cash to buy federal debt paying 4 percent interest in an era of 3 percent inflation. The company will collect more than $2.2 billion in interest, while inflation will erode the value of the tax to $401 million, a nearly 60 percent reduction. From the government’s point of view the tax is converted from a source of revenue into an expense.
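
[ed. A quick back-of-the-envelope check of the arithmetic above, written as a small Python sketch. It assumes straightforward annual compounding at the stated 4 percent interest and 3 percent inflation over 30 years; the exact discounting behind the article's $401 million figure isn't spelled out, and this version lands near $410 million, the same roughly 60 percent erosion.]

# Sketch only: reproduces the article's deferred-tax arithmetic under
# simple annual-compounding assumptions.
principal = 1_000_000_000   # deferred tax bill, in dollars
rate = 0.04                 # nominal interest earned on federal debt
inflation = 0.03            # assumed annual inflation
years = 30

interest_collected = principal * ((1 + rate) ** years - 1)
real_value_of_tax = principal / (1 + inflation) ** years

print(f"Interest collected: ${interest_collected / 1e9:.2f} billion")  # about $2.24 billion
print(f"Real value of tax:  ${real_value_of_tax / 1e6:.0f} million")   # about $412 million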

by David Cay Johnston, Aljazeera America |  Read more:
Image: Comstock/Thinkstock

End of the day
via:

John Hammond

Rage Against the Machines

Anybody who grew up in America can tell you it’s a pretty violent country, and every consumer knows that our mass culture was reflecting that fact long before it began spewing the stuff in videogames. So on the surface, it seems strange that special powers should be attributed to games. What gives?  (...)

But if there is something dangerous about videogames now, it’s not the specter of players transforming into drooling sociopaths by enacting depraved fantasies. Instead of forensically dissecting the content packaged in games, we should look closely at the system of design and distribution that’s led them out of teen bedrooms and into the hands of a broader audience via computers and smartphones. It’s not Doom or Mortal Kombat or Death Race we should fear, in other words; it’s Candy Crush Saga, Angry Birds, and FarmVille.

To understand what is really distinctive about videogames, it helps to see how their operation runs like a racket: how the experience is designed to offer players a potentially toxic brew of guilty pleasure spiced with a kind of extortion and how they profit by stoking addiction. We might remember why we looked sideways at machine-enabled gaming in the first place—because it was a mode of play that seemed to normalize corrupt business practices in the guise of entertainment. Because the industry often seems like just another medium for swindlers. (...)

The new model of videogame delivery is “free-to-play” (F2P). At first it was limited to massively multiplayer online games (MMOs) like Neopets and MapleStory, which primarily relied on kids pestering their parents to fund their accounts so that they could buy in-game goods. These games always offer the first taste for free, and then ratchet up the attraction of paying for a more robust or customized gaming environment. In 2007, Facebook released a platform for developers to make free-to-play apps and games run within the social network’s ecosystem. Then came the iPhone, the Apple App Store, and all the copycats and spinoffs that it inspired. By 2010, free-to-play had become the norm for new games, particularly those being released for play online, via downloads, on social networks, or on smartphones—a category that is now quickly overtaking disc-based games. The point is to sell, sell, sell; the games give users opportunities to purchase virtual items or add-ons like clothing, hairstyles, or pets for their in-game characters.

In 2009, Facebook gaming startup darling Zynga launched a free-to-play game called FarmVille that went on to reach more than 80 million players. It offered a core experience for free, with add-ons and features available to those with enough “farm cash” scrip. Players can purchase farm cash through real-money transactions, earn it through gameplay accomplishments, or receive it as a reward for watching video ads or signing up for unrelated services that pay referral fees to game operators. Former Zynga CEO Mark Pincus sought out every possible method for increasing revenues. “I knew I needed revenues, right fucking now,” Pincus told attendees of a Berkeley startup mixer in 2009. “I did every horrible thing in the book just to get revenues right away.”

Every horrible thing in the book included designing a highly manipulative gameplay environment, much like the ones doled out by slot machines and coin-ops. FarmVille users had to either stop after they expended their in-game “energy” or pay up, in which case they could immediately continue. The in-game activities were designed so that they took much longer than any single play session could reasonably last, requiring players to return at prescheduled intervals to complete those tasks or else risk losing work they’d previously done—and possibly spent cash money to pursue. Players were prodded to spread notices and demands among their Facebook friends in exchange for items or favors that were otherwise inaccessible. As with slots and coin-ops, the occasional calculated anomaly in a free-to-play game doesn’t alter the overall results of the system, but only recharges the desire for another surprise, another epiphany; meanwhile, the expert player and the jackpot winner are exceptions that prove the rule.
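
[ed. For readers who haven't played these games, the "energy" gate described above works roughly like the toy sketch below. This is illustrative Python pseudocode, not actual FarmVille code; the names, numbers, and price are invented.]

import time

ENERGY_MAX = 5          # free actions per session
REFILL_SECONDS = 3600   # energy comes back on a schedule...
ENERGY_PRICE = 0.99     # ...or pay to continue immediately

class Player:
    def __init__(self):
        self.energy = ENERGY_MAX
        self.last_refill = time.time()
        self.spent = 0.0

    def try_action(self, pay_to_continue=False):
        # Energy refills only after the prescheduled interval has elapsed,
        # which is what pulls players back at set times.
        if time.time() - self.last_refill >= REFILL_SECONDS:
            self.energy = ENERGY_MAX
            self.last_refill = time.time()
        if self.energy > 0:
            self.energy -= 1
            return "action performed"
        if pay_to_continue:
            self.spent += ENERGY_PRICE
            return "action performed (paid)"
        # The fork described above: stop playing, or pay up.
        return "out of energy -- come back later or buy more"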

FarmVille’s mimicry of the economically obsolete production unit of the family farm, in short, proved all too apt—like the hordes of small farmers sucked into tenantry and debt peonage during the first wave of industrialization in America, the freeholders on FarmVille’s vast virtual acreage soon learned that the game’s largely concealed infrastructure was where all the real fee-gouging action was occurring. Even those who kept their wallets tucked away in their pockets and purses would pay in other ways—by spreading “viral” invitations to recruit new farmers, for example. FarmVille users might have been having fun in the moment, but before long, they would look up to discover they owed their souls to the company store.

by Ian Bogost, Baffler |  Read more:
Image: Micael Duffy

Tuesday, March 18, 2014

The Human Heart of Sacred Art


There is a passage in Marilynne Robinson's novel Gilead, in which the main character John Ames, a pastor, is walking to his church, and comes across a young couple ahead of him in the street:
The sun had come up brilliantly after a heavy rain, and the trees were glistening and very wet. On some impulse, plain exuberance, I suppose, the fellow jumped up and caught hold of a branch, and a storm of luminous water came pouring down on the two of them, and they laughed and took off running, the girl sweeping water off her hair and her dress as if she were a little bit disgusted, but she wasn't. It was a beautiful thing to see, like something from a myth. I don't know why I thought of that now, except perhaps because it is easy to believe in such moments that water was made primarily for blessing, and only secondarily for growing vegetables or doing the wash. I wish I had paid more attention to it.
It is a wonderful, luminous passage, typical of Robinson's ability to discover the poetic even in the most mundane. Robinson is a Christian, indeed a Calvinist (though, improbably, she tends to see John Calvin more as a kind of Erasmus-like humanist than as the firebrand preacher who railed against the human race as constituting a "teeming horde of infamies"), whose life and writing is suffused with religious faith. Robinson's fiction possesses an austere beauty, "a Protestant bareness" as the critic James Wood has put it,[1] that recalls both the English poet George Herbert and "the American religious spirit that produced Congregationalism and nineteenth-century Transcendentalism and those bareback religious riders Emerson, Thoreau and Melville".

There is in Robinson's writing a spiritual force that clearly springs from her religious faith. It is nevertheless a spiritual force that transcends the merely religious. "There is a grandeur in this vision of life", Darwin wrote in The Origin of Species, expressing his awe at nature's creation of "endless forms most beautiful and most wonderful". The springs of Robinson's awe are different from those of Darwin's. And yet she too finds grandeur in all that she touches, whether in the simple details of everyday life or in the great moral dilemmas of human existence. Robinson would probably describe it as the uncovering of a divine presence in the world. But it is also the uncovering of something very human, a celebration of our ability to find the poetic and the transcendent, not through invoking the divine, but as a replacement for the divine.

One does not, of course, have to be religious to appreciate religiously inspired art. One can, as a non-believer, listen to Mozart's Requiem or Nusrat Fateh Ali Khan's qawwali, look upon Michelangelo's Adam or the patterns of the Sheikh Lotfollah Mosque in Isfahan in Iran, read Dante's Divine Comedy or Lao Zi's Daode Jing, and be drawn into a world of awe and wonder. Many believers may question whether non-believers can truly comprehend the meaning of religiously inspired art. We can, however, turn this round and ask a different question. What is it that is "sacred" about sacred art? For religious believers, the sacred, whether in art or otherwise, is clearly that which is associated with the holy and the divine. The composer John Tavener, who died at the end of last year, was one of the great modern creators of sacred music. A profoundly religious man – he was a convert to Russian Orthodoxy – Tavener suffused much of his music with his faith and sense of mysticism. Historically, and in the minds of most people today, the sacred in art is, as it was with Tavener, inextricably linked with religious faith.

There is, however, another sense in which we can think about the sacred in art. Not so much as an expression of the divine but, paradoxically perhaps, more an exploration of what it means to be human; what it is to be human not in the here and now, not in our immediacy, nor merely in our physicality, but in a more transcendental sense. It is a sense that is often difficult to capture in a purely propositional form, but which we seek to grasp through art or music or poetry. Transcendence does not, however, necessarily have to be understood in a religious fashion – that is, solely in relation to some concept of the divine. It is rather a recognition that our humanness is invested not simply in our existence as individuals or as physical beings, but also in our collective existence as social beings and in our ability, as social beings, to rise above our individual physical selves and to see ourselves as part of a larger project, to cast upon the world, and upon human life, a meaning or purpose that exists only because we as human beings create it.

by Kenan Malik, Eurozine |  Read more:
Image: Richard Pluck. Source: Flickr

How "Revolution" Became an Adjective

    In case of rain, the revolution will take place in the hall.
    -- Erwin Chargaff

For the last several years, the word “revolution” has been hanging around backstage on the national television talk-show circuit waiting for somebody, anybody -- visionary poet, unemployed automobile worker, late-night comedian -- to cue its appearance on camera. I picture the word sitting alone in the green room with the bottled water and a banana, armed with press clippings of its once-upon-a-time star turns in America’s political theater (tie-dyed and brassiere-less on the barricades of the 1960s countercultural insurrection, short-haired and seersucker smug behind the desks of the 1980s Reagan Risorgimento), asking itself why it’s not being brought into the segment between the German and the Japanese car commercials.

Surely even the teleprompter must know that it is the beast in the belly of the news reports, more of them every day in print and en blog, about income inequality, class conflict, the American police state. Why then does nobody have any use for it except in the form of the adjective, revolutionary, unveiling a new cellphone app or a new shade of lipstick?

I can think of several reasons, among them the cautionary tale told by the round-the-clock media footage of dead revolutionaries in Syria, Egypt, and Tunisia, also the certain knowledge that anything anybody says (on camera or off, to a hotel clerk, a Facebook friend, or an ATM) will be monitored for security purposes. Even so, the stockpiling of so much careful silence among people who like to imagine themselves on the same page with Patrick Henry -- “Give me liberty, or give me death” -- raises the question as to what has become of the American spirit of rebellion. Where have all the flowers gone, and what, if anything, is anybody willing to risk in the struggle for “Freedom Now,” “Power to the People,” “Change We Can Believe In”?

My guess is next to nothing that can’t be written off as a business expense or qualified as a tax deduction. Not in America at least, but maybe, with a better publicist and 50% of the foreign rights, somewhere east of the sun or west of the moon. (...)

I inherited the instinct as a true-born American bred to the worship of both machinery and money; an appreciation of its force I acquired during a lifetime of reading newspaper reports of political uprisings in the provinces of the bourgeois world state -- in China, Israel, and Greece in the 1940s; in the 1950s those in Hungary, Cuba, Guatemala, Algeria, Egypt, Bolivia, and Iran; in the 1960s in Vietnam, France, America, Ethiopia, and the Congo; in the 1970s and 1980s in El Salvador, Poland, Nicaragua, Kenya, Argentina, Chile, Indonesia, Czechoslovakia, Turkey, Jordan, Cambodia, again in Iran; over the last 24 years in Russia, Venezuela, Lebanon, Croatia, Bosnia, Libya, Tunisia, Syria, Ukraine, Iraq, Somalia, South Africa, Romania, Sudan, again in Algeria and Egypt.

The plot line tends to repeat itself -- first the new flag on the roof of the palace, rapturous crowds in the streets waving banners; then searches, requisitions, massacres, severed heads raised on pikes; soon afterward the transfer of power from one police force to another police force, the latter more repressive than the former (darker uniforms, heavier motorcycles) because more frightened of the social and economic upheavals they can neither foresee nor control.

All the shiftings of political power produced changes within the committees managing regional budgets and social contracts on behalf of the bourgeois imperium. None of them dethroned or defenestrated Adams’ dynamo or threw off the chains of Marx’s cash nexus. That they could possibly do so is the “romantic idea” that Albert Camus, correspondent for the French Resistance newspaper Combat during and after World War II, sees in 1946 as having been “consigned to fantasy by advances in the technology of weaponry.”

The French philosopher Simone Weil draws a corollary lesson from her acquaintance with the Civil War in Spain, and from her study of the communist Sturm und Drang in Russia, Germany, and France subsequent to World War I. “One magic word today seems capable of compensating for all sufferings, resolving all anxieties, avenging the past, curing present ills, summing up all future possibilities: that word is revolution... This word has aroused such pure acts of devotion, has repeatedly caused such generous blood to be shed, has constituted for so many unfortunates the only source of courage for living, that it is almost a sacrilege to investigate it; all this, however, does not prevent it from possibly being meaningless.”

by Lewis Lapham, Tom Dispatch |  Read more:
Image: via:

On a Strange Roof, Thinking of Home

In 2009 The Oxford American polled 134 Southern writers and academics and put together a list of the greatest Southern novels of all time based on their responses. All save one, The Adventures of Huckleberry Finn, were published between 1929 and 1960. What we think of when we think of “Southern fiction” exists now almost entirely within the boundaries of the two generations of writers that occupied that space. Asked to name great American authors, we’ll give answers that span time from Hawthorne and Melville to Whitman to DeLillo. Ask for great Southern ones and you’ll more than likely get a name from the Southern Renaissance: William Faulkner, Harper Lee, Flannery O’Connor, Walker Percy, Eudora Welty, Thomas Wolfe—all of them sandwiched into the same couple of post-Agrarian decades.

The two waves of Southern writers that crested in the wake of the Agrarian-Mencken fight, first in the 1930s and ’40s, and then in the ’50s and ’60s, didn’t build upon the existing tradition of Southern letters. They weren’t conceived of as new additions to the canon, but as an entirely new canon unto themselves, supplanting the old. They remade the popular notion of Southern literary culture, obscuring predecessors who had, in their time, seemed immortal.

“Southern,” as a descriptor of literature, is immediately familiar, possessed of a thrilling, evocative, almost ontological power. It is a primary descriptor, and alone among American literary geographies in that respect. Faulkner’s work is essentially “Southern” in the same way that Thomas Pynchon’s is essentially “postmodern,” but not, you’ll note, “Northeastern.” To displace Faulkner from his South would be to remove an essential quality; he would functionally cease to exist in a recognizable way.

It applies to the rest of the list, too (with O’Connor the possible exception, being inoculated somewhat by her Catholicism). It is impossible to imagine these writers divorced from the South. This is unusual, and a product of the unusual circumstances that gave rise to them. Faulkner, Lee, Percy, and Welty were no more Southern than Edgar Allan Poe or Sidney Lanier or Kate Chopin, and yet their writing, in the context of the South at that time, definitively was. There’s a universal appeal to their work, to be certain, but it’s also very much a regional literature, one grappling with a very specific set of circumstances in a fixed time, and correspondingly, one with very specific interests: the wearing away of the old Southern social structures, the economic uncertainty inherent in family farming, and overt, systematized racism (which, while undoubtedly still present in the South today, is very much changed from what it was).  (...)

Put a character in a tobacco field and give them a shotgun and an accent and it will evoke, without fail, a sense of the South; this is true. If they pop off with a “Hey there, y’all,” it will sound fitting, correct, like the accordion bleats that mark transitions between stories in a public radio program; useful in pushing you toward a desired emotional state, and fun to listen to when done well. But, on the other hand, it doesn’t mean anything. If this is, in fact, “Southern fiction,” then it is becoming as stale as it was a century ago—updated only in that, instead of regurgitating the Lost Cause ethos, it is now Faulkner’s South that’s subjected to the regional nostalgic impulse, a double reverberation.

There is nothing wrong with these writers because of this. It’s not that they’ve failed somehow to keep up, or are stupefying readers, or anything of the sort. It’s that this kind of writing is no longer reflective of the South—or, it reflects a South that is no longer. We wouldn’t think of someone writing whaling novels as quintessentially “New England” anymore, either. The South isn’t so homogenous a culture as it once was, and the societal tropes that Faulkner and Welty and even Barry Hannah grew up with and explored in their fiction are, in large part, gone. The rise of industrial-scale agribusiness, rapid suburbanization, the death of traditional industries like textiles, the corresponding growth of high-tech industries, a major increase in the Hispanic population: all these things and many more have contributed to a wildly different South than the one summoned in what we casually call “Southern writing.”

by Ed Winstead, Guernica | Read more:
Image: Alec Soth

Jonathan Curry
via:

DHS Wants to Track You Everywhere You Drive

[ed. See also: this.]

Immigration and Customs Enforcement wants to firm up its relationship with Vigilant Solutions, the most dominant actor in the increasingly powerful license plate reader industry, to enable agents to more efficiently track down people they want to deport. Vigilant maintains a national database, called the National Vehicle Location Service, containing information revealing the sensitive driving histories of millions of law-abiding people. According to the company, the database currently contains nearly 2 billion discrete records of our movements, and grows by almost 100 million records per month.

In a widely reported but largely misunderstood solicitation for bids, DHS announced that it wants access to a nationwide license plate reader database, along with technology enabling agents to capture and view data from the field, using their smartphones. Reading the solicitation, I was struck by the fact that it almost perfectly describes Vigilant’s system. It’s almost as if the solicitation was written by Vigilant, it so comprehensively sketches out the contours of the corporation’s offerings.

Lots of news reports are misinterpreting DHS’ solicitation, implying that the agency wants to either build its own database or ask a contractor to build one. The department doesn’t intend to build its own license plate reader database, and it isn’t asking corporations to build one. Instead, it is seeking bids from private companies that already maintain national license plate reader databases. And because it’s the only company in the country that offers precisely the kind of services that DHS wants, there’s about a 99.9 percent chance that this contract will be awarded to Vigilant Solutions. (Mark my words.)

According to documents obtained by the ACLU, ICE agents and other branches of DHS have already been tapping into Vigilant’s data sets for years. So why did the agency decide to go public with this solicitation now? Your guess is as good as mine, but it may simply be a formality so that the agency can pretend as if there was actually robust competition in the bidding process. (As recent reporting about the FBI’s secretive surveillance acquisitions has shown, no-bid contracts for spy gear tend to raise eyebrows when they’re finally discovered.)

What’s the problem with a nationwide license plate tracking database, anyway? If you aren't the subject of a criminal investigation, the government shouldn't be keeping tabs on when you go to the grocery store, your friend's house, the abortion clinic, the antiwar protest, or the mosque. In a democratic society, we should know almost everything about what the government's doing, and it should know very little to nothing about us, unless it has a good reason to believe we're up to no good and shows that evidence to a judge. Unfortunately, that basic framework for an open, democratic society has been turned on its head. Now the government routinely collects vast troves of data about hundreds of millions of innocent people, casting everyone as a potential suspect until proven innocent. That's unacceptable.

by Admin, SOS |  Read more:
Image: uncredited

Fast Fashion


Over the past 15 years, the fashion industry has undergone a profound and baffling transformation. What used to be a stable three-month production cycle—the time it takes to design, manufacture, and distribute clothing to stores, in an extraordinary globe-spanning process—has collapsed, across much of the industry, to just two weeks. The “on-trend” clothes that were, until recently, only accessible to well-heeled, slender urban fashionistas, are now available to a dramatically broader audience, at bargain prices. A design idea for a blouse, cribbed from a runway show in Paris, can make it onto the racks in Wichita in a wide range of sizes within the space of a month.

Popularly known as “fast fashion,” this trend has inspired a great deal of media attention, but not many satisfying explanations as to how this huge shift came about, especially in the United States, and why it happened when it did. Some accounts attribute the new normal to top-down “process innovations” at big companies like Inditex, the parent company of Zara and the world’s largest—but hardly most typical—fast-fashion retailer. And at times, popular writing has simply lumped fast fashion in with the generally sped-up pace of life in the digital age, as if complex industrial systems were as fluid as our social media habits.

So the questions remain: Who is designing and manufacturing these garments in the U.S.? How are so many different suppliers producing such large volumes of clothes so quickly, executing coordinated feats of design, production, and logistics in a matter of days?

For my own part, I went looking for the answers in church.

Specifically, I paid a visit this past summer to the Ttokamsa Home Mission Church, a large, gray, industrial box of a building near a highway on the edge of Echo Park, a residential neighborhood in East Los Angeles. A well-known local institution among Korean Americans, the church is the spiritual home of the Chang family—the owners of Forever 21, the largest fast-fashion retailer based in the U.S. (Look on the bottom of any canary-yellow Forever 21 shopping bag and you’ll find the words “John 3:16.”)

With more than 630 locations worldwide, the Changs’ retail empire employs more than 35,000 people and made $3.7 billion in revenue in 2012. But in the pews at Ttokamsa, the Changs are in good company: The vast majority of their fellow parishioners are Korean families that also make their livelihoods in fast fashion.

As an anthropologist, I have been coming to Los Angeles with the photographer Lauren Lancaster for the past two years to study the hundreds of Korean families who have, over the last decade, transformed the city’s garment district into a central hub for fast fashion in the Americas. These families make their living by designing clothes, organizing the factory labor that will cut and sew them in places like China and Vietnam, and selling them wholesale to many of the most famous retailers in the U.S.—including Forever 21, Urban Outfitters, T.J. Maxx, Anthropologie, and Nordstrom.

I first became curious about the garment sector in Los Angeles after noticing that an increasingly large proportion of students at Parsons, the New York design school where I teach, were second-generation children of Korean immigrants from Southern California. Many of them were studying fashion marketing and design so they could return to Los Angeles to help scale up their parents’ businesses. These students and their contemporaries were, I came to understand, the driving force behind U.S. fast fashion—a phenomenon whose rise is less a story about corporate innovation than one about an immigrant subculture coming of age.

by Christina Moon, Pacific Standard |  Read more:
Image: Lauren Lancaster

Arthur Meyerson, Color of Light
via:

Monday, March 17, 2014

A Scientific Breakthrough Lets Us See to the Beginning of Time

At rare moments in scientific history, a new window on the universe opens up that changes everything. Today was quite possibly such a day. At a press conference on Monday morning at the Harvard-Smithsonian Center for Astrophysics, a team of scientists operating a sensitive microwave telescope at the South Pole announced the discovery of polarization distortions in the Cosmic Microwave Background Radiation, which is the observable afterglow of the Big Bang. The distortions appear to be due to the presence of gravitational waves, which would date back to almost the beginning of time.

This observation, made possible by the fact that gravitational waves can travel unimpeded through the universe, takes us to 10⁻³⁵ seconds after the Big Bang. By comparison, the Cosmic Microwave Background—which, until today, was the earliest direct signal we had of the Big Bang—was created when the universe was already three hundred thousand years old.

If the discovery announced this morning holds up, it will allow us to peer back to the very beginning of time—a million billion billion billion billion billion times closer to the Big Bang than any previous direct observation—and will allow us to explore the fundamental forces of nature on a scale ten thousand billion times smaller than can be probed at the Large Hadron Collider, the world’s largest particle accelerator. Moreover, it will allow us to test some of the most ambitious theoretical speculations about the origin of our observed universe that have ever been made by humans—speculations that may first appear to verge on metaphysics. It might seem like an esoteric finding, so far removed from everyday life as to be of almost no interest. But, if confirmed, it will have increased our empirical window on the origins of the universe by a margin comparable to the amount it has grown in all of the rest of human history. Where this may lead, no one knows, but it should be cause for great excitement.

Even for someone who has been thinking about these possibilities for the past thirty-five years, the truth can sometimes seem stranger than fiction. In 1979, a young particle physicist named Alan Guth proposed what seemed like an outrageous possibility, which he called Inflation: that new physics, involving a large extrapolation from what could then be observed, might imply that the universe expanded in size by over thirty orders of magnitude in a tiny fraction of a second after the Big Bang, increasing in size by a greater amount in that instant than it has in the fourteen billion years since.

by Lawrence Krauss, New Yorker |  Read more:
Image: Steffen Richter/Harvard University