Thursday, March 20, 2014
Dear Abby (Polly) on Steroids
Dear Polly,

I was hooked and I said yes, yes I will be your girlfriend. Then some shit started…
He never complimented me on any of my physical traits, yet every weekend we hung out, he would somehow manage to tell me that he wanted me to have larger breasts like so-and-so, get more toned legs like this person, grow your hair long and put on some eye shadow…. A lot of similar things were said over and over for probably the first six months of our relationship. I think I didn't confront him for so long because I really liked him otherwise. I was also only 20 at the time and really wanted this relationship to work.
I was incredibly hurt every time but I held my disappointment and devastation inside. Then one day, I was mad enough to confront him. I told him that what he was saying was downright hurtful and that he shouldn't be with me if all he can think of is improving me and making me more like other women he probably desired.
He was completely shocked at my confrontation as if he didn't realize he was hurting me. Right after that he never compared me to anyone again, he even started complimenting me and saying that I was the most beautiful woman in the world to him.
I usually tell him to drop it with the comments because I don't believe him. It annoys the hell out of me that he always tries to overcompensate.
You might be wondering why I stay with him? Well, he's WONDERFUL. He does dishes, takes out garbage, is kind and thoughtful. He always wants to buy me anything and everything I want, even though he can't cause we're not rich, but he always tries his best. He listens to me and is interested in my life. He supports my goals and dreams and always believes in me when other people do not. He is faithful and compassionate. It's difficult to leave such a lovely package.
My theory for his actions at the beginning of the relationship is that he was just being completely honest, without any thought for consequence. On the very downside, his ridiculously honest comments at the beginning of the relationship have given my self-esteem a beating. Sometimes during sex I feel inadequate cause I know I don't look a certain way.
BUT… why oh why did he say such cruel things and then try to over-compensate??? It is very very annoying.
AND HERE'S THE TWIST. The other night he decided to compliment me. I got mad and started saying he has been lying all these years. And then… he admitted that he had been!
He said that I am not the most beautiful woman to him. He was just trying to make me feel better and mend the wound.
WHAT THE FUCK. Why go through all the trouble of lying just to tell the truth? Sigh. I am pretty relieved to finally hear the truth. Because I always knew.
Now I don't know what to do, I've been largely ignoring this issue, sweeping it under the rug.
I would love some straightforward advice. I want to know if it's worth it to stay with a man who didn't really want me for who I was physically. I know relationships are not based on physical attraction. But do you think his actions have been unreasonable? I feel hurt and kind of ugly. Should I completely forgive him and keep focusing on the positives of our relationship?
He has since said, "Physically you are an okay, pretty girl, but that's it. Many girls are much hotter than you." I know this is true. I'm glad he can be honest again. But I don't know if I can get over the fact that he lied for sooooo long.
I really don't want you to tell me to follow my heart, and that it's up to me to choose what I do. (Because that's what people have told me.) Please tell me what to do… OR tell me what you would do if you were in my situation now.
Thanks in advance.
Not Hot Enough
by Heather Havrilesky, The Awl | Read more:
Image: Dan DeBold
What I Want to Know Is Why You Hate Porn Stars
Here's what I want to know.
It's an open question to everyone, to my ex-boyfriends, neuroscientists, radical feminists, politicians, people on Twitter, my friends, myself.
What is it about porn stars that bothers you so much?
Why do you hate us?
What is it about us that you don't like? (...)
"Food porn" is pictures of food you love eating.
"Wedding porn" is pictures of lavish dresses, table settings, cakes.
"Science porn" is pictures of the natural world or how-things-work charts.
There's skater porn (videos of skateboarders doing daring tricks on stairways and in parking lots), book porn (images of huge libraries and bookstores), fashion porn (photos of outrageously ornamental outfits). There's even Christian missionary porn (pics of missionaries helping the poor).
People love using the word "porn" as long as there's a partner for it. Pair "porn" with something else and it's usually a good thing. A celebration of style and culture. But that word on its own? Well.
by Conner Habib, The Stranger | Read more:
Image: Paccarik Orue
Parents, Leave Those Kids Alone
It’s still morning, but someone has already started a fire in the tin drum in the corner, perhaps because it’s late fall and wet-cold, or more likely because the kids here love to start fires. Three boys lounge in the only unbroken chairs around it; they are the oldest ones here, so no one complains. One of them turns on the radio—Shaggy is playing (Honey came in and she caught me red-handed, creeping with the girl next door)—as the others feel in their pockets to make sure the candy bars and soda cans are still there. Nearby, a couple of boys are doing mad flips on a stack of filthy mattresses, which makes a fine trampoline. At the other end of the playground, a dozen or so of the younger kids dart in and out of large structures made up of wooden pallets stacked on top of one another. Occasionally a group knocks down a few pallets—just for the fun of it, or to build some new kind of slide or fort or unnamed structure. Come tomorrow and the Land might have a whole new topography.
Other than some walls lit up with graffiti, there are no bright colors, or anything else that belongs to the usual playground landscape: no shiny metal slide topped by a red steering wheel or a tic-tac-toe board; no yellow seesaw with a central ballast to make sure no one falls off; no rubber bucket swing for babies. There is, however, a frayed rope swing that carries you over the creek and deposits you on the other side, if you can make it that far (otherwise it deposits you in the creek). The actual children’s toys (a tiny stuffed elephant, a soiled Winnie the Pooh) are ignored, one facedown in the mud, the other sitting behind a green plastic chair. On this day, the kids seem excited by a walker that was donated by one of the elderly neighbors and is repurposed, at different moments, as a scooter, a jail cell, and a gymnastics bar.
The Land is an “adventure playground,” although that term is maybe a little too reminiscent of theme parks to capture the vibe. In the U.K., such playgrounds arose and became popular in the 1940s, as a result of the efforts of Lady Marjory Allen of Hurtwood, a landscape architect and children’s advocate. Allen was disappointed by what she described in a documentary as “asphalt square” playgrounds with “a few pieces of mechanical equipment.” She wanted to design playgrounds with loose parts that kids could move around and manipulate, to create their own makeshift structures. But more important, she wanted to encourage a “free and permissive atmosphere” with as little adult supervision as possible. The idea was that kids should face what to them seem like “really dangerous risks” and then conquer them alone. That, she said, is what builds self-confidence and courage. (...)
If a 10-year-old lit a fire at an American playground, someone would call the police and the kid would be taken for counseling. At the Land, spontaneous fires are a frequent occurrence. The park is staffed by professionally trained “playworkers,” who keep a close eye on the kids but don’t intervene all that much. Claire Griffiths, the manager of the Land, describes her job as “loitering with intent.” Although the playworkers almost never stop the kids from what they’re doing, before the playground had even opened they’d filled binders with “risk benefits assessments” for nearly every activity. (In the two years since it opened, no one has been injured outside of the occasional scraped knee.) Here’s the list of benefits for fire: “It can be a social experience to sit around with friends, make friends, to sing songs to dance around, to stare at, it can be a co-operative experience where everyone has jobs. It can be something to experiment with, to take risks, to test its properties, its heat, its power, to re-live our evolutionary past.” The risks? “Burns from fire or fire pit” and “children accidentally burning each other with flaming cardboard or wood.” In this case, the benefits win, because a playworker is always nearby, watching for impending accidents but otherwise letting the children figure out lessons about fire on their own. (...)
Like most parents my age, I have memories of childhood so different from the way my children are growing up that sometimes I think I might be making them up, or at least exaggerating them. I grew up on a block of nearly identical six-story apartment buildings in Queens, New York. In my elementary-school years, my friends and I spent a lot of afternoons playing cops and robbers in two interconnected apartment garages, after we discovered a door between them that we could pry open. Once, when I was about 9, my friend Kim and I “locked” a bunch of younger kids in an imaginary jail behind a low gate. Then Kim and I got hungry and walked over to Alba’s pizzeria a few blocks away and forgot all about them. When we got back an hour later, they were still standing in the same spot. They never hopped over the gate, even though they easily could have; their parents never came looking for them, and no one expected them to. A couple of them were pretty upset, but back then, the code between kids ruled. We’d told them they were in jail, so they stayed in jail until we let them out. A parent’s opinion on their term of incarceration would have been irrelevant.
I used to puzzle over a particular statistic that routinely comes up in articles about time use: even though women work vastly more hours now than they did in the 1970s, mothers—and fathers—of all income levels spend much more time with their children than they used to. This seemed impossible to me until recently, when I began to think about my own life. My mother didn’t work all that much when I was younger, but she didn’t spend vast amounts of time with me, either. She didn’t arrange my playdates or drive me to swimming lessons or introduce me to cool music she liked. On weekdays after school she just expected me to show up for dinner; on weekends I barely saw her at all. I, on the other hand, might easily spend every waking Saturday hour with one if not all three of my children, taking one to a soccer game, the second to a theater program, the third to a friend’s house, or just hanging out with them at home. When my daughter was about 10, my husband suddenly realized that in her whole life, she had probably not spent more than 10 minutes unsupervised by an adult. Not 10 minutes in 10 years.
It’s hard to absorb how much childhood norms have shifted in just one generation. Actions that would have been considered paranoid in the ’70s—walking third-graders to school, forbidding your kid to play ball in the street, going down the slide with your child in your lap—are now routine. In fact, they are the markers of good, responsible parenting. One very thorough study of “children’s independent mobility,” conducted in urban, suburban, and rural neighborhoods in the U.K., shows that in 1971, 80 percent of third-graders walked to school alone. By 1990, that measure had dropped to 9 percent, and now it’s even lower. When you ask parents why they are more protective than their parents were, they might answer that the world is more dangerous than it was when they were growing up. But this isn’t true, or at least not in the way that we think. For example, parents now routinely tell their children never to talk to strangers, even though all available evidence suggests that children have about the same (very slim) chance of being abducted by a stranger as they did a generation ago. Maybe the real question is, how did these fears come to have such a hold over us? And what have our children lost—and gained—as we’ve succumbed to them?
by Hanna Rosin, The Atlantic | Read more:
Image: Hanna Rosin
Wednesday, March 19, 2014
Ban Tipping
As a person who writes about food and drink for a living, I couldn’t tell you the first thing about Bill Perry or whether the beers he sells are that great. But I can tell you that months before opening The Public Option, a brewpub in Washington DC, the man has already landed in my good graces. That’s because he plans to ban tipping in favor of paying his servers an actual living wage. Bill Perry might just be the most progressive thing going in Washington right now.

I hate tipping.
I hate it because it’s an obligation masquerading as an option, and a bizarre singling-out of one person’s compensation, just dangling there, clumsily, outside the cost of my meal. I hate it for the postprandial math it requires of me. But mostly, I hate tipping because I believe that I would be in a better place – as a diner, and as a human – if pay decisions regarding employees were simply left up to their employers, as is the custom in virtually every other industry, in pretty much every civilized corner of the earth.
Most of you think that you hate to tip, too. The research suggests otherwise. You actually love tipping! You like to feel that you have a voice in how much money your server makes. No matter how the math works out, you persistently view restaurants with voluntary tipping systems as being a better value.
This makes it extremely difficult for restaurants and bars to do away with the tipping system. Which is a shame, really, because tipping deserves to go the way of the zeppelin, pantaloons and Creationism. We should know better by now.
by Elizabeth Gunnison Dunn, The Guardian | Read more:
Image: Francois Lenoir / Reuters

Pixel and Dimed
Whatever you do, it will be your choice. Because you are no longer just an employee with set hours and wages working to make someone else rich. In the future, you will be your very own mini-business. (...)
The only way to find out whether the tech world's solution to the poor job market and income inequality lived up to its promise was to put it to the test. For four weeks this winter, spread out over a six-week period to avoid the holidays, I hustled for work in the gig economy. Technically I was undercover, but I used my real name and background, and whenever asked, I readily shared that I was a journalist. (Alas, people were all too willing to accept that a writer was a perfect candidate for alternative sources of income.) I have changed the names of anyone who did not know, when I was speaking to them, that I was working on this story.
I decided that I would accept any gigs I could get my hands on in pursuit of my goal: I would use the slick technology and shimmering promise of the Silicon Valley-created gig economy to beat Capitol Hill's $10.10 per hour proposal. How hard could it be?
by Sarah Kessler, Fast Company | Read more:
Image: Fast Company
The Great Corporate Cash-Hoarding Crisis
A troubling change is taking place in American business, one that explains why nearly five years after the Great Recession officially ended so many people cannot find work and the economy remains frail.
The biggest American corporations are reporting record profits, official data shows. But the companies are not investing their windfalls in business expansion, which would mean jobs. Nor are they paying profits out to shareholders as dividends.
Instead, the biggest companies are putting profits into the corporate equivalent of a mattress. They are hoarding what just a few years ago would have been considered unimaginable pools of cash and buying risk-free securities that can be instantly converted to cash, which together are known in accounting parlance as liquid assets.
This is just one of many signs that America’s chief executive officers, chief financial officers and corporate boards are behaving fearfully. They are comparable to the slothful servant in the biblical parable of the talents who buries a fortune in the ground rather than invest it. Their caution, aided by government policy, costs all of us. (...)
My analysis of the latest data from the Federal Reserve, the IRS and corporate reports shows that American businesses last year held almost $7.9 trillion of liquid assets worldwide.
Those who follow the news may be surprised, because the figure that's been mentioned lately has been just under $2 trillion. That figure, which comes from the Federal Reserve, is only for domestic cash. The Fed makes its calculations (from the latest Flow of Funds report) using IRS worldwide data after subtracting offshore money.
My estimate is conservative. I did not count cash due to American companies from their offshore subsidiaries as accounts receivable because the IRS does not provide fine details on these additional trillions of dollars. (...)
Turning taxes into profit
These facts also demonstrate that America’s CEOs, chief financial officers and corporate boards fear the future because instead of investing their cash they hold onto it. But even if cash hoarding comforts weak-kneed executives, it makes no sense for investors, workers or taxpayers.
Investors do not need a company to hold their extra cash. That’s what savings accounts are for.
Workers need companies to invest in the future, replacing old factories, purchasing new equipment and engaging in other activities that employ people in pursuit of bigger future profits.
Taxpayers also get a terrible deal. When companies siphon cash out of the country it reduces their immediate federal income taxes. Congress spends the money anyway, which requires borrowing. Companies then loan Washington the money they did not pay in taxes, collecting interest.
This means companies that do this turn a profit on their taxes. Consider a company that defers a $1 billion tax for 30 years, using the cash to buy federal debt paying 4 percent interest in an era of 3 percent inflation. The company will collect more than $2.2 billion in interest, while inflation will erode the value of the tax to $401 million, a nearly 60 percent reduction. From the government’s point of view the tax is converted from a source of revenue into an expense.
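The arithmetic in that example is easy to check. Below is a minimal sketch in Python; the 4 percent yield, 3 percent inflation, and 30-year deferral come straight from the passage above, while the assumption that both rates compound annually is mine.

```python
# Rough check of the deferred-tax example above.
# Assumption (mine): both interest and inflation compound annually.

deferred_tax = 1_000_000_000   # $1 billion in tax deferred for 30 years
years = 30
interest_rate = 0.04           # yield on federal debt
inflation_rate = 0.03

# Interest collected by parking the unpaid tax in Treasuries.
future_value = deferred_tax * (1 + interest_rate) ** years
interest_earned = future_value - deferred_tax

# What the tax bill is worth, in today's dollars, when it finally comes due.
real_value_of_tax = deferred_tax / (1 + inflation_rate) ** years

print(f"Interest earned over {years} years: ${interest_earned:,.0f}")
print(f"Inflation-adjusted value of the deferred tax: ${real_value_of_tax:,.0f}")
# Comes out to roughly $2.24 billion in interest and a tax worth about
# $412 million in today's dollars, in line with the article's $2.2 billion
# and $401 million (the small gap is presumably rounding or a slightly
# different discounting convention).
```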

by David Cay Johnston, Aljazeera America | Read more:
Image: Comstock/Thinkstock
Rage Against the Machines
Anybody who grew up in America can tell you it’s a pretty violent country, and every consumer knows that our mass culture was reflecting that fact long before it began spewing the stuff in videogames. So on the surface, it seems strange that special powers should be attributed to games. What gives? (...)
But if there is something dangerous about videogames now, it’s not the specter of players transforming into drooling sociopaths by enacting depraved fantasies. Instead of forensically dissecting the content packaged in games, we should look closely at the system of design and distribution that’s led them out of teen bedrooms and into the hands of a broader audience via computers and smartphones. It’s not Doom or Mortal Kombat or Death Race we should fear, in other words; it’s Candy Crush Saga, Angry Birds, and FarmVille.
To understand what is really distinctive about videogames, it helps to see how their operation runs like a racket: how the experience is designed to offer players a potentially toxic brew of guilty pleasure spiced with a kind of extortion and how they profit by stoking addiction. We might remember why we looked sideways at machine-enabled gaming in the first place—because it was a mode of play that seemed to normalize corrupt business practices in the guise of entertainment. Because the industry often seems like just another medium for swindlers. (...)
The new model of videogame delivery is “free-to-play” (F2P). At first it was limited to massively multiplayer online games (MMOs) like Neopets and MapleStory, which primarily relied on kids pestering their parents to fund their accounts so that they could buy in-game goods. These games always offer the first taste for free, and then ratchet up the attraction of paying for a more robust or customized gaming environment. In 2007, Facebook released a platform for developers to make free-to-play apps and games run within the social network’s ecosystem. Then came the iPhone, the Apple App Store, and all the copycats and spinoffs that it inspired. By 2010, free-to-play had become the norm for new games, particularly those being released for play online, via downloads, on social networks, or on smartphones—a category that is now quickly overtaking disc-based games. The point is to sell, sell, sell; the games give users opportunities to purchase virtual items or add-ons like clothing, hairstyles, or pets for their in-game characters.
In 2009, Facebook gaming startup darling Zynga launched a free-to-play game called FarmVille that went on to reach more than 80 million players. It offered a core experience for free, with add-ons and features available to those with enough “farm cash” scrip. Players can purchase farm cash through real-money transactions, earn it through gameplay accomplishments, or receive it as a reward for watching video ads or signing up for unrelated services that pay referral fees to game operators. Former Zynga CEO Mark Pincus sought out every possible method for increasing revenues. “I knew I needed revenues, right fucking now,” Pincus told attendees of a Berkeley startup mixer in 2009. “I did every horrible thing in the book just to get revenues right away.”
Every horrible thing in the book included designing a highly manipulative gameplay environment, much like the ones doled out by slot machines and coin-ops. FarmVille users had to either stop after they expended their in-game “energy” or pay up, in which case they could immediately continue. The in-game activities were designed so that they took much longer than any single play session could reasonably last, requiring players to return at prescheduled intervals to complete those tasks or else risk losing work they’d previously done—and possibly spent cash money to pursue. Players were prodded to spread notices and demands among their Facebook friends in exchange for items or favors that were otherwise inaccessible. As with slots and coin-ops, the occasional calculated anomaly in a free-to-play game doesn’t alter the overall results of the system, but only recharges the desire for another surprise, another epiphany; meanwhile, the expert player and the jackpot winner are exceptions that prove the rule.
FarmVille’s mimicry of the economically obsolete production unit of the family farm, in short, proved all too apt—like the hordes of small farmers sucked into tenantry and debt peonage during the first wave of industrialization in America, the freeholders on FarmVille’s vast virtual acreage soon learned that the game’s largely concealed infrastructure was where all the real fee-gouging action was occurring. Even those who kept their wallets tucked away in their pockets and purses would pay in other ways—by spreading “viral” invitations to recruit new farmers, for example. FarmVille users might have been having fun in the moment, but before long, they would look up to discover they owed their souls to the company store.
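The energy-gating loop described above is simple enough to sketch in a few lines of Python. Everything here is invented for illustration (the class name, the regeneration rate, the costs); it is not taken from FarmVille or any other real game, but it shows the basic pattern: actions drain a slowly refilling pool, and a player who runs dry either waits out the timer or pays to skip it.

```python
import time

# Toy model of a free-to-play "energy" gate. All names and numbers are
# invented for illustration; nothing here comes from any real game's code.

class EnergyGate:
    def __init__(self, max_energy=30, seconds_per_point=180):
        self.max_energy = max_energy
        self.seconds_per_point = seconds_per_point  # slow, scheduled regeneration
        self.energy = max_energy
        self.last_update = time.time()

    def _regenerate(self):
        # Trickle energy back at a fixed real-time rate.
        elapsed = time.time() - self.last_update
        regained = int(elapsed // self.seconds_per_point)
        if regained:
            self.energy = min(self.max_energy, self.energy + regained)
            self.last_update += regained * self.seconds_per_point

    def perform_action(self, cost=5):
        """Attempt a gameplay action; return False once the player runs dry."""
        self._regenerate()
        if self.energy >= cost:
            self.energy -= cost
            return True
        return False  # out of energy: wait for the timer, or buy more

    def buy_energy(self, points):
        """The monetization hook: pay real money to skip the wait."""
        self.energy = min(self.max_energy, self.energy + points)


gate = EnergyGate()
while gate.perform_action():
    pass  # play freely until the gate closes
print("Out of energy. Come back in a few hours, or pay to keep going.")
```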
by Ian Bogost, Baffler | Read more:
Image: Micael Duffy
Tuesday, March 18, 2014
The Human Heart of Sacred Art
The sun had come up brilliantly after a heavy rain, and the trees were glistening and very wet. On some impulse, plain exuberance, I suppose, the fellow jumped up and caught hold of a branch, and a storm of luminous water came pouring down on the two of them, and they laughed and took off running, the girl sweeping water off her hair and her dress as if she were a little bit disgusted, but she wasn't. It was a beautiful thing to see, like something from a myth. I don't know why I thought of that now, except perhaps because it is easy to believe in such moments that water was made primarily for blessing, and only secondarily for growing vegetables or doing the wash. I wish I had paid more attention to it.
It is a wonderful, luminous passage, typical of Robinson's ability to discover the poetic even in the most mundane. Robinson is a Christian, indeed a Calvinist (though, improbably, she tends to see John Calvin more as a kind of Erasmus-like humanist than as the firebrand preacher who railed against the human race as constituting a "teeming horde of infamies"), whose life and writing are suffused with religious faith. Robinson's fiction possesses an austere beauty, "a Protestant bareness" as the critic James Wood has put it,[1] that recalls both the English poet George Herbert and "the American religious spirit that produced Congregationalism and nineteenth-century Transcendentalism and those bareback religious riders Emerson, Thoreau and Melville".
There is in Robinson's writing a spiritual force that clearly springs from her religious faith. It is nevertheless a spiritual force that transcends the merely religious. "There is a grandeur in this vision of life", Darwin wrote in The Origin of Species, expressing his awe at nature's creation of "endless forms most beautiful and most wonderful". The springs of Robinson's awe are different from those of Darwin's. And yet she too finds grandeur in all that she touches, whether in the simple details of everyday life or in the great moral dilemmas of human existence. Robinson would probably describe it as the uncovering of a divine presence in the world. But it is also the uncovering of something very human, a celebration of our ability to find the poetic and the transcendent, not through invoking the divine, but as a replacement for the divine.
One does not, of course, have to be religious to appreciate religiously inspired art. One can, as a non-believer, listen to Mozart's Requiem or Nusrat Fateh Ali Khan's qawwali, look upon Michelangelo's Adam or the patterns of the Sheikh Lotfollah Mosque in Isfahan in Iran, read Dante's Divine Comedy or Lao Zi's Daode Jing, and be drawn into a world of awe and wonder. Many believers may question whether non-believers can truly comprehend the meaning of religiously inspired art. We can, however, turn this round and ask a different question. What is it that is "sacred" about sacred art? For religious believers, the sacred, whether in art or otherwise, is clearly that which is associated with the holy and the divine. The composer John Tavener, who died at the end of last year, was one of the great modern creators of sacred music. A profoundly religious man – he was a convert to Russian Orthodoxy – Tavener suffused much of his music with his faith and sense of mysticism. Historically, and in the minds of most people today, the sacred in art is, as it was with Tavener, inextricably linked with religious faith.
There is, however, another sense in which we can think about the sacred in art. Not so much as an expression of the divine but, paradoxically perhaps, more an exploration of what it means to be human; what it is to be human not in the here and now, not in our immediacy, nor merely in our physicality, but in a more transcendental sense. It is a sense that is often difficult to capture in a purely propositional form, but which we seek to grasp through art or music or poetry. Transcendence does not, however, necessarily have to be understood in a religious fashion – that is, solely in relation to some concept of the divine. It is rather a recognition that our humanness is invested not simply in our existence as individuals or as physical beings, but also in our collective existence as social beings and in our ability, as social beings, to rise above our individual physical selves and to see ourselves as part of a larger project, to cast upon the world, and upon human life, a meaning or purpose that exists only because we as human beings create it.
Image: Richard Pluck. Source: Flickr
How "Revolution" Became an Adjective
In case of rain, the revolution will take place in the hall.
-- Erwin Chargaff
For the last several years, the word “revolution” has been hanging around backstage on the national television talk-show circuit waiting for somebody, anybody -- visionary poet, unemployed automobile worker, late-night comedian -- to cue its appearance on camera. I picture the word sitting alone in the green room with the bottled water and a banana, armed with press clippings of its once-upon-a-time star turns in America’s political theater (tie-dyed and brassiere-less on the barricades of the 1960s countercultural insurrection, short-haired and seersucker smug behind the desks of the 1980s Reagan Risorgimento), asking itself why it’s not being brought into the segment between the German and the Japanese car commercials.
Surely even the teleprompter must know that it is the beast in the belly of the news reports, more of them every day in print and en blog, about income inequality, class conflict, the American police state. Why then does nobody have any use for it except in the form of the adjective, revolutionary, unveiling a new cellphone app or a new shade of lipstick?
I can think of several reasons, among them the cautionary tale told by the round-the-clock media footage of dead revolutionaries in Syria, Egypt, and Tunisia, also the certain knowledge that anything anybody says (on camera or off, to a hotel clerk, a Facebook friend, or an ATM) will be monitored for security purposes. Even so, the stockpiling of so much careful silence among people who like to imagine themselves on the same page with Patrick Henry -- “Give me liberty, or give me death” -- raises the question as to what has become of the American spirit of rebellion. Where have all the flowers gone, and what, if anything, is anybody willing to risk in the struggle for “Freedom Now,” “Power to the People,” “Change We Can Believe In”?
My guess is next to nothing that can’t be written off as a business expense or qualified as a tax deduction. Not in America at least, but maybe, with a better publicist and 50% of the foreign rights, somewhere east of the sun or west of the moon. (...)
I inherited the instinct as a true-born American bred to the worship of both machinery and money; an appreciation of its force I acquired during a lifetime of reading newspaper reports of political uprisings in the provinces of the bourgeois world state -- in China, Israel, and Greece in the 1940s; in the 1950s those in Hungary, Cuba, Guatemala, Algeria, Egypt, Bolivia, and Iran; in the 1960s in Vietnam, France, America, Ethiopia, and the Congo; in the 1970s and 1980s in El Salvador, Poland, Nicaragua, Kenya, Argentina, Chile, Indonesia, Czechoslovakia, Turkey, Jordan, Cambodia, again in Iran; over the last 24 years in Russia, Venezuela, Lebanon, Croatia, Bosnia, Libya, Tunisia, Syria, Ukraine, Iraq, Somalia, South Africa, Romania, Sudan, again in Algeria and Egypt.
The plot line tends to repeat itself -- first the new flag on the roof of the palace, rapturous crowds in the streets waving banners; then searches, requisitions, massacres, severed heads raised on pikes; soon afterward the transfer of power from one police force to another police force, the latter more repressive than the former (darker uniforms, heavier motorcycles) because more frightened of the social and economic upheavals they can neither foresee nor control.
All the shiftings of political power produced changes within the committees managing regional budgets and social contracts on behalf of the bourgeois imperium. None of them dethroned or defenestrated Adams’ dynamo or threw off the chains of Marx’s cash nexus. That they could possibly do so is the “romantic idea” that Albert Camus, correspondent for the French Resistance newspaper Combat during and after World War II, sees in 1946 as having been “consigned to fantasy by advances in the technology of weaponry.”
The French philosopher Simone Weil draws a corollary lesson from her acquaintance with the Civil War in Spain, and from her study of the communist Sturm und Drang in Russia, Germany, and France subsequent to World War I. “One magic word today seems capable of compensating for all sufferings, resolving all anxieties, avenging the past, curing present ills, summing up all future possibilities: that word is revolution... This word has aroused such pure acts of devotion, has repeatedly caused such generous blood to be shed, has constituted for so many unfortunates the only source of courage for living, that it is almost a sacrilege to investigate it; all this, however, does not prevent it from possibly being meaningless.”
by Lewis Lapham, Tom Dispatch | Read more:
Image: via:
On a Strange Roof, Thinking of Home
In 2009 The Oxford American polled 134 Southern writers and academics and put together a list of the greatest Southern novels of all time based on their responses. All save one, The Adventures of Huckleberry Finn, were published between 1929 and 1960. What we think of when we think of “Southern fiction” exists now almost entirely within the boundaries of the two generations of writers that occupied that space. Asked to name great American authors, we’ll give answers that span time from Hawthorne and Melville to Whitman to DeLillo. Ask for great Southern ones and you’ll more than likely get a name from the Southern Renaissance: William Faulkner, Harper Lee, Flannery O’Connor, Walker Percy, Eudora Welty, Thomas Wolfe—all of them sandwiched into the same couple of post-Agrarian decades.
The two waves of Southern writers that crested in the wake of the Agrarian-Mencken fight, first in the 1930s and ’40s, and then in the ’50s and ’60s, didn’t build upon the existing tradition of Southern letters. They weren’t conceived of as new additions to the canon, but as an entirely new canon unto themselves, supplanting the old. They remade the popular notion of Southern literary culture, obscuring predecessors who had, in their time, seemed immortal.
“Southern,” as a descriptor of literature, is immediately familiar, possessed of a thrilling, evocative, almost ontological power. It is a primary descriptor, and alone among American literary geographies in that respect. Faulkner’s work is essentially “Southern” in the same way that Thomas Pynchon’s is essentially “postmodern,” but not, you’ll note, “Northeastern.” To displace Faulkner from his South would be to remove an essential quality; he would functionally cease to exist in a recognizable way.
It applies to the rest of the list, too (with O’Connor the possible exception, being inoculated somewhat by her Catholicism). It is impossible to imagine these writers divorced from the South. This is unusual, and a product of the unusual circumstances that gave rise to them. Faulkner, Lee, Percy, and Welty were no more Southern than Edgar Allan Poe or Sidney Lanier or Kate Chopin, and yet their writing, in the context of the South at that time, definitively was. There’s a universal appeal to their work, to be certain, but it’s also very much a regional literature, one grappling with a very specific set of circumstances in a fixed time, and correspondingly, one with very specific interests: the wearing away of the old Southern social structures, the economic uncertainty inherent in family farming, and overt, systematized racism (which, while undoubtedly still present in the South today, is very much changed from what it was). (...)
Put a character in a tobacco field and give them a shotgun and an accent and it will evoke, without fail, a sense of the South; this is true. If they pop off with a “Hey there, y’all,” it will sound fitting, correct, like the accordion bleats that mark transitions between stories in a public radio program; useful in pushing you toward a desired emotional state, and fun to listen to when done well. But, on the other hand, it doesn’t mean anything. If this is, in fact, “Southern fiction,” then it is becoming as stale as it was a century ago—updated only in that, instead of regurgitating the Lost Cause ethos, it is now Faulkner’s South that’s subjected to the regional nostalgic impulse, a double reverberation.
There is nothing wrong with these writers because of this. It’s not that they’ve failed somehow to keep up, or are stupefying readers, or anything of the sort. It’s that this kind of writing is no longer reflective of the South—or, it reflects a South that is no longer. We wouldn’t think of someone writing whaling novels as quintessentially “New England” anymore, either. The South isn’t so homogenous a culture as it once was, and the societal tropes that Faulkner and Welty and even Barry Hannah grew up with and explored in their fiction are, in large part, gone. The rise of industrial-scale agribusiness, rapid suburbanization, the death of traditional industries like textiles, the corresponding growth of high-tech industries, a major increase in the Hispanic population: all these things and many more have contributed to a wildly different South than the one summoned in what we casually call “Southern writing.”
by Ed Winstead, Guernica | Read more:
Image: Alec Soth
