Sunday, November 18, 2012
Maybe the Web's Not a Place to Stick Your Ads
"Steve Jobs hates the internet." So jokes a contact of mine whenever he laments what he regards as Apple's relatively paltry investment in web advertising. The point that person -- who once had a stake in that investment -- is trying to make is not that Mr. Jobs is actually a closet Luddite but that Apple, one of the world's strongest brands, isn't as experimental as it should be and, as such, isn't contributing enough to the gold rush that is the digital-advertising business.
That's one way to look at it. Another is that regardless of what it lays out on ads, Apple has a greater online presence than most brands that spend many times what it does. Consider that in December, Apple sites had the 10th-best traffic figures on the web. Those sites, which grabbed more unique visitors than many of the most popular sites where Apple would place its own ads -- including The New York Times, NBC Universal and ESPN -- are destinations. Plus, there's the endless gusher of Apple-obsessed jabbering on any number of blogs and social networks. Oh, and Apple did manage to lay out $32 million in measured media online in 2007, more than double the amount it spent the year before and four times its 2005 outlay.
Look closely at the disappointment that an advanced marketer in 2008 wouldn't be willing to spend more than that to spray its brand all over an Internet already saturated with it, and you'll see very clearly some misperceptions plaguing the marketing business today. First, there's the basic mistake that marketing is synonymous with advertising. Then there's the underexamined assumption, so popular in marketing circles of all kinds, that when it comes to helping companies create brands or move product, the Internet's greatest use is as an ad medium.
Are we having the right conversation?
What you're about to read is not an argument for making over web marketing as a factory for destination websites or for making every brand a content player. Not every brand has as much natural pull as Apple and, anyway, there have already been high-profile flubs in the if-you-build-a-content-channel-they-will-come department (Bud.TV, anyone?). This, however, is a call to give some thought to a question that's not asked enough about the Internet: Should it even be viewed as an ad medium? After all, in some quarters of the broader marketing world, the habit of looking at advertising as the most important tool in the marketer's toolbox is undergoing intense interrogation. Consider the growth of the word-of-mouth marketing business, premised on the notion that it's people, not corporations, who help other people make consumer decisions. Or look at the growing importance put on public relations and customer-relationship management, both in marketing circles and even in the C-suite.
The same conversation should be going on around the Internet. Trends like those listed suggest the possibility of a post-advertising age, a not-too-distant future where consumers will no longer be treated as subjects to be brainwashed with endless repetitions of whatever messaging some focus group liked. That world isn't about hidden persuasion but about transparency and dialogue, and at its center is that supreme force of consumer empowerment, the Internet. But when you look at how the media and marketing business packages the Internet -- as just more space to be bought and sold -- you have to worry that the history of mass media is just trying to repeat itself. Rarely does a fortnight go by without some new bullish forecast for ad growth, stoking a digital exuberance among media owners that often drowns out critical thinking about the medium itself.
Here's the issue: The Internet is too often viewed as inventory, as a place where brands pay for the privilege of being adjacent to content, like prime-time TV and glossy magazines, relics of the pre-blog days when getting into the media game actually required infrastructure and distribution. The presumed power of that adjacency has provided the groundwork for the media industry for decades and long ago calcified into an auspicious economic reality the big media companies are trying to take with them into the digital future. For the media seller, ads and ad revenue might be all that's left.
That Was Fast
From: Teller, Paul
Sent: Saturday, November 17, 2012 04:11 PM
Subject: RSC Copyright PB
We at the RSC take pride in providing informative analysis of major policy issues and pending legislation that accounts for the range of perspectives held by RSC Members and within the conservative community. Yesterday you received a Policy Brief on copyright law that was published without adequate review within the RSC and failed to meet that standard. Copyright reform would have far-reaching impacts, so it is incredibly important that it be approached with all facts and viewpoints in hand. As the RSC’s Executive Director, I apologize and take full responsibility for this oversight. Enjoy the rest of your weekend and a meaningful Thanksgiving holiday....
Paul S. Teller
Executive Director
U.S. House Republican Study Committee
Paul.Teller@mail.house.gov
http://republicanstudycommittee.com
The idea that this was published "without adequate review" is silly. Stuff doesn't just randomly appear on the RSC website. Anything being posted there has gone through the same full review process. What happened, instead, was that the entertainment industry's lobbyists went crazy, and some in the GOP folded.
by Mike Masnick, TechDirt | Read more:
Saturday, November 17, 2012
David Foster Wallace: The Nature of Fun
[ed. Excerpt from DFW's posthumously published collection Both Flesh and Not on a writer's motivation.]
This is the sort of parabolic straw you cling to as you struggle with the issue of fun, as a writer. In the beginning, when you first start out trying to write fiction, the whole endeavour's about fun. You don't expect anybody else to read it. You're writing almost wholly to get yourself off. To enable your own fantasies and deviant logics and to escape or transform parts of yourself you don't like. And it works – and it's terrific fun. Then, if you have good luck and people seem to like what you do, and you actually get to get paid for it, and get to see your stuff professionally typeset and bound and blurbed and reviewed and even (once) being read on the AM subway by a pretty girl you don't even know, it seems to make it even more fun. For a while. Then things start to get complicated and confusing, not to mention scary. Now you feel like you're writing for other people, or at least you hope so. You're no longer writing just to get yourself off, which – since any kind of masturbation is lonely and hollow – is probably good. But what replaces the onanistic motive? You've found you very much enjoy having your writing liked by people, and you find you're extremely keen to have people like the new stuff you're doing. The motive of pure personal fun starts to get supplanted by the motive of being liked, of having pretty people you don't know like you and admire you and think you're a good writer. Onanism gives way to attempted seduction, as a motive.
Now, attempted seduction is hard work, and its fun is offset by a terrible fear of rejection. Whatever "ego" means, your ego has now gotten into the game. Or maybe "vanity" is a better word. Because you notice that a good deal of your writing has now become basically showing off, trying to get people to think you're good. This is understandable. You have a great deal of yourself on the line, now, writing – your vanity is at stake. You discover a tricky thing about fiction writing: a certain amount of vanity is necessary to be able to do it at all, but any vanity above that certain amount is lethal. At this point 90+% of the stuff you're writing is motivated and informed by an overwhelming need to be liked. This results in shitty fiction. And the shitty work must get fed to the wastebasket, less because of any sort of artistic integrity than simply because shitty work will make you disliked. At this point in the evolution of writerly fun, the very thing that's always motivated you to write is now also what's motivating you to feed your writing to the wastebasket. This is a paradox and a kind of double bind, and it can keep you stuck inside yourself for months or even years, during which you wail and gnash and rue your bad luck and wonder bitterly where all the fun of the thing could have gone.
The smart thing to say, I think, is that the way out of this bind is to work your way somehow back to your original motivation: fun. And, if you can find your way back to the fun, you will find that the hideously unfortunate double bind of the late vain period turns out really to have been good luck for you. Because the fun you work back to has been transfigured by the unpleasantness of vanity and fear, an unpleasantness you're now so anxious to avoid that the fun you rediscover is a way fuller and more large-hearted kind of fun. It has something to do with Work as Play. Or with the discovery that disciplined fun is more fun than impulsive or hedonistic fun. Or with figuring out that not all paradoxes have to be paralysing. Under fun's new administration, writing fiction becomes a way to go deep inside yourself and illuminate precisely the stuff you don't want to see or let anyone else see, and this stuff usually turns out (paradoxically) to be precisely the stuff all writers and readers share and respond to, feel. Fiction becomes a weird way to countenance yourself and to tell the truth instead of being a way to escape yourself or present yourself in a way you figure you will be maximally likeable. This process is complicated and confusing and scary, and also hard work, but it turns out to be the best fun there is.
by David Foster Wallace, The Guardian | Read more:
Photograph: © Gary Hannabarger/Corbis
Our New $237/month Health Insurance Plan
As noted in past articles, I’ve had a pretty cozy health insurance situation up to this point. Growing up in Canada, I was blissfully unaware of the issue, since like virtually all other rich nations, that country provides universal healthcare for all citizens. I took advantage of that system for exactly two major health events: being born in the early 1970s, and a broken ankle after a bike accident in the late 1990s. Both times, the hospital got the job done well.
Moving to the United States, I found the choice of employer-offered health insurance plans confusing, so I just went with the cheapest one. Occasional gaps in coverage occurred as I hopped between employers throughout the early 2000s, but I didn’t notice since I was fortunate enough to have no occasion to visit a doctor during those years.
Then early retirement came and my wife was kind enough to throw me under the umbrella of coverage offered by her part-time employer for the last five years. Although I was grateful, I was not able to take advantage of the insurance outside of an annual visit to the doctor for a checkup. But it did help out greatly by paying most of the bill for the hospital birth of our son.
At last, she quit her part-time job, the free insurance ended, and we were forced to think for ourselves earlier this fall. So all of the health history above went into deciding how to cover ourselves for the rest of our adult lives, during which we will probably never be conventionally employed again.
The thing about insurance is that it is best enjoyed as a game of numbers and probabilities – not feared as a nightmare of imagined outcomes. As I noted long ago in Insurance: A tax on people who are bad at math?, there are only two situations in which I buy insurance:
- If I am significantly riskier than the insurance company thinks I am, or
- If the consequences of being uninsured would be too disastrous for me to handle, yet still have a reasonable chance of occurring
Health insurance is different: medical care is expensive in the US, with lifetime costs for major conditions potentially reaching to a million dollars or more. On top of that, my young son is a wild card who is more likely than me to injure himself while playing, and I still have the slightly dangerous hobbies of mountain biking and snowboarding. We may even be slightly riskier than the insurance company estimates, making the choice to buy health insurance a positive one.
The next step was looking at our own health care spending over the 13 years we’ve lived in the US:
- From 1999-2005, costs were negligible: less than one check-up per year each, with no treatments or prescriptions. They were covered by insurance, but even if paid out of pocket, this would have averaged to under $200 per year.
- In 2006, the birth of the boy and related issues racked up a bill of about $20,000 (a routine surgical intervention was needed, quadrupling the cost), $4,500 of which we had to pay ourselves.
- From 2006 to the present, we have averaged one doctor checkup each per year, plus one antibiotic prescription per year between us, which if paid out of pocket would have cost about $600 per year.
Total medical spending (mostly covered by insurance): about $25,000
Total premiums paid by our employers to insurance companies on our behalf: about $100,000
Hey, there’s an unexpected result! We took a 12-year period which included the once-in-a-lifetime (for us) event of a hospital birth of a baby with added surgery, and it still ended up that the insurance premiums were about four times higher than the insurance benefits. This told me that I should probably shop carefully for insurance, in order to get something that protects me from those million-dollar illnesses, but does not attempt to pay for any hundred-dollar incidents, since the cost for that extra protection is clearly very high.
The next stop was an insurance comparison engine. We used ehealthinsurance.com* to do this search, which allowed me to see offerings from the companies that compete specifically in my area – sorted by price. I was pleased to note that prices drop rapidly as the annual deductible rises – meaning most health care expenses are statistically the lower cost ones, and the million-dollar illnesses are indeed very rare (otherwise the premiums would be different).
The winning plan for us was one called the “Saver80 United Health One” plan from United Healthcare, with a quoted price of $219/month** for the family (two 38-year-old adults and a 6-year-old boy). The price is low because it comes with a relatively whopping $10,000-per-person / $20,000-per-family deductible, meaning we are very unlikely to ever use this coverage. But at the same time, covering $10-20k in the event of a catastrophe would not be a significant hardship for us, especially given that this is an unlikely event. Even if the expense were to recur annually for decades, we could adjust our lifestyle as needed, or earn more income, or get a job with insurance coverage, or make any number of other changes – assuming we even survived that long with such a serious condition. So it passes the test of putting a safe cap on expenses.
All plans these days also provide one free checkup (or “annual physical”) doctor visit per year, with no copay or deductible at all. The value of this alone is worth 10-15% of the annual premium of our new plan.
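[ed. And similar quick math for the new plan itself: a minimal sketch using the figures quoted above. The $300 checkup value at the end is an assumed illustrative number, not something stated in the article.]

# Rough annual cost picture for the high-deductible plan described above.
monthly_premium = 219                  # quoted price (the post's title cites $237/month)
annual_premium = monthly_premium * 12
print(f"Annual premium: ${annual_premium:,}")                               # $2,628

family_deductible = 20_000             # $10,000 per person, $20,000 per family
worst_case = annual_premium + family_deductible
print(f"Worst-case out-of-pocket in a catastrophic year: ${worst_case:,}")  # $22,628

checkup_value = 300                    # assumption: rough retail value of the plan's free annual physicals
print(f"Free checkups as a share of the premium: {checkup_value / annual_premium:.0%}")  # ~11%, within the 10-15% claimed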
by MMM | Read more:
The Real Scandal
[ed. See also, Collateral damage of our surveillance state.]
This is a surveillance state run amok. It also highlights how any remnants of internet anonymity have been all but obliterated by the union between the state and technology companies.
But, as unwarranted and invasive as this all is, there is some sweet justice in having the stars of America's national security state destroyed by the very surveillance system which they implemented and over which they preside. As Trevor Timm of the Electronic Frontier Foundation put it this morning: "Who knew the key to stopping the Surveillance State was to just wait until it got so big that it ate itself?" [ed. Or, as Kurt Opsahl, a senior staff attorney at EFF, asks: "If the director of central intelligence isn't able to successfully keep his emails private, what chance do I have?"]
It is usually the case that abuses of state power become a source for concern and opposition only when they begin to subsume the elites who are responsible for those abuses. Recall how former Democratic Rep. Jane Harman - one of the most outspoken defenders of the illegal Bush National Security Agency (NSA) warrantless eavesdropping program - suddenly began sounding like an irate, life-long ACLU privacy activist when it was revealed that the NSA had eavesdropped on her private communications with a suspected Israeli agent over alleged attempts to intervene on behalf of AIPAC officials accused of espionage. Overnight, one of the Surveillance State's chief assets, the former ranking member of the House Intelligence Committee, transformed into a vocal privacy proponent because now it was her activities, rather than those of powerless citizens, which were invaded.
With the private, intimate activities of America's most revered military and intelligence officials being smeared all over newspapers and televisions for no good reason, perhaps similar conversions are possible. Put another way, having the career of the beloved CIA Director and the commanding general in Afghanistan instantly destroyed due to highly invasive and unwarranted electronic surveillance is almost enough to make one believe not only that there is a god, but that he is an ardent civil libertarian.
The US operates a sprawling, unaccountable Surveillance State that - in violent breach of the core guarantees of the Fourth Amendment - monitors and records virtually everything even the most law-abiding citizens do. Just to get a flavor for how pervasive it is, recall that the Washington Post, in its 2010 three-part "Top Secret America" series, reported: "Every day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications."
by Glenn Greenwald, The Guardian | Read more:
Image: NSA Headquarters via BGR
A Preface to "Song Reader"
[ed. Story behind Beck's new "album" Song Reader, 20 songbooks designed to be interpreted by the listener/player.]
After releasing an album in the mid-nineteen-nineties, I was sent a copy of the sheet-music version by a publisher who had commissioned piano transcriptions and guitar-chord charts of everything on the original recording. Seeing the record’s sonic ideas distilled down to notation made it obvious that most of the songs weren’t intended to work that way. Reversing the process and putting together a collection of songs in book form seemed more natural—it would be an album that could only be heard by playing the songs.
A few years later, I came across a story about a song called “Sweet Leilani,” which Bing Crosby had released in 1937. Apparently, it was so popular that, by some estimates, the sheet music sold fifty-four million copies. Home-played music had been so widespread that nearly half the country had bought the sheet music for a single song, and had presumably gone through the trouble of learning to play it. It was one of those statistics that offers a clue to something fundamental about our past.
I met with Dave Eggers in 2004 to talk about doing a songbook project with McSweeney’s. Initially I was going to write the songs the same way I’d write one of my albums, only in notated form, leaving the interpretation and performance to the player. But after a few discussions, the approach broadened. We started collecting old sheet music, and becoming acquainted with the art work, the ads, the tone of the copy, and the songs themselves. They were all from a world that had been cast so deeply into the shadow of contemporary music that only the faintest idea of it seemed to exist anymore. I wondered if there was a way to explore that world that would be more than an exercise in nostalgia—a way to represent how people felt about music back then, and to speak to what was left, in our nature, of that instinct to play popular music ourselves.
When I started out on guitar, I gravitated toward folk and country blues; they seemed to work well with the limited means I had to make music of my own. The popular songs, by contrast, didn’t really translate to my Gibson flat-top acoustic. There was an unspoken division between the music you heard on the radio and the music you were able to play with your own hands. By then, recorded music was no longer just the document of a performance—it was a composite of style, hooks, and production techniques, an extension of a popular personality’s image within a current sound.
The pop of the early twentieth century had a different character. Songs could function as an accompaniment to some action; they could speak to specific parts of life, even as they indulged in fantasy and lyricism. They could be absurd as often as they were sentimental—you could pick up “The Unlucky Velocipedist” along with “Get Off of Cuba’s Toes” and “I’m a Cake-Eating Man.” Motifs repeated—there are thousands of “moon” songs, exotic-locale songs, place-name songs, songs about new inventions, stuttering songs—but even though much of the music was formulaic, there was originality and eccentricity as well. And professional songwriters, remote figures, names on a page, occupied a central place in the lives of millions. The culture was closer to its folk traditions, to the time of songs being passed down. The music felt like it could belong to almost anybody.
You could say that things like karaoke or band-replicating video games have filled that vacuum, but home music was different in its demands—a fundamentally more individual expression. Learning to play a song is its own category of experience; recorded music made much of that participation unnecessary. More recently, digital developments have made songs even less substantial-seeming than they were when they came on vinyl or CD. Access to music has changed the perception of it. Songs have lost their cachet; they compete with so much other noise now that they can become more exaggerated in an attempt to capture attention. The question of what a song is supposed to do, and how its purpose has altered, has begun to seem worth asking.
by Beck Hansen, New Yorker | Read more:
The Pretty Restless
[ed. As one commenter noted, this sounds like it should have been in The Hangover.]
Last Call
It’s apparent in their hospitals, where since the 1970s rates of cirrhosis and other liver diseases among the middle-aged have increased by eightfold for men and sevenfold for women. And it’s apparent in their streets, where the carousing, violent “lager lout” is as much a symbol of modern Britain as Adele, Andy Murray, and the London Eye. Busting a bottle across someone’s face in a bar is a bona fide cultural phenomenon—so notorious that it has its own slang term, “glassing,” and so common that at one point the Manchester police called for bottles and beer mugs to be replaced with more shatter-resistant material. In every detail but the style of dress, the alleys of London on a typical Saturday night look like the scenes in William Hogarth’s famous pro-temperance print Gin Lane. It was released in 1751.
The United States, although no stranger to alcohol abuse problems, is in comparatively better shape. A third of the country does not drink, and teenage drinking is at a historic low. The rate of alcohol use among seniors in high school has fallen 25 percentage points since 1980. Glassing is something that happens in movies, not at the corner bar.
Why has the United States, so similar to Great Britain in everything from language to pop culture trends, managed to avoid the huge spike of alcohol abuse that has gripped the UK? The reasons are many, but one stands out above all: the market in Great Britain is rigged to foster excessive alcohol consumption in ways it is not in the United States—at least not yet. (...)
A moment’s thought makes it obvious that alcohol is different from, say, apples. Apples don’t form addicts. Apples don’t foster disease. Society doesn’t bear the cost of excessive apple consumption. Society does bear the cost of alcoholism, drink-related illness, and drunken violence and crime. The fact that alcohol is habit forming and life threatening among a substantial share of those who use it (and kills or damages the lives of many who don’t) means that a market for it inevitably imposes steep costs on society.
It was the recognition of this plain truth that led post-Prohibition America to regulate the alcohol market as a rancher might fetter a horse—letting it roam freely within certain confines, neither as far nor as fast as it might choose.
The UK, by contrast, spent most of the last eighty years fussing with the barn door while the beast ran wild. It made sure that every pub closed at the appointed hour, that every glass of ale contained a full Queen’s pint, that every dram of whiskey was doled out in increments precise to the milliliter—and simultaneously allowed the industry to adopt virtually any tactic that could get more young people to start drinking and keep at it throughout their lives. It is no coincidence that one of the first major studies to prompt a shift in Britain’s approach to liquor regulation, published in 2003, is titled Alcohol: No Ordinary Commodity.
The UK’s modern drinking problem started appearing in the years following World War II. Some of the developments were natural. Peace reigned; people wanted to have fun again; there was an understandable push toward relaxing wartime restrictions and loosening puritan attitudes left over from the more temperance-minded prewar years.
But other changes were happening that deserved, but did not get, a dose of caution. As the nation shifted to a service and banking economy, and from agricultural and industrial towns to modern cities and suburbs, social life moved from pubs to private homes and shopping moved from the local grocer, butcher, and fishmonger to the all-in-one supermarket. In the 1960s, loosened regulations led to a boom in the off-license sale of alcohol—that is, store-based sale for private consumption, as opposed to on-license sale in public drinking establishments. But whereas pubs were required to meet certain responsibilities (such as refusing to serve the inebriated), and had their hours of operation strictly regulated (for example, having to close their doors temporarily in the afternoon, to prevent all-day drinking), few limits were placed on off-licenses.
Supermarkets, in particular, profited from the new regime. They were free to stock wine, beer, and liquor alongside other consumables, making alcohol as convenient to purchase as marmalade. They were free, also, to offer discounts on bulk sales, and to use alcoholic beverages as so-called loss leaders, selling them below cost to lure customers into their stores and recouping the losses through increased overall sales. Very quickly, cheap booze became little more than a force multiplier for groceries.
When the supermarkets themselves subsequently underwent a wave of consolidation, the multiplier only increased. Four major chains—Tesco, Sainsbury’s, Asda, and Morrisons—now enjoy near-total dominance in the UK, and their vast purchasing power lets them cut alcohol prices even further. Relative to disposable income, alcohol today costs 40 percent less than it did in 1980. The country is awash in a river of cheap drink, available on seemingly every corner.
by Tim Heffernan, Washington Monthly | Read more:
Can American Diplomacy Ever Come Out of Its Bunker?
Three decades later, after serving as an ambassador in three countries, Neumann found himself marveling at how much his profession has changed. “The dangers have gotten worse, but the change is partly psychological,” he told me. “There’s less willingness among our political leaders to accept risks, and all that has driven us into the bunker.”
Nothing illustrated those changes better than the death of J. Christopher Stevens, after an assault by jihadis on the U.S. mission in Benghazi on Sept. 11. Stevens was a brave and thoughtful diplomat who, like Neumann, lived to engage with ordinary people in the countries where he served, to get past the wire. Yet his death was treated as a scandal, and it set off a political storm that seems likely to tie the hands of American diplomats around the world for some time to come. Congressmen and Washington pundits accused the administration of concealing the dangers Americans face abroad and of failing Stevens by providing inadequate security. Threats had been ignored, the critics said, seemingly unaware that a background noise of threats is constant at embassies across the greater Middle East. The death of an ambassador would not be seen as the occasional price of a noble but risky profession; someone had to be blamed.
Lost in all this partisan wrangling was the fact that American diplomacy has already undergone vast changes in the past few decades and is now so heavily encumbered by fortresslike embassies, body armor and motorcades that it is almost unrecognizable. In 1985 there were about 150 security officers in U.S. embassies abroad, and now there are about 900. That does not include the military officers and advisers, whose presence in many embassies — especially in the Middle East — can change the atmosphere. Security has gone from a marginal concern to the very heart of American interactions with other countries.
The barriers are there for a reason: Stevens’s death attests to that, as do those of Americans in Beirut, Baghdad and other violent places. But the reaction to the attack in Benghazi crystallized a sense among many diplomats that risks are less acceptable in Washington than they once were, that the mantra of “security” will only grow louder. As a result, some of the country’s most distinguished former ambassadors are now asking anew what diplomacy can achieve at such a remove.
“No one has sat back to say, ‘What are our objectives?’ ” said Prudence Bushnell, who was ambassador to Kenya when the Qaeda bombing took place there in 1998, killing more than 200 people and injuring 4,000. “The model has become, we will go to dangerous places and transform them, and we will do it from secure fortresses. And it doesn’t work.”
Photo: Tara Todras-Whitehill for The New York Times
Friday, November 16, 2012
A Phony Hero for a Phony War
[ed. See also: The Medals They Carried.]
“What’s wrong with a general looking good?” you may wonder. I would propose that every moment a general spends on his uniform jacket is a moment he’s not doing his job, which is supposed to be leading soldiers in combat and winning wars — something we, and our generals, stopped doing about the time that MacArthur gold-braided his way around the stalemated Korean War.
And now comes “Dave” Petraeus, and the Iraq and Afghanistan conflicts. No matter how good he looked in his biographer-mistress’s book, it doesn’t make up for the fact that we failed to conquer the countries we invaded, and ended up occupying undefeated nations.
The genius of General Petraeus was to recognize early on that the war he had been sent to fight in Iraq wasn’t a real war at all. This is what the public and the news media — lamenting the fall of the brilliant hero undone by a tawdry affair — have failed to see. He wasn’t the military magician portrayed in the press; he was a self-constructed hologram, emitting an aura of preening heroism for the ever eager cameras. (...)
The thing he learned to do better than anything else was present the image of The Man You Turn To When Things Get Tough. (Who can forget the Newsweek cover, “Can This Man Save Iraq?” with a photo of General Petraeus looking very Princeton-educated in his Westy-starched fatigues?) He was so good at it that he conned the news media into thinking he was the most remarkable general officer in the last 40 years, and, by playing hard to get, he conned the political establishment into thinking that he could morph into Ike Part Deux and might one day be persuaded to lead a moribund political party back to the White House.
The problem was that he hadn’t led his own Army to win anything even approximating a victory in either Iraq or Afghanistan. It’s not just General Petraeus. The fact is that none of our generals have led us to a victory since men like Patton and my grandfather, Lucian King Truscott Jr., stormed the beaches of North Africa and southern France with blood in their eyes and military murder on their minds.
Those generals, in my humble opinion, were nearly psychotic in their drive to kill enemy soldiers and subjugate enemy nations. Thankfully, we will probably never have cause to go back to those blood-soaked days. But we still shouldn’t allow our military establishment to give us one generation after another of imitation generals who pretend to greatness on talk shows and photo spreads, jetting around the world in military-spec business jets.
by Lucian K. Truscott IV, NY Times | Read more:
Photo: TYWKIWDBI
Dry Cabins: Living the Dream
[ed. This is almost a rite of passage if you're a UAF student living the Alaska dream.]
The rinse water washes down six inches of pipe into a bucket beneath your sink. Dishes done, you carefully pick up the bucket-full of rancid waste water and inch outside, mindful not to slop any on the floor. You fling the water from your deck and it evaporates instantly into the air.
With the dishes done, you prepare to brave the cold for the bathroom, an outhouse 20 feet away. Hopefully there are no moose on the trail, but you grab your headlamp just in case.
You're living the "dry cabin" lifestyle, just like several thousand others in Fairbanks, an Alaska town known for its extreme climate and endless winters. It's also the epicenter of an unusual cultural phenomenon: Dry-cabin living, a.k.a, living without running water.
That means no plumbing.
No toilet.
No shower.
No kitchen faucets. These modern amenities are replaced by outhouses, five-gallon water jugs and trips to the laundromat.
Why would anyone live this way in one of America's coldest cities?
Dry cabin communities in Fairbanks are partially a product of geology – yes, you read that right. Patches of ground remain frozen year-round in the Interior; that permafrost presents builders with a lot of problems. You can’t dig into frozen ground, so installing septic and water systems becomes difficult if not impossible.
People turn to dry cabins instead. Some are drawn to dry-cabin living for the mystique that the lifestyle offers. Others gravitate toward dry cabins for economic reasons. Either way, it’s a life that offers rewards and challenges found only in Alaska.
by Laurel Andrews, Alaska Dispatch | Read more:
Photo: Leah Hill