[ed. Repost]
Saturday, February 28, 2015
The Dress That Melted The Internet
The mother of the bride wore white and gold. Or was it blue and black?
From a photograph of the dress the bride posted online, there was broad disagreement. A few days after the wedding last weekend on the Scottish island of Colonsay, a member of the wedding band was so frustrated by the lack of consensus that she posted a picture of the dress on Tumblr, and asked her followers for feedback.
“I was just looking for an answer because it was messing with my head,” said Caitlin McNeill, a 21-year-old singer and guitarist.
Within a half-hour, her post attracted some 500 likes and shares. The photo soon migrated to Buzzfeed and Facebook and Twitter, setting off a social media conflagration that few were able to resist.
As the debate caught fire across the Internet — even scientists could not agree on what was causing the discrepancy — media companies rushed to get articles online. Less than a half-hour after Ms. McNeill’s original Tumblr post, Buzzfeed posted a poll: “What Colors Are This Dress?” As of Friday afternoon, it had been viewed more than 28 million times. (White and gold was winning handily.) At its peak, more than 670,000 people were simultaneously viewing Buzzfeed’s post. Between that and the rest of Buzzfeed’s blanket coverage of the dress Thursday night, the site easily smashed its previous records for traffic. So did Tumblr.
Everyone, it seems, had an opinion. And everyone was convinced that he, or she, was right. (...)
In an era when just about everyone seems to be doing anything they can to ignite interest online, the great dress debate went viral the old-fashioned way. It just happened. (...)
At its center was a simple yet bedeviling mystery with an almost old-fashioned, trompe l’oeil quality: How could different people see the same article of clothing so differently? The simplicity of the debate, the fact that it was about something as universal as the color of a dress, made it all the more irresistible.
“This definitely felt like a special thing,” said Buzzfeed’s editor in chief, Ben Smith. “It sort of erased the line between web culture and real culture.”
by Jonathan Mahler, NY Times | Read more:
Images: New Yorker and Wired
Friday, February 27, 2015
William S. Burroughs, The Art of Fiction No. 36
Firecrackers and whistles sounded the advent of the New Year of 1965 in St. Louis. Stripteasers ran from the bars in Gaslight Square to dance in the street when midnight came. Burroughs, who had watched television alone that night, was asleep in his room at the Chase Park Plaza Hotel, St. Louis's most elegant.
At noon the next day he was ready for the interview. He wore a gray lightweight Brooks Brothers suit with a vest, a blue-striped shirt from Gibraltar cut in the English style, and a deep-blue tie with small white polka dots. His manner was not so much pedagogic as didactic or forensic. He might have been a senior partner in a private bank, charting the course of huge but anonymous fortunes. A friend of the interviewer, spotting Burroughs across the lobby, thought he was a British diplomat. At the age of fifty, he is trim; he performs a complex abdominal exercise daily and walks a good deal. His face carries no excess flesh. His expression is taut, and his features are intense and chiseled. He did not smile during the interview and laughed only once, but he gives the impression of being capable of much dry laughter under other circumstances. His voice is sonorous, its tone reasonable and patient; his accent is mid-Atlantic, the kind of regionless inflection Americans acquire after many years abroad. He speaks elliptically, in short, clear bursts.
On the dresser of his room sat a European transistor radio; several science fiction paperbacks; Romance, by Joseph Conrad and Ford Madox Ford; The Day Lincoln Was Shot, by Jim Bishop; and Ghosts in American Houses, by James Reynolds. A Zeiss Ikon camera in a scuffed leather case lay on one of the twin beds beside a copy of Field & Stream. On the other bed were a pair of long shears, clippings from newspaper society pages, photographs, and a scrapbook. A Facit portable typewriter sat on the desk, and gradually one became aware that the room, although neat, contained a great deal of paper.
Burroughs smoked incessantly, alternating between a box of English Ovals and a box of Benson & Hedges. As the interview progressed, the room filled with smoke. He opened the window. The temperature outside was seventy degrees, the warmest New Year's Day in St. Louis's history; a yellow jacket flew in and settled on the pane. The bright afternoon deepened. The faint cries of children rose up from the broad brick alleys in which Burroughs had played as a boy. (...)
INTERVIEWER
When and why did you start to write?
BURROUGHS
I started to write in about 1950; I was thirty-five at the time; there didn't seem to be any strong motivation. I simply was endeavoring to put down in a more or less straightforward journalistic style something about my experiences with addiction and addicts.
INTERVIEWER
Why did you feel compelled to record these experiences?
BURROUGHS
I didn't feel compelled. I had nothing else to do. Writing gave me something to do every day. I don't feel the results were at all spectacular. Junky is not much of a book, actually. I knew very little about writing at that time.
INTERVIEWER
Where was this?
BURROUGHS
In Mexico City. I was living near Sears, Roebuck, right around the corner from the University of Mexico. I had been in the army four or five months and I was there on the GI Bill, studying native dialects. I went to Mexico partly because things were becoming so difficult with the drug situation in America. Getting drugs in Mexico was quite easy, so I didn't have to rush around, and there wasn't any pressure from the law.
INTERVIEWER
Why did you start taking drugs?
BURROUGHS
Well, I was just bored. I didn't seem to have much interest in becoming a successful advertising executive or whatever, or living the kind of life Harvard designs for you. After I became addicted in New York in 1944, things began to happen. I got in some trouble with the law, got married, moved to New Orleans, and then went to Mexico.
INTERVIEWER
There seems to be a great deal of middle-class voyeurism in this country concerning addiction, and in the literary world, downright reverence for the addict. You apparently don't share these points of view.
BURROUGHS
No, most of it is nonsense. I think drugs are interesting principally as chemical means of altering metabolism and thereby altering what we call reality, which I would define as a more or less constant scanning pattern. (...)
INTERVIEWER
The visions of drugs and the visions of art don't mix?
BURROUGHS
Never. The hallucinogens produce visionary states, sort of, but morphine and its derivatives decrease awareness of inner processes, thoughts, and feelings. They are painkillers, pure and simple. They are absolutely contraindicated for creative work, and I include in the lot alcohol, morphine, barbiturates, tranquilizers—the whole spectrum of sedative drugs. As for visions and heroin, I had a hallucinatory period at the very beginning of addiction, for instance, a sense of moving at high speed through space. But as soon as addiction was established, I had no visions—vision—at all and very few dreams. (...)
INTERVIEWER
You regard addiction as an illness but also a central human fact, a drama?
BURROUGHS
Both, absolutely. It's as simple as the way in which anyone happens to become an alcoholic. They start drinking, that's all. They like it, and they drink, and then they become alcoholic. I was exposed to heroin in New York—that is, I was going around with people who were using it; I took it; the effects were pleasant. I went on using it and became addicted. Remember that if it can be readily obtained, you will have any number of addicts. The idea that addiction is somehow a psychological illness is, I think, totally ridiculous. It's as psychological as malaria. It's a matter of exposure. People, generally speaking, will take any intoxicant or any drug that gives them a pleasant effect if it is available to them. In Iran, for instance, opium was sold in shops until quite recently, and they had three million addicts in a population of twenty million. There are also all forms of spiritual addiction. Anything that can be done chemically can be done in other ways, that is, if we have sufficient knowledge of the processes involved. Many policemen and narcotics agents are precisely addicted to power, to exercising a certain nasty kind of power over people who are helpless. The nasty sort of power: white junk, I call it—rightness; they're right, right, right—and if they lost that power, they would suffer excruciating withdrawal symptoms. The picture we get of the whole Russian bureaucracy, people who are exclusively preoccupied with power and advantage, this must be an addiction. Suppose they lose it? Well, it's been their whole life.
by Conrad Knickerbocker, Paris Review | Read more:
Image: via:
What Long-Distance Trains Teach Us About Public Space in America
"What people don’t like about the train is the time lapse. People don’t have time to tie their own shoes these days.” Trent, a fellow passenger on Amtrak’s Sunset Limited from New Orleans to Los Angeles, was philosophizing about the train. Trent is a middle-aged African-American man from California with whom I struck up a conversation in the observation car, which, for those of you who aren’t versed in the lingo of the rails, is the living room of a long-distance train, heavily windowed and designed for friendly interaction. Rolling through the desert, Trent and I talked for more two hours about family, spirituality, and all the other things that come up when you have opted to ride across the country with strangers and without WiFi.
“People [usually] just want to get from point A to point B as quickly as possible. We don’t give ourselves the chance to be in the moment,” Trent says. “Whether it’s good or bad, you grow. It’s about experiences.” He was describing something special about the long-distance train: It is a place to slow down and experience the present. (...)
Michel Foucault coined the term “heterotopia” to describe what he called “placeless places,” autonomous zones where societal rules are reinterpreted — like the train. (...)
The physical qualities that help to facilitate this sense of connection are human-scale design, a clean and safe environment, and an aesthetic that is straightforward and not overly fanciful. The dimensions of the car make it (generally speaking) cozy and comfortable, but spacious enough that you aren’t on top of the person seated next to you. (When people are too physically close they tend to retreat emotionally and mentally, as anyone who has ever ridden the 1 train during rush hour in Manhattan can attest.)
That long-distance trains aren’t designed with one specific aesthetic, demographic or psychographic in mind means that the ride is more about what’s unfolding within the space rather than the materiality of the car. It also frames the passing landscape in a way that makes it easy to use as a conversation starter. This follows the concept of “triangulation,” which William H. Whyte, a famous public space researcher and advocate, coined to describe a third element that gives people something easy to talk about.
Another important element encouraging interaction is what I will call the “together alone” factor. Riders are in the same space — and apart from everything and everyone else — for an extended period of time. Being in a shared physical space that’s also outside of one’s normal environment for an extended duration facilitates a special sense of focus and an enhanced sense of accountability, which can lead to conversations we wouldn’t normally have with strangers. (Online, the standard for conversation is a different ball game but we aren’t talking about that here.) Democratic theorists since the Ancient Greeks have celebrated public discourse. But where in contemporary offline America does this occur? Housing policies have segregated us by race, class and political leanings. It is increasingly difficult to have open, face-to-face conversations about important topics. Yet on train rides I observed conversations between total strangers about race, religion, sexuality and other taboo topics. Unlike the flame wars of the Internet, these conversations were civil.
by Danya Sherman, Next City | Read more:
Image: Nikki Yanofsky, YouTube
Why 40-Year-Old Tech Is Still Running America’s Air Traffic Control
On Friday, September 26, 2014, a telecommunications contractor named Brian Howard woke early and headed to Chicago Center, an air traffic control hub in Aurora, Illinois, where he had worked for eight years. He had decided to get stoned and kill himself, and as his final gesture he planned to take a chunk of the US air traffic control system with him.
Court records say Howard entered Chicago Center at 5:06 am and went to the basement, where he set a fire in the electronics bay, sliced cables beneath the floor, and cut his own throat. Paramedics saved Howard's life, but Chicago Center, which controls air traffic above 10,000 feet for 91,000 square miles of the Midwest, went dark. Airlines canceled 6,600 flights; air traffic was interrupted for 17 days. Howard had wanted to cause trouble, but he hadn't anticipated a disruption of this magnitude. He had posted a message to Facebook saying that the sabotage “should not take a large toll on the air space as all comms should be switched to the alt location.” It's not clear what alt location Howard was talking about, because there wasn't one. Howard had worked at the center for nearly a decade, and even he didn't know that.
At any given time, around 7,000 aircraft are flying over the United States. For the past 40 years, the same computer system has controlled all that high-altitude traffic—a relic of the 1970s known as Host. The core system predates the advent of the Global Positioning System, so Host uses point-to-point, ground-based radar. Every day, thousands of travelers switch their GPS-enabled smartphones to airplane mode while their flights are guided by technology that predates the Speak & Spell. If you're reading this at 30,000 feet, relax—Host is still safe, in terms of getting planes from point A to point B. But it's unbelievably inefficient. It can handle a limited amount of traffic, and controllers can't see anything outside of their own airspace—when they hand off a plane to a contiguous airspace, it vanishes from their radar.
The FAA knows all that. For 11 years the agency has been limping toward a collection of upgrades called NextGen. At its core is a new computer system that will replace Host and allow any controller, anywhere, to see any plane in US airspace. In theory, this would enable one air traffic control center to take over for another with the flip of a switch, as Howard seemed to believe was already possible. NextGen isn't vaporware; that core system was live in Chicago and the four adjacent centers when Howard attacked, and this spring it'll go online in all 20 US centers. But implementation has been a mess, with a cascade of delays, revisions, and unforeseen problems. Air traffic control can't do anything as sophisticated as Howard thought, and unless something changes about the way the FAA is managing NextGen, it probably never will.
This technology is complicated and novel, but that isn't the problem. The problem is that NextGen is a project of the FAA. The agency is primarily a regulatory body, responsible for keeping the national airspace safe, and yet it is also in charge of operating air traffic control, an inherent conflict that causes big issues when it comes to upgrades. Modernization, a struggle for any federal agency, is practically antithetical to the FAA's operational culture, which is risk-averse, methodical, and bureaucratic. Paired with this is the lack of anything approximating market pressure. The FAA is the sole consumer of the product; it's a closed loop.
The first phase of NextGen is to replace Host with the new computer system, the foundation for all future upgrades. The FAA will finish the job this spring, five years late and at least $500 million over budget. Lockheed Martin began developing the software for it in 2002, and the FAA projected that the transition from Host would be complete by late 2010. By 2007, the upgraded system was sailing through internal tests. But once installed, it was frighteningly buggy. It would link planes to flight data for the wrong aircraft, and sometimes planes disappeared from controllers' screens altogether. As timelines slipped and the project budget ballooned, Lockheed churned out new software builds, but unanticipated issues continued to pop up. As recently as April 2014, the system crashed at Los Angeles Center when a military U-2 jet entered its airspace—the spy plane cruises at 60,000 feet, twice the altitude of commercial airliners, and its flight plan caused a software glitch that overloaded the system.
Even when the software works, air traffic control infrastructure is not prepared to use it. Chicago Center and its four adjacent centers all had NextGen upgrades at the time of the fire, so nearby controllers could reconfigure their workstations to see Chicago airspace. But since those controllers weren't FAA-certified to work that airspace, they couldn't do anything. Chicago Center employees had to drive over to direct the planes. And when they arrived, there weren't enough workstations for them to use, so the Chicago controllers could pick up only a portion of the traffic. Meanwhile, the telecommunications systems were still a 1970s-era hardwired setup, so the FAA had to install new phone lines to transfer Chicago Center's workload. The agency doesn't anticipate switching to a digital system (based on the same voice over IP that became mainstream more than a decade ago) until 2018. Even in the best possible scenario, air traffic control will not be able to track every airplane with GPS before 2020. For the foreseeable future, if you purchase Wi-Fi in coach, you're pretty much better off than the pilot.
by Sara Breselor, Wired | Read more:
Image: Valero Doval
'Pics or It Didn’t Happen'
Our social networks have a banality problem. The cultural premium now placed on recording and broadcasting one’s life and accomplishments means that Facebook timelines are suffused with postings about meals, workouts, the weather, recent purchases, funny advertisements, the milestones of people three degrees removed from you. On Instagram, one encounters a parade of the same carefully distressed portraits, well-plated dishes and sunsets gilded with smog. Nuance, difference, and complexity evaporate as one scrolls through these endless feeds, vaguely hoping to find something new or important but mostly resigned to variations on familiar themes.
In a digital landscape built on attention and visibility, what matters is not so much the content of your updates but their existing at all. They must be there. Social broadcasts are not communications; they are records of existence and accumulating metadata. Rob Horning, an editor at the New Inquiry, once put it in tautological terms: “The point of being on social media is to produce and amass evidence of being on social media.” This is further complicated by the fact that the feed is always refreshing. Someone is always updating more often or rising to the top by virtue of retweets, reshares, or some opaque algorithmic calculation. In the ever-cresting tsunami of data, you are always out to sea, looking at the waves washing ashore. As the artist Fatima Al Qadiri has said: “There’s no such thing as the most recent update. It immediately becomes obsolete.”
Why, then, do we do it? If it’s so easy to become cynical about social media, to see amid the occasionally illuminating exchanges or the harvesting of interesting links (which themselves come in bunches, in great indigestible numbers of browser tabs) that we are part of an unconquerable system, why go on? One answer is that it is a byproduct of the network effect: the more people who are part of a network, the more one’s experience can seem impoverished by being left out. Everyone else is doing it. A billion people on Facebook, hundreds of millions scattered between these other networks – who wants to be on the outside? Who wants to miss a birthday, a friend’s big news, a chance to sign up for Spotify, or the latest bit of juicy social intelligence? And once you’ve joined, the updates begin to flow, the small endorphin boosts of likes and re-pins becoming the meagre rewards for all that work. The feeling of disappointment embedded in each gesture, the sense of “Is this it?”, only advances the process, compelling us to continue sharing and participating.
The achievement of social-media evangelists is to make this urge – the urge to share simply so that others might know you are there, that you are doing this thing, that you are with this person – second nature. This is society’s great phenomenological shift, which, over the last decade, has occurred almost without notice. Now anyone who opts out, or who feels uncomfortable about their participation, begins to feel retrograde, Luddite, uncool. Interiority begins to feel like a prison. The very process of thinking takes on a kind of trajectory: how can this idea be projected outward, towards others? If I have a witty or profound thought and I don’t tweet or Facebook it, have I somehow failed? Is that bon mot now diminished, not quite as good or meaningful as it would be if laid bare for the public? And if people don’t respond – retweet, like, favourite – have I boomeranged back again, committing the greater failure of sharing something not worth sharing in the first place? After all, to be uninteresting is a cardinal sin in the social-media age. To say “He’s bad at Twitter” is like saying that someone fails to entertain; he won’t be invited back for dinner.
In this environment, interiority, privacy, reserve, introspection – all those inward-looking, quieter elements of consciousness – begin to seem insincere. Sharing is sincerity. Removing the mediating elements of thought becomes a mark of authenticity, because it allows you to be more uninhibited in your sharing. Don’t think, just post it. “Pics or it didn’t happen” – that is the populist mantra of the social networking age. Show us what you did, so that we may believe and validate it.
by Jacob Silverman, The Guardian | Read more:
Image: Peter Macdiarmid/PA
What It Means to be Made in Italy
My Italian has gotten good enough that I can understand pretty much everything the locals say to me. The only words I consistently miss are the English words that they insert into conversation like french fries stuck in a spaghetti carbonara. WTF is “Nike” when it rhymes with “hike”? “Levi’s” when it rhymes with “heavies”? “Ee Red Hot Keelee Pepper?” But one English phrase comes up so often in conversation, at least within the rag trade, that I can pick it up on the first take: “Made In Italy.”
Cosa Vuol Dire “Made In Italy”?
To understand the meaning of “Made In Italy,” you have to go back to the genesis of the Italian nation, in the second half of the 19th century. Before that, Italy was a geographic concept, but not a political or cultural one. There was no real sense of an “Italian people” in the same way as there was already for the Germans, who formed a nation around the same time. Italy became one country not through collaboration, but through conquest by the Piedmont in the far north, which might as well have been Sweden as far as many Italians were concerned. If you think of Italy as a boot, the Piedmont would be the knee. A knee the rest of the peninsula would feel at their throats.
Citizens of the newly formed Italian state had little shared history, so newly-crowned propagandists created one, often relying on Roman iconography. Over the following decades, nationalistic myths hypertrophied into fascism - also largely a Northern phenomenon. Italy’s defeat in World War II broke this fever, but at a huge cost. The War was, for Italy, also a civil war, mostly pitting North against South, breaking open all the fissures that had been plastered over at the nation’s birth.
Two industries recreated Italian identity following the war - the film industry, and the fashion industry. Film helped the country understand its experience with the war and the poverty that followed. Fashion gave Italians a new nationalistic myth. Its appeals were more to the artistic achievements of the Italian Renaissance than the empire-building of the Roman era, and it helped that the industry’s first successes were in Tuscany, birthplace of Michelangelo. The Sala Bianca in the Pitti Palace hosted the first Italian fashion show in 1951, as well as Brioni’s men’s fashion show, famously the first of its kind, in 1952. Italian designers were able to capture something of the uniquely Italian approach to luxury and craft that had eluded the stuffy couturiers and tailors of Paris and Savile Row. As post-war realist film gave way to Fellini’s surrealist fantasies, Marcello Mastroianni became the guy everyone wanted to look, dress, and act like. And he wore Italian suits.
Allure, but Insecure
By 1980, the industry had grown tremendously, but had become something different. It had mostly moved to Milan, the industrial behemoth of the North. And it had begun to shift its focus from brands like Brioni to emerging giants like Armani and Ferré. It was at this point that the “Made In Italy” campaign began, with the ambitious goal of branding an entire country. As one politico at Pitti’s “Opening Ceremony” said this year, “‘Made In Italy’ is not just about selling fashion - it’s about selling Italian quality of life.” “Made In Italy” was intended to convey more than just the country of origin: elegance, sophistication, craftsmanship - as if Leonardo da Vinci himself had blessed every stitch.
The campaign has been a massive success. Armani remains one of the most valuable brands in all of fashion. Gucci, Prada, and Zegna aren’t far behind. The manufacturing infrastructure that supports these brands is now also used by brands from Huntsman to Tom Ford to Ralph Lauren Purple Label, all of which are Made In Italy.
But the future is uncertain. At the Pitti’s Opening Ceremony, politician after politician announced their full support for the Italian fashion industry, for Pitti as a trade show, and their belief in the enduring allure of Italian luxury. Each one pledged a re-investment in “Made In Italy”. Which is what you do when you’re worried that a good idea’s time is running out.
by David Isle, Styleforum | Read more:
Image: uncredited
Thursday, February 26, 2015
Regulators Approve Tougher Rules for Internet Providers
[ed. Well, until the next administration anyway. See also: Brief History of the Internet.]
Internet activists declared victory over the nation's big cable companies Thursday, after the Federal Communications Commission voted to impose the toughest rules yet on broadband service to prevent companies like Comcast, Verizon and AT&T from creating paid fast lanes and slowing or blocking web traffic.
The 3-2 vote ushered in a new era of government oversight for an industry that has seen relatively little. It represents the biggest regulatory shake-up to telecommunications providers in almost two decades.
The new rules require that any company providing a broadband connection to your home or phone must act in the "public interest" and refrain from using "unjust or unreasonable" business practices. The goal is to prevent providers from striking deals with content providers like Google, Netflix or Twitter to move their data faster.
"Today is a red-letter day for Internet freedom," said FCC Chairman Tom Wheeler, whose remarks at Thursday's meeting frequently prompted applause by Internet activists in the audience.
President Barack Obama, who had come out in favor of net neutrality in the fall, portrayed the decision as a victory for democracy in the digital age. In an online letter, he thanked the millions who wrote to the FCC and spoke out on social media in support of the change.
"Today's FCC decision will protect innovation and create a level playing field for the next generation of entrepreneurs - and it wouldn't have happened without Americans like you," he wrote.
Verizon saw it differently, using the Twitter hashtag #ThrowbackThursday to draw attention to the FCC's reliance on 1934 legislation to regulate the Internet. Likewise, AT&T suggested the FCC had damaged its reputation as an independent federal regulator by embracing such a liberal policy.
"Does anyone really think Washington needs yet another partisan fight? Particularly a fight around the Internet, one of the greatest engines of economic growth, investment and innovation in history?" said Jim Cicconi, AT&T's senior executive vice president for external and legislative affairs.
Net neutrality is the idea that websites or videos load at about the same speed. That means you won't be more inclined to watch a particular show on Amazon Prime instead of on Netflix because Amazon has struck a deal with your service provider to load its data faster.
For years, providers mostly agreed not to pick winners and losers among Web traffic because they didn't want to encourage regulators to step in and because they said consumers demanded it. But that started to change around 2005, when YouTube came online and Netflix became increasingly popular. On-demand video began hogging bandwidth, and evidence surfaced that some providers were manipulating traffic without telling consumers.
In 2010, the FCC enacted open Internet rules, but the agency's legal approach was eventually struck down in the courts. The vote Thursday was intended by Wheeler to erase any legal ambiguity by no longer classifying the Internet as an "information service" but a "telecommunications service" subject to Title II of the 1934 Communications Act.
That would dramatically expand regulators' power over the industry and hold broadband providers to the higher standard of operating in the public interest.
by Anne Flaherty, AP | Read more:
Image: uncredited via:
Blogger Porn Ban – Google's Arbitrary Prudishness is Attacking the Integrity of the Web
[ed. This post brought to you by Blogger. See also: Silicon Valley's War on Sex Continues.]
Google has steadily been cutting down on adult-oriented material hosted on Blogger, its blogging platform, over the last few years. Previously, bloggers could freely post “images or videos that contain nudity or sexual activity,” albeit behind a warning screen that Blogger implemented in 2013.
Then, Blogger said “censoring this content is contrary to a service that bases itself on freedom of expression”, so bloggers rightly assumed that they would be free to continue to post adult content.
But in a huge U-turn, Google has changed its position and decided that as of 23 March, there will be no explicit material allowed on Blogger unless it offers “public benefit, for example in artistic, educational, documentary, or scientific contexts” – all of which will be determined by Google. Quite how they will do that has not been made clear.
Anything else that does not fall into this category will be restricted to private-only viewing, where only people who have been invited by the blog’s creator will be able to see them; it won’t appear in search results.
This is like having a public library where all the shelves are empty and all the books imperceptible to readers, and authors are required to stand there in person, handing out copies of their work to those hoping to read it. What Google is doing, in reality, is making these blogs invisible. It effectively kills them off.
Some people might read this and think: “Well, Google just doesn’t want to host porn for free any more, that’s why it’s bringing in these restrictions, what’s wrong with that?” To some extent, they’d have a point, because other blog platforms are available and if a user’s sole intent is to make money, then they’re a business and should pay for hosting, not expect to get it for free.
But this new policy has more far-reaching and long-term implications than just censorship and a loss of profit for those posting explicit content, and here’s an example of why: it breaks the internet.
My own personal blog (no explicit images, but graphic descriptions of sex) has had more than 8m readers over 11 years of being hosted on Blogger. If I was forced to make it private and invitation-only, there is no conceivable way that I could contact every single one of those readers and send them a password link to access it.
When I joined Blogger in 2004, I did more than just sign up to publishing a sex blog, I joined a community of people: other erotic writers, non-erotic writers, sex educators, feminist porn-makers, memoirists, political activists, journalists, photographers, news-sharers, comedians, artists, comic creators and more. A disparate bunch of people joined together by one thing in common: we all posted stuff on the internet and then shared it.
This network – indeed the Internet itself – is made up of links. You find a link, click through, and expect to arrive at a page containing some form of content, whether that be text, images, video, or audio files. From its inception, blogging has been about people sharing links; indeed, one of the UK’s first well-known blogs back in 1999 was the link-sharing LinkMachineGo.
By forcing blogs – any blogs, regardless of their content – to become private, it means the link to that blog will no longer work: people clicking through without a password would arrive on a non-existent page. Thousands of other bloggers and websites may have shared that blog’s link over some years, and as a result of this policy change, that link would effectively be dead. In essence, what this means is that a long-standing, interactive, supportive community will be killed off overnight.
by Zoe Margolis, The Guardian | Read more:
Image: Alamy
Wednesday, February 25, 2015
Kurt Vonnegut on the Shapes of Stories
[ed. Kurt Vonnegut: A Man Without a Country]
“The truth is, we know so little about life, we don’t really know what the good news is and what the bad news is.”
"Now let me give you a marketing tip. The people who can afford to buy books and magazines and go to the movies don’t like to hear about people who are poor or sick, so start your story up here [indicates top of the G-I axis]. You will see this story over and over again. People love it, and it is not copyrighted. The story is ‘Man in Hole,’ but the story needn’t be about a man or a hole. It’s: somebody gets into trouble, gets out of it again [draws line A]. It is not accidental that the line ends up higher than where it began. This is encouraging to readers. (...)
Now there’s a Franz Kafka story [begins line D toward bottom of G-I axis]. A young man is rather unattractive and not very personable. He has disagreeable relatives and has had a lot of jobs with no chance of promotion. He doesn’t get paid enough to take his girl dancing or to go to the beer hall to have a beer with a friend. One morning he wakes up, it’s time to go to work again, and he has turned into a cockroach [draws line downward and then infinity symbol]. It’s a pessimistic story. (...)
The question is, does this system I’ve devised help us in the evaluation of literature? Perhaps a real masterpiece cannot be crucified on a cross of this design. How about Hamlet? It’s a pretty good piece of work I’d say. Is anybody going to argue that it isn’t? I don’t have to draw a new line, because Hamlet’s situation is the same as Cinderella’s, except that the sexes are reversed.
His father has just died. He’s despondent. And right away his mother went and married his uncle, who’s a bastard. So Hamlet is going along on the same level as Cinderella when his friend Horatio comes up to him and says, ‘Hamlet, listen, there’s this thing up in the parapet, I think maybe you’d better talk to it. It’s your dad.’ So Hamlet goes up and talks to this, you know, fairly substantial apparition there. And this thing says, ‘I’m your father, I was murdered, you gotta avenge me, it was your uncle did it, here’s how.’
Well, was this good news or bad news? To this day we don’t know if that ghost was really Hamlet’s father. If you have messed around with Ouija boards, you know there are malicious spirits floating around, liable to tell you anything, and you shouldn’t believe them. Madame Blavatsky, who knew more about the spirit world than anybody else, said you are a fool to take any apparition seriously, because they are often malicious and they are frequently the souls of people who were murdered, were suicides, or were terribly cheated in life in one way or another, and they are out for revenge.
So we don’t know whether this thing was really Hamlet’s father or if it was good news or bad news. And neither does Hamlet. But he says okay, I got a way to check this out. I’ll hire actors to act out the way the ghost said my father was murdered by my uncle, and I’ll put on this show and see what my uncle makes of it. So he puts on this show. And it’s not like Perry Mason. His uncle doesn’t go crazy and say, ‘I-I-you got me, you got me, I did it, I did it.’ It flops. Neither good news nor bad news. After this flop Hamlet ends up talking with his mother when the drapes move, so he thinks his uncle is back there and he says, ‘All right, I am so sick of being so damn indecisive,’ and he sticks his rapier through the drapery. Well, who falls out? This windbag, Polonius. This Rush Limbaugh. And Shakespeare regards him as a fool and quite disposable.
You know, dumb parents think that the advice that Polonius gave to his kids when they were going away was what parents should always tell their kids, and it’s the dumbest possible advice, and Shakespeare even thought it was hilarious.
‘Neither a borrower nor a lender be.’ But what else is life but endless lending and borrowing, give and take?
‘This above all, to thine own self be true.’ Be an egomaniac!
Neither good news nor bad news. Hamlet didn’t get arrested. He’s prince. He can kill anybody he wants. So he goes along, and finally he gets in a duel, and he’s killed. Well, did he go to heaven or did he go to hell? Quite a difference. Cinderella or Kafka’s cockroach? I don’t think Shakespeare believed in a heaven or hell any more than I do. And so we don’t know whether it’s good news or bad news.
I have just demonstrated to you that Shakespeare was as poor a storyteller as any Arapaho.
But there’s a reason we recognize Hamlet as a masterpiece: it’s that Shakespeare told us the truth, and people so rarely tell us the truth in this rise and fall here [indicates blackboard]. The truth is, we know so little about life, we don’t really know what the good news is and what the bad news is.
And if I die — God forbid — I would like to go to heaven to ask somebody in charge up there, ‘Hey, what was the good news and what was the bad news?’"
by Maria Popova, Brain Pickings | Read more:
Image: Kurt Vonnegut
How to Avoid Rape in Prison
The Marshall Project put together this short film where former inmates explain how to avoid being sexually assaulted while incarcerated.
Cities Don’t ♥ Us
[ed. See also: A Last Ditch Effort to Preserve the Heart of the Central District, and Fixing Pioneer Square.]
Each day in New York an army of street-sweeping trucks fans across the boroughs, purportedly inhaling the litter and waste that parks itself in curbside crevices along residential blocks. (Commercial districts are typically cleaned overnight.) If you’ve ever seen one of these massive contraptions you’ve probably wondered how much they truly clean—rather than just disperse the dirt and debris to another location for the next day’s job—and whether they do more environmental harm than good. And if you happen to be a car-owning New Yorker, the sound of a street sweeper even one or two blocks away can easily trigger a chain of panicked questions starting with “What time is it?” followed by “What day is it?” before landing on “What side of the street am I parked on?”
Alternate-side parking is a part of life in New York City. Both for New Yorkers and the city they live in, which relies on parking violation revenue to provide city services. Last year alone the city raked in $70 million from 1.2 million alternate-side parking violations at $55 a pop. Is it any wonder the Dept. of Sanitation fought a recent proposal that would allow car owners to re-park as soon as the street sweeper finished its work—rather than waste countless hours idling just to honor the official parking rules?
New York’s parking wars embody the modern city’s twisted relationship with its dwellers. Officials know street sweeping is largely ineffective and environmentally harmful. They know the fine bears no relation to the underlying offense (spare me the “social cost” argument) and targets working people living in the low-income outer-borough neighborhoods where parking is tight and cars are essential since mass transit is less available. They know that even if everyone earnestly tries to follow the law, there aren’t nearly enough spots for everyone during alternate-side parking times. They know the average urbanite has zero sympathy (disdain is more like it) for drivers even though the billions the city rakes in each year from bridge and tunnel tolls subsidize their train and bus commutes.
The suggestion that alternate-side parking fines exist for any reason other than revenue is vulgar and pretentious. Yet no official, elected or otherwise, will ever come out and admit alternate-side parking rules have been engineered to extract what amounts to a backdoor tax. Doing so would undermine the movement heralding the smart city as humanity’s redeemer. Cities, we are told again and again by the sustainability expert, are our destiny. (...)
Yet this is the crux of urbanism’s shell game. I don’t believe white urbanites are an inherently favored species. Stock images of attractive white couples may adorn the latest luxury condo, but not because urbanism has a special place in its heart for them. It’s economics, pure and simple. This, of course, contradicts the prevailing propaganda pumping out of government public relations offices across the country. The modern city cares about our health and wellness. It wants to be livable, sustainable, and walkable—vibrant. It wants to provide us with amenities and opportunities to experience culture, food, and community. According to the urbanist, the city wishes us to believe it can be both affordable and upscale. That it is invested in our children’s education, our safety, our careers. It is all things to all people. It has a heart.
And in fact, the city will sometimes tease us. The train you desperately need will arrive on time. There will be parking on the block, an open table at a new restaurant. Your favorite artist will be playing in the park, for free. In that moment, you will believe that things could not be any better than they are. You will feel the soothing satisfaction of having made the right choice in life. You will forget the infinite frustrations and heartaches you endure. You will rationalize your overpriced micro-dwelling as a social good. You will believe the life the city offers has been created to suit your unique and discriminating needs and tastes. And you will be wrong.
Here’s what really happens. First, a city hires a think tank to come up with a revitalization plan (pdf). That plan typically entails attracting young people with skills and education and retaining slightly older people with money and small children. Case in point: Washington, DC, in the early 2000s. As I’ve written elsewhere (pdf), in 2001 Brookings Institute economist Alice Rivlin published a report entitled “Envisioning a Future Washington” in which she mapped a revitalization plan that became a blueprint for gentrification. Urban planning and design firms are then hired to figure out how to make a city more desirable to these people. They conduct surveys, mine the data, and issue reports that award these people a flattering label like “creative class” and pronounce what they are looking for and how cities can attract/retain them. What we see happening in cities across America is the result: an unmitigated backlash against the era of sprawl and its accomplices—strip malls, subdivisions, and big-box chains—nothing more, nothing less.
Indeed, the true genius of urbanism is that the marketing campaigns promoting it have seized upon a search for meaning that traditional institutions can no longer satisfy, promising, if only implicitly, to fill the gap. Just look at the shimmering, stylized artist renditions accompanying every new upscale urban development. Rays of light from the heavens above shower the newly paved sidewalks, reflecting boundlessly off the glass buildings and brightening the lives of the multi-hued populace carrying fresh fruits and vegetables in their canvas tote bags.
Urbanism has become the secular religion of choice practiced with the enthusiasm of a Pentecostal tent revival, and the amenitized high-rise the new house of worship. It, after all, promises to fulfill or at least facilitate all of one’s needs while on Earth—with everything from rooftop community gathering space to sunlit Saturday morning yoga classes in the atrium.
This isn’t a new idea. In his celebrated and remarkably enduring 1949 essay, “Here is New York” (pdf), E.B. White addressed the spiritual life that a city offers:
Many people who have no real independence of spirit depend on the city’s tremendous variety and sources of excitement for spiritual sustenance and maintenance of morale … I think that although many persons are here for some excess of spirit (which cause them to break away from their small town), some, too, are here from a deficiency of spirit, who find in New York a protection, or an easy substitution.
White’s essay isolates the beauty of New York: It is a love letter. By all means, I invite you to be taken with it; I am. His city offers the range of rewards—sights and sounds and things to do. I marvel at the way White’s city operates, the way it manages to instill order and achieve artistry. In White’s capable hands, cities are humanity’s premier expression of civilization.
Urbanism, as well, has deftly aligned itself with human progress. It trumpets terms like “smart growth,” “sustainability,” “resilience,” and “scalability” to demonstrate both its concern with the quality of our lives and its progressive street cred. It champions urban “green space” as the solution for everything from obesity to asthma. But green spaces aren’t even parks. Often people can only use them during prescribed times and in particular ways—concerts, film screenings, seasonal outdoor markets. Moreover, they’re usually owned by a developer who likely built it as a concession for a sweet deal on the land. Yet this is what we celebrate? A paltry scrap of flora? Which just begs a question Thomas Frank posed in his Baffler essay skewering the “vibrancy” movement so many cities have staked their futures on:
… [W]hy is it any better to pander to the “creative class” than it is to pander to the traditional business class? Yes, one strategy uses “incentives” and tax cuts to get companies to move from one state to another, while the other advises us to emphasize music festivals and art galleries when we make our appeal to that exalted cohort. But neither approach imagines a future arising from something other than government abasing itself before the wealthy.
To be fair, in as much as cities can be said to have a consciousness, they fully comprehend their vulnerability. Urban planners know perfectly well that if the delicate balance between safety and prosperity is lost, then disinvestment and abandonment can strike. But they have also learned that people can be manipulated to identify with the city and thereby tolerate just about anything it dishes.
by Dax-Devlon Ross, TMN | Read more:
Image: Steven Guerrisi
Tuesday, February 24, 2015
Whistlin' Dixie
Driving south from the North, we tried to spot exactly where the real South begins. We looked for the South in hand-scrawled signs on the roadside advertising ‘Boil Peanut’, in one-room corrugated tin Baptist churches that are little more than holy sheds, in the crumbling plantation homes with their rose gardens and secrets. In the real South, we thought, ships ought to turn to riverboats, cold Puritanism to swampy hellfire, coarse industrialists with a passion for hotels and steel to the genteel ease of the cotton planter.
Most of what we believe about the South, wrote W.J. Cash in the 1930s, exists in our imagination. But, he wrote, we shouldn’t take this to mean that the South is therefore unreal. The real South, wrote Cash in The Mind of the South, exists in unreality. It is the tendency toward unreality, toward romanticism, toward escape, that defines the mind of the South.
The unreality that shaped the South took many forms. In the South, wrote Cash (himself a Southern man), is “a mood in which the mind yields almost perforce to drift and in which the imagination holds unchecked sway, a mood in which nothing any more seems improbable save the puny inadequateness of fact, nothing incredible save the bareness of truth.” Most people still believe, wrote Cash — but no more than Southerners themselves — in a South built by European aristocrats who erected castles from scrub. This imaginary South, wrote Cash, was “a sort of stagepiece out of the eighteenth century,” where gentlemen planters and exquisite ladies in farthingales spoke softly on the steps of their stately mansions. But well-adjusted men of position and power, he wrote, “do not embark on frail ships for a dismal frontier… The laborer, faced with starvation; the debtor, anxious to get out of jail; the apprentice, eager for a fling at adventure; the small landowner and shopkeeper, faced with bankruptcy and hopeful of a fortune in tobacco; the neurotic, haunted by failure and despair” — only these would go.
The dominant trait of the mind of the South, wrote Cash, was an intense individualism — an individualism the likes of which the world hadn’t seen since Renaissance days. In the backcountry, the Southern man’s ambitions were unbounded. For each who stood on his own little property, his individual will was imperial law. In the South, wrote Cash, wealth and rank were not so important as they were in older societies. “Great personal courage, unusual physical powers, the ability to drink a quart of whiskey or to lose one’s whole capital on the turn of a card without the quiver of a muscle — these are at least as important as possessions, and infinitely more important than heraldic crests.”
The average white Southern man (for this man was Cash’s main focus) was a romantic, but it was a romance bordering on bedlam. Any ordinary man tends to be a hedonist and a romantic, but take that man away from Old World traditions, wrote Cash, and stick him in the frontier wilds. Take away the skepticism and realism necessary for ambition and he falls back on imagination. His world becomes rooted in the fantastic, the unbelievable, and his emotions lie close to the surface. Life on the Southern frontier was harsh but free — it could make a man’s ego feel large.
The Southern landscape, too, had an unreal quality, “itself,” wrote Cash, “a sort of cosmic conspiracy against reality in favor of romance.” In this country of “extravagant color, of proliferating foliage and bloom, of flooding yellow sunlight, and, above all, perhaps, of haze,” the “pale blue fogs [that] hang above the valleys in the morning,” the outlines of reality blur. The atmosphere smokes “rendering every object vague and problematical.” A soft languor creeps through the blood and into the brain, wrote Cash, and the mood of the South becomes like a drunken reverie, where facts drift far away. “But I must tell you also that the sequel to this mood,” wrote Cash, “is invariably a thunderstorm. For days — for weeks, it may be — the land lies thus in reverie and then …”
The romanticism of the South, wrote W.J. Cash, was one that tended toward violence. It was a violence the Southern man often turned toward himself as much as those around him. The reverie turns to sadness and the sadness to a sense of foreboding and the foreboding to despair. Nerves start to wilt under the terrifying sun, questions arise that have no answers, and “even the soundest grow a bit neurotic.” When the rains break, as they will, and the South becomes a land of fury, the descent into unreality takes hold. Pleasure becomes sin, and all are stripped naked before the terror of truth.
by Stefany Anne Golberg, The Smart Set | Read more:
Image: uncredited