Saturday, May 28, 2016
The Blasé Surrealism of Haruki Murakami
[ed. I've read most of Murakami's books, including 1Q84. As a stylist he's probably as good as anyone, but whenever I finish one of his books I invariably throw my hands up and go "Is that it?" The Atlantic had a pretty good summary a while ago of what a new reader might expect (slightly edited here): Is the novel’s hero an adrift, feckless man in his mid-30s? Does he have a shrewd girl Friday who doubles as his romantic interest? Does the story begin with the inexplicable disappearance of a person close to the narrator? Is there a metaphysical journey to an alternate plane of reality? Are there gratuitous references to Western novels, films, and popular culture? Which Eastern European composer provides the soundtrack, and will enjoy skyrocketing CD sales in the months ahead—Bartók, Prokofiev, Smetana? Are there ominous omens, signifying nothing; dreams that resist interpretation; cryptic mysteries that will never be resolved? Check, check, check and check. In every book. I'm pretty much done with Murakami, which is too bad because I really do enjoy his writing.]
Three days ago, I began to read 1Q84 by Haruki Murakami. At 1,157 pages, 1Q84 is a mammoth novel, a veritable brick of a book, similar in proportion to the unfinished copy of Infinite Jest that currently rests about 15 feet away from me.

I blame the Internet for this. Or, rather, I blame my own frequent inability to resist the gravity of Twitter or Facebook or Reddit or Instagram or Snapchat. I don’t even want to think about how many books I could have read this year, had my time on social media been replaced by time spent reading books. But, I guess, that’s what I chose to do, so I’ll take responsibility for my actions. I did, at least, manage to read a whole hell of a lot of great essays on the Internet (see this list and this list). (...)
But I’ve digressed somewhat from the intended topic of this essay: Haruki Murakami. Murakami is one such wizard whose works surround me, swallow me, permeate my being, and transport me to worlds that feel no less real than the one in which I’m typing these words.
And so far, his magnum opus, 1Q84, is no exception. As I mentioned earlier, 1Q84 was arguably a poor choice of book for me to begin reading at this time. (...)
So, yes, this renewed tripod of habits is helping me to read more. But that’s not the only catalyst. I correctly suspected that Murakami could draw me back into book-reading because he is a writer who seems unfailingly to write irresistible page-turners. There are a few reasons for this that I can see.
For one, I swear he’s discovered the Platonic ideal combination of steady pacing and incomplete-yet-tantalizing information. Having completed four of his novels now, I can tell you that they always seem to revolve around some mystery that needs to be solved, and he does an excellent job of hinting at the grandiose and ominous nature of the mystery within the first few pages, while providing almost no information regarding the mystery’s actual attributes or dimensions. As the novels progress, he gradually reveals the mystery’s shocking, sprawling architecture and all-penetrating implications, dispensing just enough detail at just the right intervals to keep his readers (or me, at least) hopelessly ensnared.
The protagonists in Murakami’s novels tend to be ordinary, solitary people who suddenly find themselves wrapped up in some sort of epic, supernormal circumstances and must undertake a quest that is as much a quest to the heart of their true identity as it is a quest through the external world.
Another trademark of Murakami’s is something I call Blasé Surrealism (let me know if you think of a better name or if one already exists). Blasé Surrealism is characterized by melding mundane, humdrum realism with elements of surrealism and magic realism, while also incorporating (in Murakami’s case, at least) abstract, metaphysical commentary/comparisons intermittently throughout the story. Murakami’s stories are told in a matter-of-fact tone, as if everything that is happening is quite commonplace, and much of it is. But then he’ll nonchalantly introduce a portal to an alternate reality or include a line like, “Hundreds of butterflies flitted in and out of sight like short-lived punctuation marks in a stream of consciousness without beginning or end.”
He’s so casual, so blasé, about this that the flow of the story isn’t interrupted. Strange, surreal things happen in Murakami novels, but they seem completely natural because he acts like they are. The reader just goes right along with him. Thus Murakami manages effectively to marry normal and abnormal, real and surreal, conventional and magical. The best comparison I can make is to say that reading a Murakami novel is like being in a dream, in that things are clearly off, clearly not the way they typically are, and yet one doesn’t really notice or care, accepting things at face value. This makes for a uniquely mind-stirring, almost psychedelic, reading experience.
by Jordan Bates, Refine the Mind | Read more:
Image: 1Q84
Why I Bought a Chromebook Instead of a Mac
Chromebooks have surpassed sales of Mac laptops in the United States for the first time ever. And that doesn’t surprise me. Because roughly a year ago I made the same switch. Formerly a lifelong Mac user, I bought my first PC ever in the form of a Chromebook. And I’m never looking back.
Driven by the kind of passion that can only be found in the recently converted, I have aided and abetted friends in renouncing the sins of gluttony and pride uniquely found in the House of Apples. I have helped them find salvation with the Book of Chrome. Glory be the Kingdom of Chrome, for your light shines down upon us at a quarter of the price.
Make no mistake, I grew up on Macs. The first computer I remember my Dad bringing home when I was 5 years old was a Mac. Our family computer throughout the 1990s was a Mac. I used that Mac Performa throughout middle school, and it gave me treasured memories of playing Dark Forces and first discovering the internet. My high school graduation present from my parents in 2002 was my first Mac laptop. And I would continue to buy Mac desktops and laptops for the next decade and a half.
But something happened about a year ago when my MacBook Air was running on fumes. I looked at the Macs and gave my brain a half-second to entertain other options. I owned a functioning Mac desktop, which is my primary machine for heavy lifting. But I started to wonder why I wasn’t entertaining other options for my mobile machine.
The biggest consideration was price. When all was said and done, even the cheapest Mac laptop was going to set me back about $1,300 after taxes and AppleCare. And the siren song of a computer under $200 was calling my name. I got the Acer Chromebook with 2GB of RAM and a 16GB drive. It cost a shockingly low $173. And it was worth every penny. It even came with 100GB of Google Drive storage and twelve Gogo inflight internet passes. If you travel enough, the thing literally pays for itself in airline wifi access.
I rarely have to edit video and my photo manipulation needs are minimal. So when I walk down to the coffee shop to work, what the hell do I need to do that can’t be done on a Chromebook? Nothing, is the answer. Precisely nothing. And if you’re being totally honest with yourself, you should probably ask the same question.
Computers have essentially become disposable, for better and for worse. We’ve seen this trend in electronics over the past decade and it’s a great thing from the perspective of American consumers. More people can afford e-readers and tablets that now cost just $50. The mid-2000s dream of “one laptop per child,” which sought to bring the price of mobile computers down to $100, has become a reality thanks to Chromebooks and tablets made by companies like Acer, HP, and Amazon. And with more and more of our computing needs being met by web browsers alone, the average consumer is seeing less incentive to buy a Mac.
This trend should obviously terrify Apple. Computers have become fungible commodities, just like HDTVs before them. Which is to say that the average American doesn’t view a TV as high-tech that requires much homework these days. Any TV will do. Look at the screen and look at the price. Does it look like a TV? Yep. Is it cheap? Double yep. Whip out the credit card.
by Matt Novak, Gizmodo | Read more:
Image: Shutterstock/Acer
Our Nightmare
Most of us, I imagine, are not consistent political optimists or pessimists. We instead react – and usually overreact – to the short-term political trends before us, unable to look beyond the next election cycle and its immediate impact on ourselves and our political movements. I remember, immediately after the re-election of George W. Bush in 2004, a particularly voluble conservative blogger arguing that it was time for conservatives to “curb stomp” the left, to secure the final victory over liberals and Democrats. Four years later, of course, a very different political revolution appeared to be at hand, and some progressives made the same kind of ill-considered predictions. Neither permanent political victory has come to pass, with Democrats enjoying structural advantages in presidential elections and Republicans making hay with a well-oiled electoral machine in Congressional elections. How long those conditions persist, who can say.
But partisan politics are only a part of the actual political conditions that dictate our lives. Politics, culture, and economics fuse together to create our lived experience. And that experience is bound up in vague but powerful expectations about success, what it means, and who it’s for. There is a future that appears increasingly likely to me, a bleak future, and one which subverts traditional partisan lines. In this future, the meritocratic school of liberalism produces economic outcomes that would be at home with laissez faire economic conservatives, to the detriment of almost all of us.
The future that I envision amounts, depending on your perspective, to either a betrayal of the liberal dream or its completion. In this future, the traditional foundations of liberalism in economic justice and redistribution are amputated from the push for diversity in terms of race, gender, sexual identity, and related issues. (...)
Traditionally, both equality and diversity have been important to liberalism. There are obvious reasons for this connection. To begin with, the persistent inequality and injustice that afflict people of color and women in our society are powerfully represented in economic outcomes, with black and Hispanic Americans and women all suffering from clear and significant gaps in income, wealth, and similar measures of economic success. Economic justice is therefore inseparable from our efforts to truly combat racial and gender inequality. What’s more, the moral case for economic justice stems from the same foundations as the case against racism and sexism, a profound moral duty to provide for all people and to ensure that they live lives of material security and social dignity. The traditional liberal message has therefore been to emphasize the need for diverse institutions and economic justice as intertwined phenomena.
In recent years, however, the liberal imagination has become far less preoccupied with economic issues. Real-world activism retains its focus on economic outcomes, but the media that must function as an incubator of ideas, in any healthy political movement, has grown less and less interested in economic questions as such. Liberal publications devote far less ink, virtual or physical, to core issues of redistribution and worker power than they once did. Follow prominent liberals on Twitter, browse through the world of social justice Tumblr, read socially and culturally liberal websites. You might go weeks without reading the word “union.” Economic issues just aren’t central to the political conceptions of many younger liberals; they devote endless hours to decoding the feminism of Rihanna but display little interest in, say, a guaranteed minimum income or nationalizing the banks. Indeed, the mining of pop cultural minutiae for minimally plausible political content has become such a singular obsession within liberal media that it sometimes appears to be crowding out all other considerations. (...)
As The American Conservative’s Noah Millman once wrote, “the culture war turns politics into a question of identity, of tribalism, and hence narrows the effective choice in elections. We no longer vote for the person who better represents our interests, but for the person who talks our talk, sees the world the way we do, is one of us…. And it’s a good basis for politics from the perspective of economic elites. If the battle between Left and Right is fundamentally over social questions like abortion and gay marriage, then it is not fundamentally over questions like who is making a killing off of government policies and who is getting screwed.” The point is not that those culture war questions are unimportant, but that by treating them as cultural issues, our system pulls them up from their roots in economic foundations and turns them into yet another set of linguistic, symbolic problems. My argument, fundamentally, is that we face a future where strategic superficial diversity among our wealthy elites will only deepen the distraction Millman is describing. Such a future would be disastrous for most women and most people of color, but to many, would represent victory against racism and sexism.

by Fredrik deBoer | Read more:
Image: Getty
The Persian Rug May Not Be Long for This World
For centuries, Iran’s famed carpets have been produced by hand along the nomad trail in this region of high plains around the ancient city of Shiraz.
Sheep grazed in high mountain pastures and shorn only once a year produce a thick, long wool ideal for the tough thread used in carpet making.
But high-quality production of hand-woven carpets is no longer sustainable on the migration route of the nomads, said Hamid Zollanvari, one of Iran’s biggest carpet makers and dealers.
Instead, he had built a factory with 16 huge cooking pots, where on a recent cool, sunny spring day men in blue overalls stirred the pots with long wooden sticks, boiling and coloring the thread. As the colored waters bubbled, they looked like live volcanoes. The air smelled of sheep.
Another room was stacked with herbs. Eucalyptus leaves, indigo, black curd, turmeric, acorn shells and alum, ingredients for the different colors. “The Iranian carpet is 100 percent organic,” Mr. Zollanvari declared. “No machinery is involved.”
It is a scene that seems as ageless as the women who sit before the looms and weave the rugs, a process that can take as long as a year. And now even the factory is threatened. With six years of Western sanctions on the carpet business and punishing competition from rugs machine-made in China and India, these are hard times for the craft of Persian rug making. Many veterans wonder whether it can survive.
Over the centuries invaders, politicians and Iran’s enemies have left their mark on Iran’s carpets, said Prof. Hashem Sedghamiz, a local authority on carpets, sitting in the green courtyard of his restored Qajar-dynasty house in Shiraz. The outsiders demanded changes, started using chemicals for coloring and, most recently, imposed sanctions on the rugs. Those were blows, he said, damaging but not destructive.
But now, Mr. Sedghamiz said, the end is near. Ultimately, he said, it is modernity — that all-devouring force that is changing societies at breakneck speed — that is killing the Persian carpet, Iran’s pride and joy. “People simply are no longer interested in quality.”
Or in paying for it, he might have added. (...)
One thing is for sure: Iran’s carpets are among the most complex and labor-intensive handicrafts in the world.
It is on the endless green slopes of Fars Province, in Iran’s heartland, that the “mother of all carpets,” among the first in the world, is produced: the hand-woven nomadic Persian rug.
The process starts with around 1.6 million sheep grazed by shepherds from the nomadic Qashqai and Bakhtiari tribes, who produce that tough, long-fibered wool so perfect for carpets.
Women take over from there, making thread from the wool by hand, twisting it with their fingers. The finished thread is bundled and then dyed, using natural ingredients like pomegranate peels for deep red or wine leaves for green. After days of boiling on a wooden fire, the threads are dried by the cool winds that blow in from the north each afternoon.
Only then does the weaving start. Weavers, almost all of them women, spend several months to a year bent over a horizontally placed loom, stringing and knotting thousands of threads. Some follow established patterns, some create their own. When the carpet is finally done, it is cut, washed and put out in the sun to dry.
“It’s so time consuming, real hand work,” said Mr. Zollanvari, the carpet dealer. “A labor of love. And what does it cost?” he asked, before answering the question himself: “Almost nothing.” A 6-by-9-foot handwoven carpet costs around $400 in Shiraz, depending on the pattern and quality.
by Thomas Erdbrink, NY Times | Read more:
Image: Newsha Tavakolian
Six True Things About Dinner With Obama
Bun Cha is a typical Hanoi dish, decidedly everyday, and much loved by locals. To the consternation, no doubt, of the Secret Service (who were very cool about it), I was recently joined for dinner by the leader of the free world in a working-class joint near the old quarter of town for an upcoming episode of Parts Unknown.
by Anthony Bourdain, Li.st | Read more:
Image: uncredited
Friday, May 27, 2016
Lexington Lab Band
[ed. Repost. Best cover band, ever. See also: Kid Charlemagne, Life in the Fast Lane, Voodoo Child, and more.]
Late Night Pack by Vans (burger slip-ons shown above)
via:
[ed. See also: The Skate Mental x Nike SB Janoski “Pepperoni Pizza”]
Thursday, May 26, 2016
Trail Blazing
For a great long while, I thought there was only one kind of bud: whatever the fuck was available. The first time I smoked weed (and by “smoked weed” I mean “got high”), I was by most accounts pretty old — twenty-two. There had been two prior, rather desultory attempts. Once, at a bonfire on Repulse Bay Beach in Hong Kong when I was fifteen (Hong Kong is renowned for several things, but marijuana is not one of them), and another time in Texas, in the garage of some skater dude who was a year older, very hot, and had an identical twin I would’ve gladly settled for. I was green, the weed less so.
The first time I ever smoked successfully, I was working in Brooklyn, in the dead of winter, for profoundly exploitative wages. On the upside, the job happened to come with a young, chill boss who daily smoked two blunts wrapped in Vanilla Dutch Masters, and was fairly generous about sharing. The weed was dopey, didn’t have a name, and helped temper the indignation I felt trekking ninety minutes with two train changes and a bus ride — in the snow — to get to work. That was thirteen years ago.
By the time I moved to California in my thirties, weed was becoming legal, and I secured a cannabis card for dubious medical reasons and credible recreational ones. I learned there was not only a dazzling kaleidoscope of marijuana strains to choose from, but that, depending on my hankering, I could calibrate the weed to my desired vibe. What a time to be alive! No more feeling catatonic on a dinner date or hyper-social and chatty at the movie theater — I was on the path to finding The Perfect High. Not, like, One High to Rule Them All, but more like, the superlative vibe for every chill sitch in my life. The perfect high, of course, is largely subjective. We’re all physiological snowflakes with wildly differing operating systems. It’s why some people can have a grand time on edibles (me) but other people (my best friend Brooke) go bat-shit crazy, curling up in the fetal position until the mania subsides.
There are significant differences in how the body metabolizes the nearly one hundred different cannabinoids present in cannabis. Phytocannabinoids, found in cannabis flowers, are the chemical compounds that we respond to. (We also produce cannabinoids in our bodies — called endogenous cannabinoids or endocannabinoids). The cannabinoid system is old, I mean ancient; even worms respond to cannabinoids. It regulates a bunch of basic processes in our bodies — the immune system, memory, sleep and inflammation. We have cannabinoid receptors in all sorts of places.
You guys: we’re basically designed to get high.
Of all the cannabinoids in cannabis, THC (Tetrahydrocannabinol) and CBD (Cannabidiol) are the most famous, with the prevailing agreement that THC is heady and CBD is about the body high. But it’s the ninety-odd other cannabinoids acting in concert with them that make each high unique. This synergistic effect — the harmonious interplay, and the permutations of cannabinoids — is what makes each strain so darned mysterious. Elan Rae, the in-house cannabis expert for Marley Natural (the official Bob Marley cannabis brand) described the “entourage effect,” as it’s called, as “the combined effect of the cannabinoid profile. It doesn’t allow you to specifically ascribe an effect to one cannabinoid.” To wit: it’s not the amount of THC that gets you high, but how it reacts with a slew of other cannabinoids.
So while you may not know the exact chemistry of why you’re getting a certain type of high, it stands to reason that you can use guidelines to land in the neighborhood of the high you’re after. Think of it this way: you want a kicky, effervescent vinho verde for picnics or beaches, a jigger of bourbon for cozy autumnal nights, and nineteen pitchers of pre-mixed margarita if you want a pernicious hangover to cap off an evening of homicidal mania and sexual regret. Similarly, you’ll want a playful, low-impact Sativa for an al fresco activity, and an Indica or Indica-dominant hybrid for cuffin’ season.
And what exactly is the difference between Indica and Sativa? Within the Cannabis genus, they are two separate species. Pretty much everything we smoke is one, the other, or a hybrid of the two. Indicas are mellower and harder-hitting, perfect for Olympiad-level chilling after a long day. They’re often prescribed to people who have trouble sleeping or need to manage pain. The plant phenotypically tends to be shorter and bushier, with thicker individual leaves. Sativas, on the other hand, tend to be neurologically wavier, generally better for a daytime high. They make most of us feel alert, and they’re excellent for idea generation, provided you don’t fall into too many disparate wormholes. The flower looks like the platonic ideal of weed; it’s the kind you get on a pair of Huf socks, or embroidered onto a red, gold, and green hat.
To say there’s a weed for every occasion is an understatement. Like German nouns, there’s an exact cannabis strain to complement “sentimental pessimism” or the “anguish one feels when comparing the shortcomings of reality to an idealized state of the world.” Some weed is built for fucking, and other weed is for ugly-crying at 4AM at season two of BoJack Horseman because you relate way too hard to an anthropomorphized cartoon horse and his drinking problem. (No judgment.)
It is with this knowledge, clear eyes, and a full heart that I went to my reputable Los Angeles medical center (not to be confused with any old run-of-the-mill bongmonger) and secured eight strains to try: Platinum Jack, XJ13, Dutch Treat, Pineapple Express, J1, Gorilla Glue, Berner’s Cookies and NorCal OG.
by Mary H.K. Choi, The Awl | Read more:
Image: Retinafunk
Manpris
[ed. So, I learned a new term today: "Manpri" (also called Dude Capris). There seems to be quite a bit of angst involving the whole topic. See also: The Great Manpris Debate.]
We used to think that nothing could stump us. Show us a man trying to pass himself off as straight, for whatever reason, and we’d call him out faster than Ryan Seacrest could ask for a hair straightener.
Enter the Manpri.
There are two kinds of men who wear Manpris. Yard boys and gay hipsters. Ladies shouldn’t be hitting on either of these men anyway. Straight men don’t even know what Manpris are.
What are these ambiguous bottoms anyway? They are calf-length trousers, somewhere between a short and a pant. Your grandmother would probably call them pedal pushers. Regardless, they’re fucking fabulous. And they’re fucking gay.
by Anonymous, Straightmendont | Read more:
Image: Straightmendont

The Citizen-Soldier: Moral Risk and the Modern Military
I can’t say that I joined the military because of 9/11. Not exactly. By the time I got around to it, the main U.S. military effort had shifted to Iraq, a war I’d supported, though one which I never associated with al-Qaida or Osama bin Laden. But without 9/11, we might not have been at war there, and if we hadn’t been at war, I wouldn’t have joined.
It was a strange time to make the decision, or at least, it seemed strange to many of my classmates and professors. I raised my hand and swore my oath of office on May 11, 2005. It was a year and a half after Saddam Hussein’s capture. The weapons of mass destruction had not been found. The insurgency was growing. It wasn’t just the wisdom of the invasion that was in doubt, but also the competence of the policymakers. Then-Secretary of Defense Donald Rumsfeld had been proven wrong about almost every major post-invasion decision, from troop levels to post-war reconstruction funds. Anybody paying close attention could tell that Iraq was spiraling into chaos, and the once jubilant public mood about our involvement in the war, with over 70 percent of Americans in 2003 nodding along in approval, was souring. But the potential for failure, and the horrific cost in terms of human lives that failure would entail, only underscored for me why I should do my part. This was my grand cause, my test of citizenship. (...)
There’s a joke among veterans, “Well, we were winning Iraq when I was there,” and the reason it’s a joke is because to be in the military is to be acutely conscious of how much each person relies on the larger organization. In boot camp, to be called “an individual” is a slur. A Marine on his or her own is not a militarily significant unit. At the Basic School, the orders we were taught to write always included a lost Marine plan, which means every order given carries with it the implicit message: you are nothing without the group. The Bowe Bergdahl case is a prime example of what happens when one soldier takes it upon himself to find the war he felt he was owed—a chance to be like the movie character Jason Bourne, as Bergdahl explained on tapes played by the podcast Serial. The intense anger directed at Bergdahl from rank and file soldiers, an anger sometimes hard for a civilian public raised on notions of American individualism to comprehend, is the anger of a collective whose members depend on each other for their very lives directed toward one who, through sheer self-righteous idiocy, violated the intimate bonds of camaraderie. By abandoning his post in Afghanistan, Bergdahl made his fellow soldiers’ brutally hard, dangerous, and possibly futile mission even harder and more dangerous and more futile, thereby breaking the cardinal rule of military life: don’t be a buddy fucker. You are not the hero of this movie.
But a soldier doesn’t just rely on his squad-mates, or on the leadership of his platoon and company. There’s close air support, communications, and logistics. Reliable weapons, ammunition, and supplies.
The entire apparatus of war—all of it ultimately resting on American industry and on the tax dollars that each of us pays. “The image of war as armed combat merges into the more extended image of a gigantic labor process,” wrote Ernst Jünger, a German writer and veteran of World War I. After the Second World War Kurt Vonnegut would come to a similar conclusion, reflecting not only on the planes and crews, the bullets and bombs and shell fragments, but also where those came from: the factories “operating night and day,” the transportation lines for the raw materials, and the miners working to extract them. Think too hard about the front-line soldier, and you end up thinking about all that was needed to put him there.
Today, we’re still mobilized for war, though in a manner perfectly designed to ensure we don’t think about it too much. Since we have an all-volunteer force, participation in war is a matter of choice, not a requirement of citizenship, and those in the military represent only a tiny fraction of the country—what historian Andrew Bacevich calls “the 1 percent army.” So the average civilian’s chance of knowing any member of the service is correspondingly small.
Moreover, we’re expanding those aspects of warfighting that fly under the radar. Our drone program continues to grow, as does the special operations forces community, which has expanded from 45,600 special forces personnel in 2001 to 70,000 today, with further increases planned. The average American is even less likely to know a drone pilot or a member of a special ops unit—or to know much about what they actually do, either, since you can’t embed a reporter with a drone or with SEAL Team 6. Our Special Operations Command has become, in the words of former Lieutenant Colonel John Nagl, “an almost industrial-scale counterterrorism killing machine.”
Though it’s true that citizens do vote for the leaders who run this machine, we’ve absolved ourselves from demanding a serious debate about it in Congress. We’re still operating under a decade-old Authorization for Use of Military Force issued in the wake of 9/11, before some of the groups we’re currently fighting even existed, and it’s unlikely, despite attempts from Senators Tim Kaine (D-Va.) and Jeff Flake (R-Ariz.), that Congress will issue a new one any time soon. We wage war “with or without congressional action,” in the words of President Obama at his final State of the Union Address, which means that the American public remains insulated from considering the consequences. Even if they voted for the president ordering these strikes, there’s seemingly little reason for citizens to feel personally culpable when they go wrong.
It’s that sense of a personal stake in war that the veteran experiences viscerally, and which is so hard for the civilian to feel. The philosopher Nancy Sherman has explained post-war resentment as resulting from a broken contract between society and the veterans who serve. “They may feel guilt toward themselves and resentment at commanders for betrayals,” she writes, “but also, more than we are willing to acknowledge, they feel resentment toward us for our indifference toward their wars and afterwars, and for not even having to bear the burden of a war tax for over a decade of war. Reactive emotions, like resentment or trust, presume some kind of community—or at least are invocations to reinvoke one or convoke one anew.”
The debt owed them, then, is not simply one of material benefits. There’s a remarkable piece in Harper’s Magazine titled, “It’s Not That I’m Lazy,” published in 1946 and signed by an anonymous veteran, which argues, “There’s a kind of emptiness inside me that tells me that I’ve still got something coming. It’s not a pension that I’m looking for. What I paid out wasn’t money; it was part of myself. I want to be paid back in kind, in something human.”
That sounds right to me: “something human,” though I’m not sure what form it would take. When I first came back from Iraq, I thought it meant a public reckoning with the war, with its costs not just for Americans but for Iraqis as well. As time goes by, and particularly as I watch a U.S. presidential debate in which candidates have offered up carpet bombing, torture, and other kinds of war crimes as the answer to complex problems that the military has long since learned will only worsen if we attempt such simplistic and immoral solutions, I’ve given up on hoping that will happen anytime soon. If the persistence of U.S. military bases named after Confederate generals is any indication, it might not happen in my lifetime. The Holocaust survivor Jean Améry, considering Germany’s post-war rehabilitation, would conclude, “Society … thinks only about its continued existence.” Decades later Ta-Nehisi Coates, considering the difficulty, if not impossibility, of finding solutions for various historic tragedies, would write, “I think we all see our ‘theories and visions’ come to dust in the ‘starving, bleeding, captive land’ which is everywhere, which is politics.”
by Phil Klay, Brookings Institution | Read more:
Image: Reuters
Labels: Critical Thought, Government, history, Military, Politics, Security
Elevated Bus Swallows Cars and Straddles Roads
Imagine hovering over city streets at 40 miles per hour, zooming past congested traffic and sidewalks filled with pedestrians. That’s how the engineering company Transit Explore Bus wants to transport people with its Land Airbus—an electric elevated bus that straddles roads on specially made tracks.
The company unveiled a cute mini model of the Land Airbus at the recent International High-Tech Expo in Beijing. In the video, you can see cars enter the mouth of the bus’s cavernous temporary tunnel, and safely come out the other end. The proposed vehicle can span two roads and is elevated so cars taller than two meters (six-foot-seven) high can drive underneath, China Xinhua News reports in the video.
“The bus will save lots of road space,” Song Youzhou, chief engineer of the Land Airbus project, says in the video. “It has the same function as the subway, but costs only 16 percent of what a subway costs. Manufacturing and construction times are also much shorter than for the subway.”
by Lauren Young, Atlas Obscura | Read more:
Image: YouTube
I Have Met the Enemy, and It Is the Airlines
Summer is upon us, and we are facing important travel decisions. Such as who to blame when we get stuck in interminable airport lines.
So many options. There’s the government, but how many times can you complain about Congress in the course of a lifetime? There’s the public — air traffic up 12 percent since 2011. But really, people, don’t blame yourself.
Let’s pick a rant that’s good for you, good for me, good for the lines in security: Make the airlines stop charging fees for checked baggage.
Seems simple, doesn’t it? Plus, if you do manage to make it to your flight, these are the same people who will be announcing there’s a $3 fee if you want a snack.
The largest airlines charge $25 for the first checked bag, thus encouraging people to drag their belongings through the airport, clogging the X-ray lines and slowing the boarding process as everybody fights to cram one last rolling duffel into the overhead compartment.
The idea that travelers should be hit by an extra charge for, um, having luggage began in 2008, when the cost of fuel went through the roof. We understood the airlines’ pain, sort of. Maybe. But now fuel prices have fallen into the cellar. The airlines are taking in stupendous profits — last year nearly $26 billion after taxes, up from $2.3 billion in 2010.
Yet the baggage fees are still with us. In fact, they’ve gone up by about two-thirds. Last year, the nation’s airlines made more than $3.8 billion off what I believe it is fair to call a scam. It’s also an excellent way to make your prices look lower than they really are when people surf for the cheapest ticket, a number that never includes details like the special fees for bags, food, canceling a reservation, booking by phone, sitting in a minimally more comfortable emergency row or, in some cases, requesting a pillow.
Shouldn’t the airlines offer up the baggage fee as a token of solidarity with their miserable passengers? The idea has come up. Homeland Security Secretary Jeh Johnson asked the airlines to “consider possibly” this modest bow to air travel sanity. Two U.S. senators, Edward Markey of Massachusetts and Richard Blumenthal of Connecticut, wrote a letter to the airlines asking them to just drop the fees during the high-traffic summer months.
We pause now for the sound of silence and crickets chirping.
The airlines have maximized profits by making travel as miserable as possible. The Boeing Company found a way to cram 14 more seats into its largest twin-engine jetliner by reducing the size of the lavatories. Bloomberg quoted a Boeing official as reporting that “the market reaction has been good — really positive.” We presume the market in question does not involve the actual passengers.
by Gail Collins, NY Times | Read more:
Image: Robert Nickelsberg/Getty
Critique of Humanitarian Reason
Never have there been more refugees in the world than today: an estimated 45 million in total. So what is the current relationship between international law, emancipatory politics and the rights of the rightless?
On 16 February 2014, The New York Times Magazine ran an article entitled "Container City." "Container City" refers to the Kilis camp in southern Turkey, which houses 14,000 refugees from Syria. Protected by high gates and surrounded by barbed wire, Kilis from the outside shares features with many refugee camps all over the world that make them indistinguishable from prisons or criminal detention centres. Kilis houses its population in 2,053 identical containers, spread in neat rows. The pictures that accompany the article remind one of shipping containers at a harbour. Each container is a 23-by-10-foot trailer with three rooms and a colour TV with close to 1,000 channels, probably picking up programs from all the surrounding countries of the Mediterranean.
Yet there are some unique features of Kilis besides the cleanliness of its streets and the organization of proper electricity, water and sewage services which led one Syrian resident to refer to it as "a five star hotel." There are schools in the camp, sex-segregated according to the wishes of the Syrians; three grocery stores where refugees can buy supplies with a credit card; a beauty salon and a barbershop where refugees get free haircuts and other services; art workshops and gymnastics classes. But despite all this: "Nobody likes living there [...I]t is hard for us," said Basheer Alito, the section leader who was so effusive in his praise for the camps and the Turks. "Inside, we're unhappy. In my heart, it's temporary, not permanent."
The Kilis refugee camp is by now one of hundreds in dozens of countries around the world. A report by the United Nations High Commissioner for Refugees notes that by mid-2014 the number of refugees worldwide stood at the highest level on record, around 45 million; with no end in sight to the conflicts in places such as Syria, the Central African Republic and the Democratic Republic of the Congo, this number will only continue to increase. As the number of refugees has grown worldwide, not only has the number of camps grown as well, but the camps have ceased to be places where people are held temporarily; rather, they have become semi-permanent. The largest refugee camp in the world, Kenya's Dadaab, is 20 years old and houses 420,000 refugees. The Palestinian refugee camps in southern Lebanon are in many cases nearly 50 to 70 years old, depending on whether the refugee population was created in 1948 or 1967. The refugees who live in these camps, and who in some cases have spent their entire lives there, become PRSs, that is, people in a "protracted refugee situation."
Refugees, asylees, IDPs (internally displaced persons), PRSs, stateless persons: these are new categories of human beings created by an international state-system in turmoil, human beings who are subject to a special kind of precarious existence. Although they share with other "suffering strangers" the status of victimhood and become the objects of our compassion – or as the UNHCR report puts it, become "persons of concern" – their plight reveals the most fateful disjunction between so-called "human rights" – or "the rights of man", in the older locution – and "the rights of the citizen"; between the universal claims to human dignity and the specificities of indignity suffered by those who possess only human rights. From Hannah Arendt's famous discussion of the "right to have rights" in The Origins of Totalitarianism to Giorgio Agamben's homo sacer to Judith Butler's "precarious lives" and Jacques Rancière's call to "the enactment of rights", the asylum seeker, the stateless and the refugee have become metaphors as well as symptoms of a much deeper malaise in the politics of modernity.
Yet as political fatigue about internationalism has gripped the United States in the wake of the interventions in Afghanistan and Iraq, and President Obama's politics of caution in Syria has created further moral quagmires, we have moved from "the right to have rights" to the "critique of humanitarian reason." Didier Fassin, who for many years worked with Médecins Sans Frontières in a senior capacity, and to whom we owe this term, defines it as follows: "Humanitarian reason governs precarious lives: the lives of the unemployed and the asylum seeker, the lives of sick immigrants and people with AIDS, the lives of disaster victims and victims of conflict – threatened and forgotten lives that humanitarian government brings into existence by protecting and revealing them." Subtitled "A Moral History of the Present", Fassin's felicitous book signals a more widespread retreat from the politics of human rights, which began shortly after the US invasions of Afghanistan and Iraq, toward a denunciation of human rights, in the words of the Columbia historian Samuel Moyn, as an "antipolitics" that survived as a "moral utopia when political utopias died." Some sought to achieve, writes Moyn in his provocatively titled book The Last Utopia: Human Rights in History, "through a moral critique of politics the sense of pure cause that had once been sought in politics itself"; further, human rights substituted a "plausible morality for failed politics." Fassin himself is more careful and balanced than Moyn in his critique of human rights discourse and practice, but nonetheless both works, and the success they have enjoyed, document an important moment at least in the Zeitgeschichte of the United States' recent political culture.
This intellectual and political disillusionment was heralded even before Moyn's 2010 book. In a trenchant 2004 article entitled "Who Is the Subject of the Rights of Man?", written when the US wars in Afghanistan and Iraq were at their height, Jacques Rancière begins by noting how the Rights of Man, or in more contemporary language, Human Rights, which were rejuvenated by the dissident movements of Eastern Europe and the Soviet Union in the 1970s and '80s, became transformed in the first decade of the twenty-first century into "the rights of the rightless, of the populations hunted out of their homes and land and threatened by ethnic slaughter. They appeared more and more as the rights of the victims, the rights of those who were unable to enact any rights or even any claims in their name, so that eventually their rights had to be upheld by others, at the cost of shattering the edifice of International Rights, in the name of a new right to 'humanitarian interference' – which ultimately boiled down to the right to invasion." "Human rights, the rights of the rightless" became for Rancière the ideological scaffolding for "humanitarian reason" at best and for "humanitarian intervention" at worst.
This prevalent mood of disillusionment and cynicism among many concerning human rights and humanitarian politics is understandable; but it is not defensible. Developments in international law since 1948 have tried to give new legal meaning to "human dignity" and "human rights". Admittedly, these developments have in turn generated the paradoxes of "humanitarian reason", but the way to work through these paradoxes is not to turn against the jus gentium, the law of nations, of our world; instead, we need a new conceptualization of the relationship between international law and emancipatory politics, a new way of understanding how to negotiate the "facticity" and the "validity" of the law, including international human rights and humanitarian law, so as to create new vistas for the political.
by Seyla Benhabib, Eurozine | Read more:
Image: U.S. Dept. of State via Wikipedia
Labels: Critical Thought, Economics, Government, History, Politics, Psychology, Security
Wednesday, May 25, 2016
UDub Women Clinch National Golf Championship
Washington caps its first NCAA Women's title with some killer celebrations
[ed. The University of Washington Huskies women's golf team defeated UCLA on Tuesday and Stanford today for their first NCAA National Golf Championship. What a great effort, and a real nail-biter! Congratulations to everyone, especially Mary Lou Mulflur, their coach of 33 years.]