Tuesday, October 15, 2013


Frank Hazebroek, Nemo at 42nd Street, New York
via:

The End of the Nation-State?


Every five years, the United States National Intelligence Council, which advises the director of the Central Intelligence Agency, publishes a report forecasting the long-term implications of global trends. Earlier this year it released its latest report, “Alternative Worlds,” which included scenarios for how the world would look a generation from now.

One scenario, “Nonstate World,” imagined a planet in which urbanization, technology and capital accumulation had brought about a landscape where governments had given up on real reforms and had subcontracted many responsibilities to outside parties, which then set up enclaves operating under their own laws.

The imagined date for the report’s scenarios is 2030, but at least for “Nonstate World,” it might as well be 2010: though most of us might not realize it, “nonstate world” describes much of how global society already operates. This isn’t to say that states have disappeared, or will. But they are becoming just one form of governance among many.

A quick scan across the world reveals that where growth and innovation have been most successful, a hybrid public-private, domestic-foreign nexus lies beneath the miracle. These aren’t states; they’re “para-states” — or, in one common parlance, “special economic zones.”

Across Africa, the Middle East and Asia, hundreds of such zones have sprung up in recent decades. In 1980, Shenzhen became China’s first; now they blanket China, which has become the world’s second largest economy.

The Arab world has more than 300 of them, though more than half are concentrated in one city: Dubai. Beginning with the Jebel Ali Free Zone, which is today one of the world’s largest and most efficient ports, these zones now encompass finance, media, education, health care and logistics. Dubai is as much a dense set of internationally regulated commercial hubs as it is the most populous emirate of a sovereign Arab federation.

This complex layering of territorial, legal and commercial authority goes hand in hand with the second great political trend of the age: devolution.

In the face of rapid urbanization, every city, state or province wants to call its own shots. And they can, as nations depend on their largest cities more than the reverse.

Mayor Michael R. Bloomberg of New York City is fond of saying, “I don’t listen to Washington much.” But it’s clear that Washington listens to him. The same is true for mayors elsewhere in the world, which is why at least eight former mayors are now heads of state.

Scotland and Wales in Britain, the Basque Country and Catalonia in Spain, British Columbia in Canada, Western Australia and just about every Indian state — all are places seeking maximum fiscal and policy autonomy from their national capitals.

by Parag Khanna, NY Times |  Read more:
Image: Javier Jaén

The New Canon

When I was an undergrad, my professor would talk about stars and directors by showing us actual slides of them, all loaded up into the Don Draper “Carousel.” Clips were on actual film, with actual projectionists, or an assortment of badly edited VHS tapes. When a professor recommended a film, I’d go to the video store and rent it for 99 cents, the standard fee for classic movies. I never missed a screening, because it would be nearly impossible to find many of the films on my own, let alone a VCR that wasn’t in the common room at the end of my dorm floor. It was the good old analog days, when film and media studies was still nascent, the internet only barely past dial-up, and internet media culture as we know it limited to a healthy growth of BBS, listservs, and AOL chat rooms. It was also less than 15 years ago.

My four years in college coincided with dramatic changes in digital technology, specifically the rise of the (cheap) DVD and the personal computer DVD player. Before, cinephilia meant access to art house theaters or a VHS/television combination in addition to whatever computer you had. . . . By the time I graduated, most computers came standard with a DVD player and ethernet, if not wireless, connectivity. That fall, I signed up for Netflix. I envied those with TiVo. Two years later, growing hard-drive sizes and bandwidth facilitated the piracy culture that had theretofore mostly been limited to music. Then YouTube. Then streaming Netflix. Then Hulu. Then AppleTV. Then HBOGO. Or something like that.

Today, we live in a television culture characterized by cord-cutters and time-shifters. Sure, many, many people still appointment view or surf channels old school style. I know this. I also know people watch the local news. Yet as a 30-something member of the middle class, I catch myself thinking that my consumption habits — I subscribe to Netflix, Hulu Plus, and Full Cable; I still appointment view several shows — are somewhat typical.

I’m so wrong, but not in the way I might have expected. My students taught me that. They watch Netflix, and they watch it hard. They watch it at the end of the night to wind down from studying, they watch it when they come home tipsy, they binge it on a lazy Saturday afternoon. Most use their family’s subscription; others filch passwords from friends. It’s so widely used that when I told my Mad Men class that their only required text was a streaming subscription, only one student had to acquire one. (I realize we’re talking about students at a liberal arts college, but I encountered the same levels of access at state universities. As for other populations, I really don’t know, because Netflix won’t tell me (or anyone) who’s using it.)

Some students use Hulu, but never Hulu Plus — when it comes to network shows and keeping current, they just don’t care. For some super buzzy shows, like Game of Thrones and Girls, they pirate or find illegal streams. But as far as I can tell, the general sentiment goes something like this: if it’s not on Netflix, why bother?

by Anne Helen Petersen, LARB Blog |  Read more:
Image: Netflix

Kurt Solmssen, Red Centerboard
via:

Whatever Happened to Chores?

Recently, a close colleague sent me a flyer from a local children's shoe store that read: "Do you have a child who is interested in learning to tie their own shoes?" I confess that I sighed, thinking about how toddlers in most societies learn basic skills and self-care by routinely watching older siblings and other family members in the course of their everyday lives. They don't need a workshop on it.

Some note sensibly that expectations of children will vary according to what is needed to flourish in their respective societies. According to this logic, children in small-scale societies will learn subsistence skills and assist in tasks early in life. Post-industrial societies, by contrast, privilege the academic skills children need to grasp rapidly changing technologies, complex systems, and global influences on just about everything. Saddled with homework and extra-curricular activities, these children have little time to lend a hand at home or offer service to others in the community, the thinking goes.

In isolation this means-ends argument sounds reasonable. But in the context of the social and emotional fabric of middle-class families in many post-industrial societies, the weakened emphasis on children's practical contributions to the household warrants a second hearing. As widely noted, the hectic reality of middle-class families in the US and Europe involves two parents working, raising a family, and maintaining a home. At 5 pm, you are likely to find a parent (typically mom) at home exhausted from work and literally running from one task to another – homework help, food prep, laundry, tidying up, readying kids for bed, and maybe sneaking a peek at email messages.

Parents invest huge amounts of time (and money) to nurture children's interests, intervene whenever children face a problem big or small, and give children sole credit for accomplishments that required considerable parental involvement. Yet, these same parents garner little or no assistance in chores from their children in return. In our UCLA Sloan study of Los Angeles households, children ignored, resisted, or refused to respond to parents' appeals to help in 22 out of the 30 families observed. In the 8 families where children were cooperative, they were requested to do very little (see Fast-Forward Family: Home, Work and Relationships in Middle Class America).

This is a somewhat uniquely American phenomenon. Middle class parents in other prosperous nations are less tolerant of children's reluctance to do their part around the house. In Sweden, for example, middle class parents insist that each family member is responsible for cleaning up after themselves and keeping the house in order. Small children are expected to clean their dishes and rooms. Sweden's idea of a universal social welfare state begins in early childhood.

The problem in many American households is that parents place a high value on their children's right to pursue their individual desires. It's as if children's "rights" obscure children's obligations. Is it my imagination or has "duty" dropped out of the American child-rearing lexicon?

by Elinor Ochs, Guardian |  Read more:
Image: Emmanuel Dunand/AFP/Getty Images

The Inner Life of James Bond


It’s 1927 and Ian Fleming, age 19, climbs off a train in the small Tyrolean town of Kitzbühel. He’s under a cloud: you can almost see it, small and discolored, parked a foot or so above his head, intermittently shedding rain. Fleming moves gracefully, but there is a sense of encumbrance about him, a kind of private sluggishness or surliness of mood. In his face—austere brow, thick-lidded eyes, bruiser’s nose, prissy mouth—severity blends with the instincts of the pleasure-seeker, the lotus-eater: a sadist’s face, really. Fleming has been dispatched to these mountains by his mother (“M,” as he sometimes calls her) because he’s made a mess of his education back in England. Shuffled out of Eton for some small scandal, he has more recently exited, under his little cloud, the officer’s academy at Sandhurst. Now he is entering the maternally mandated care of a British couple named Forbes Dennis, progressive educators and acolytes of the Viennese psychiatrist Alfred Adler. During the next four years, under the guidance of the Forbes Dennises, Fleming will move from Kitzbühel to Munich to Geneva, inhaling as he goes the headiest drafts of High Europe: Rilke, Kafka, Arthur Schnitzler, and the fathers of psychoanalysis. In due course, he will receive written permission from Carl Jung to translate one of the great man’s lectures, a disquisition on the alchemist and doctor Paracelsus. And 26 years later, he’ll sit down in his Jamaican villa and type “The scent and smoke and sweat of a casino are nauseating at three in the morning.”

Was James Bond—neck-snapper, escape artist, serial shagger—the last repudiation of his creator’s cultural pedigree? Take that, fancy books; take that, whiskered shrinks. I, Ian Fleming, give you a hero almost without psychology: a bleak circuit of appetites, sensations, and prejudices, driven by a mechanical imperative called “duty.” In Jungian-alchemical terms, 007 is like lead, the metal associated with the dark god Saturn, lying coldly at the bottom of the crucible and refusing transformation. Boil him, slash him, poison him, flog him with a carpet beater and shoot his woman—Bond will not be altered.

The distinction has to be made, before we go any further, between the Bond of Fleming’s novels and the Bond of the movies. For that matter, distinctions have to be made among the various movie Bonds. You can’t really play him, because there’s nothing to play; you have to be him. Sean Connery had the darkness and the hairy chest. George Lazenby was a misfire. Roger Moore was a brilliant anti-actor, sleek with the absurd good fortune of landing such a plum gig. Timothy Dalton, the late-’80s Bond, trailed terrible whiffs of the ’70s: almost everything he did felt anachronistic. Pierce Brosnan had such a likable face, too likable, surely, for 007. But Daniel Craig, our current Bond, has a soured, turned-off quality that is very satisfying. He seems almost to be playing the role under duress, his features thickened and smeared as if goons from the Fleming estate have been working him over between takes.

The latest film, Skyfall, took us deeper into Bond than any of the previous ones, to the very brink of identifiable psychology. The death of the mother figure in the remote chapel, the plunge through the ice, the resurrection motifs: we seemed at moments to be entering the phantasmagoria of the Bond title sequences themselves, those underworld (undersea, sometimes) montages of flames and bullets and writhing women, occasionally churned by shock waves of Shirley Bassey.

Fleming’s novels, too, skirt the droning vacuum of Bond’s inner life. Is he human at all? From time to time he slumps, depressively—as, for example, in the opening pages of Thunderball: “Again Bond dabbed with the bloodstained styptic pencil at the cut on his chin and despised the face that stared sullenly back at him from the mirror above the washbasin. Stupid, ignorant bastard!” But this discontent is due to the fact that he has a hangover, he is between missions (traditionally a dangerous moment for Bond), and he has cut himself shaving. An immediate and physical ennui, in other words. He’ll be all right in a minute.

The theologian Cardinal Newman wrote that as we come to understand “the nothingness of this world … we begin, by degrees, to perceive that there are but two beings in the whole universe, our own soul, and the God who made it.” So it is with the Bond books, the difference being that in Bond’s universe the two great solitaries of existence are Bond himself and his controller, M: the vinegary omnipotence, the “shrewd grey eyes.” M sends him out; M calls him back; Bond will die for M. The books contain other characters, of course. The villains glow fantastically, fanatically, cranking their evil plots; the CIA’s Felix Leiter and assorted sidekicks come and go; and there are always the women, the beautiful women who cannot resist him. (Bond has to be irresistible—his irresistibility, his crude magnetic pull, is what he has in the place of charm.) But this is a ghost parade. It all comes down to Bond, and M, and the mission.

by James Parker, Atlantic |  Read more:
Image: Kevin Christy

Monday, October 14, 2013


Mabuchi Toru, Dried Fish on the Table (卓上干魚), 1963
via:

Sunday, October 13, 2013

Ani DiFranco


Simon Palmer, The Woman Who Would Not Give Way
via:

Janiva Magness

Elephant Revival


Tina Kazakhishvili 
[She's never coming back]

Peter Lee Johnson

The Soaring Cost of a Simple Breath

The kitchen counter in the home of the Hayes family is scattered with the inhalers, sprays and bottles of pills that have allowed Hannah, 13, and her sister, Abby, 10, to excel at dance and gymnastics despite a horrific pollen season that has set off asthma attacks, leaving the girls struggling to breathe.

Asthma — the most common chronic disease that affects Americans of all ages, about 40 million people — can usually be well controlled with drugs. But being able to afford prescription medications in the United States often requires top-notch insurance or plenty of disposable income, and time to hunt for deals and bargains.

The arsenal of medicines in the Hayeses’ kitchen helps explain why. Pulmicort, a steroid inhaler, generally retails for over $175 in the United States, while pharmacists in Britain buy the identical product for about $20 and dispense it free of charge to asthma patients. Albuterol, one of the oldest asthma medicines, typically costs $50 to $100 per inhaler in the United States, but it was less than $15 a decade ago, before it was repatented.

“The one that really blew my mind was the nasal spray,” said Robin Levi, Hannah and Abby’s mother, referring to her $80 co-payment for Rhinocort Aqua, a prescription drug that was selling for more than $250 a month in Oakland pharmacies last year but costs under $7 in Europe, where it is available over the counter.

The Centers for Disease Control and Prevention puts the annual cost of asthma in the United States at more than $56 billion, including millions of potentially avoidable hospital visits and more than 3,300 deaths, many involving patients who skimped on medicines or did without.

“The thing is that asthma is so fixable,” said Dr. Elaine Davenport, who works in Oakland’s Breathmobile, a mobile asthma clinic whose patients often cannot afford high prescription costs. “All people need is medicine and education.” (...)

Unlike other countries, where the government directly or indirectly sets an allowed national wholesale price for each drug, the United States leaves prices to market competition among pharmaceutical companies, including generic drug makers. But competition is often a mirage in today’s health care arena — a surprising number of lifesaving drugs are made by only one manufacturer — and businesses often successfully blunt market forces.

Asthma inhalers, for example, are protected by strings of patents — for pumps, delivery systems and production processes — that are hard to skirt to make generic alternatives, even when the medicines they contain are old, as they almost all are.

The repatenting of older drugs like some birth control pills, insulin and colchicine, the primary treatment for gout, has rendered medicines that once cost pennies many times more expensive.

“The increases are stunning, and it’s very injurious to patients,” said Dr. Robert Morrow, a family practitioner in the Bronx. “Colchicine is a drug you could find in Egyptian mummies.”

Pharmaceutical companies also buttress high prices by choosing to sell a medicine by prescription, rather than over the counter, so that insurers cover a price tag that would be unacceptable to consumers paying full freight. They even pay generic drug makers not to produce cut-rate competitors in a controversial scheme called pay for delay.

Thanks in part to the $250 million spent last year on lobbying for pharmaceutical and health products — more than even the defense industry spent — the government allows such practices. Lawmakers in Washington have forbidden Medicare, the largest government purchaser of health care, to negotiate drug prices. Unlike its counterparts in other countries, the United States Patient-Centered Outcomes Research Institute, which evaluates treatments for coverage by federal programs, is not allowed to consider cost comparisons or cost-effectiveness in its recommendations. And importation of prescription medicines from abroad is illegal, even personal purchases from mail-order pharmacies.

“Our regulatory and approval system seems constructed to achieve high-priced outcomes,” said Dr. Peter Bach, the director of the Center for Health Policy and Outcomes at Memorial Sloan-Kettering Cancer Center. “We don’t give any reason for drug makers to charge less.”

And taxpayers and patients bear the consequences.

by Elisabeth Rosenthal, NY Times |  Read more:
Image: Josh Keller and Graham Roberts

The Others: On Being Foreign

[ed. See also: Japan: some impressions.]

The most generally satisfying experience of foreignness—complete bafflement, but with no sense of rejection—probably still comes from time spent in Japan. To the foreigner Japan appears as a Disneyland-like nation in which everyone has a well-defined role to play, including the foreigner, whose job it is to be foreign. Everything works to facilitate this role-playing, including a towering language barrier. The Japanese believe their language to be so difficult that it counts as something of an impertinence for a foreigner to speak it. Religion and morality appear to be reassuringly far from the Christian, Islamic or Judaic norms. Worries that Japan might Westernise, culturally as well as economically, have been allayed by the growing influence of China. It is going to get more Asian, not less.

Even in Japan, however, foreigners have ceased to function as objects of veneration, study and occasionally consumption. Once upon a time, in the ancient and medieval worlds, to count as properly foreign you had to seek out a life among peoples of a different skin colour or religion. They were probably an impossibly long distance away, they might well kill you when you got there, and if you went too far you might fall off the edge of the world.

At the dawn of the travelling age, writing an imaginary legal code for a Utopian society that he called Magnesia, Plato divided foreigners into two main categories. “Resident aliens” were allowed to settle for up to 20 years to do jobs unworthy of Magnesians, such as retail trade. “Temporary visitors” consisted of ambassadors, merchants, tourists and philosophers. Broaden that last category to include all scholars, and you have a taxonomy of travellers that held good until the invention of the stag party.

Being foreign became much more straightforward from the 17th century onwards, when Europe adopted a political system based on nation states, each with borders, sovereignty and citizenship. Travel papers in hand, you could turn yourself into an officially recognised foreigner simply by visiting the country next door—which, with the advance of mechanised transport, became an ever more trivial undertaking. By the early 20th century most of the world was similarly compartmentalised.

The golden age of genteel foreignness began. The well-off, the artistic, the bored, the adventurous went abroad. (The broad masses went too, as empires, steamships and railways made travel cheaper and easier.) Foreignness was a means of escape—physical, psychological and moral. In another country you could flee easy categorisation by your education, your work, your class, your family, your accent, your politics. You could reinvent yourself, if only in your own mind. You were not caught up in the mundanities of the place you inhabited, any more than you wanted to be. You did not vote for the government, its problems were not your problems. You were irresponsible. Irresponsibility might seem to moralists an unsatisfactory condition for an adult, but in practice it can be a huge relief.

by The Economist |  Read more:
Image: C. Corr

Saturday, October 12, 2013