Friday, September 21, 2012
Hysteria
Such was the media excitement inspired by the appearance of a vibrator in a late 1990s episode of Sex And The City, one might have thought the device had only just been invented. Any misapprehension is about to be corrected by a new film, Hysteria, which tells the true story of the vibrator's inception. Described by its producers as a Merchant Ivory film with comedy, Hysteria's humour derives chiefly from the surprise of its subject's origins, which are as little known as they are improbable.
The vibrator was, in fact, invented by respectable Victorian doctors, who grew tired of bringing female patients to orgasm using their fingers alone, and so dreamt up a device to do the job for them. Their invention was regarded as a reputable medical instrument – no more improper than a stethoscope – but became wildly popular among Victorian and Edwardian gentlewomen, who soon began buying vibrators for themselves. For its early customers, a vibrator was nothing to be embarrassed about – unlike, it's probably safe to assume, many members of the film's contemporary audience, not to mention some of its stars.
"I've done a lot of 'out there' sexual movies," Maggie Gyllenhaal readily acknowledges, "but this one pushed even my boundaries." Gyllenhaal plays a spirited young Victorian lady, and the love interest of the doctor who invents the vibrator, but admits, "I just think there is something inherently embarrassing about a vibrator. It's not something most people say they've got; nobody talks about that, it's still a secret kind of thing. So it's very difficult," she adds, breaking into a laugh, "to imagine that 100 years ago women didn't have the vote, yet they were going to a doctor's office to get masturbated."
In 19th-century Britain, the condition known as hysteria – which the vibrator was invented to treat – was not a source of embarrassment at all. Hysteria's symptoms included chronic anxiety, irritability and abdominal heaviness, and early medical explanations were inclined to blame some or other fault in the uterus. But in fact these women were suffering from straightforward sexual frustration – and by the mid-19th century the problem had reached epidemic proportions, said to afflict up to 75% of the female population. Yet because the very idea of female sexual arousal was proscribed in Victorian times, the condition was classed as non-sexual. It followed, therefore, that its cure would likewise be regarded as medical rather than sexual.
The only consistently effective remedy was a treatment that had been practised by physicians for centuries, consisting of a "pelvic massage" – performed manually, until the patient reached a "hysterical paroxysm", after which she appeared miraculously restored. The pelvic massage was a highly lucrative staple of many medical practices in 19th-century London, with repeat business all but guaranteed. There is no evidence of any doctor taking pleasure from its provision; on the contrary, according to medical journals, most complained that it was tedious, time-consuming and physically tiring. This being the Victorian age of invention, the solution was obvious: devise a labour-saving device that would get the job done quicker.
by Decca Aitkenhead, The Guardian | Read more:
Photo: Good Vibrations

"I've done a lot of 'out there' sexual movies," Maggie Gyllenhaal readily acknowledges, "but this one pushed even my boundaries." Gyllenhaal plays a spirited young Victorian lady, and the love interest of the doctor who invents the vibrator, but admits, "I just think there is something inherently embarrassing about a vibrator. It's not something most people say they've got; nobody talks about that, it's still a secret kind of thing. So it's very difficult," she adds, breaking into a laugh, "to imagine that 100 years ago women didn't have the vote, yet they were going to a doctor's office to get masturbated."
In 19th-century Britain, the condition known as hysteria – which the vibrator was invented to treat – was not a source of embarrassment at all. Hysteria's symptoms included chronic anxiety, irritability and abdominal heaviness, and early medical explanations were inclined to blame some or other fault in the uterus. But in fact these women were suffering from straightforward sexual frustration – and by the mid-19th century the problem had reached epidemic proportions, said to afflict up to 75% of the female population. Yet because the very idea of female sexual arousal was proscribed in Victorian times, the condition was classed as non-sexual. It followed, therefore, that its cure would likewise be regarded as medical rather than sexual.
The only consistently effective remedy was a treatment that had been practised by physicians for centuries, consisting of a "pelvic massage" – performed manually, until the patient reached a "hysterical paroxysm", after which she appeared miraculously restored. The pelvic massage was a highly lucrative staple of many medical practices in 19th-century London, with repeat business all but guaranteed. There is no evidence of any doctor taking pleasure from its provision; on the contrary, according to medical journals, most complained that it was tedious, time-consuming and physically tiring. This being the Victorian age of invention, the solution was obvious: devise a labour-saving device that would get the job done quicker.
by Decca Aitkenhead, The Guardian | Read more:
Photo: Good Vibrations
Google News at 10: How the Algorithm Won Over the News Industry
This was a strange thing. This was the leader of the most powerful company in the world, informing a roomful of professionals how earnestly he would prefer that their profession not die. And yet the speech itself -- I attended it -- felt oddly appropriate in its strangeness. Particularly in light of surrounding events, which would find Bob Woodward accusing Google of killing newspapers. And Les Hinton, then the publisher of the Wall Street Journal, referring to Google's news aggregation service as a "digital vampire." Which would mesh well, of course, with the similarly vampiric accusations that would come from Hinton's boss, Rupert Murdoch -- accusations addressed not just toward Google News, but toward Google as a media platform. A platform that was, Murdoch declared in January 2012, the "piracy leader."
What a difference nine months make. Earlier this week, Murdoch's 20th Century Fox got into business, officially, with Captain Google, cutting a deal to sell and rent the studio's movies and TV shows through YouTube and Google Play. It's hard not to see Murdoch's grudging acceptance of Google as symbolic of a broader transition: producers' own grudging acceptance of a media environment in which they are no longer the primary distributors of their own work. This week's Pax Murdochiana suggests an ecosystem that will find producers and amplifiers working collaboratively, rather than competitively. And working, intentionally or not, toward the earnest end that Schmidt expressed two years ago: "the survival of high-quality journalism."
"100,000 Business Opportunities"
There is, on the one hand, an incredibly simple explanation for the shift in news organizations' attitude toward Google: clicks. Google News was founded 10 years ago -- September 22, 2002 -- and has since functioned not merely as an aggregator of news, but also as a source of traffic to news sites. Google News, its executives tell me, now "algorithmically harvests" articles from more than 50,000 news sources across 72 editions and 30 languages. And Google News-powered results, Google says, are viewed by about 1 billion unique users a week. (Yep, that's billion with a b.) Which translates, for news outlets overall, to more than 4 billion clicks each month: 1 billion from Google News itself and an additional 3 billion from web search.
As a Google representative put it, "That's about 100,000 business opportunities we provide publishers every minute."
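A minimal back-of-envelope sketch of where that per-minute figure comes from, assuming the roughly 4 billion monthly clicks cited above are spread evenly over a 30-day month (the variable names here are illustrative, not Google's):

```python
# Rough sanity check of the "100,000 business opportunities every minute" quote.
# Assumption: ~4 billion clicks per month, spread evenly over a 30-day month.
monthly_clicks = 4_000_000_000
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month

clicks_per_minute = monthly_clicks / minutes_per_month
print(f"~{clicks_per_minute:,.0f} clicks per minute")  # ~92,593, i.e. roughly 100,000
```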
Google emphasizes numbers like these not just because they are fairly staggering in the context of a numbers-challenged news industry, but also because they help the company to make its case to that industry. (For more on this, see James Fallows's masterful piece from the June 2010 issue of The Atlantic.) Talking to Google News executives and team members myself in 2010 -- the height of the industry's aggregatory backlash -- I often got a sense of veiled frustration. And of just a bit of bafflement. When you believe that you're working to amplify the impact of good journalism, it can be strange to find yourself publicly resented by journalists. It can be even stranger to find yourself referred to as a vampire. Or a pirate. Or whatever.
by Megan Garber, The Atlantic | Read more:
Why I Eloped
“In about 20 minutes!” I said, trying to sound perky instead of scared. Though we had decided to get married a few weeks prior, we told almost no one beforehand—not even our parents. And now, we were standing just outside the office of the man who would perform the ceremony.
“You’re getting married today?” she said, shocked. I braced myself for the worst—for her to say that I was robbing her of a precious time in a mother’s life. But she instead declared her unmitigated delight. And with that blessing on hand, I was wed. Chris, the officiant, and I were the only three people in the room.
Now a mere month into my marriage, perhaps it is dangerous to declare, “We did it the right way.” But as I look back at my humble little wedding, I feel pride—and the more I think about it, the more it seems that everyone should elope.
I love a good wedding just as I love any party with an open bar and “The Electric Slide.” But unless you are wealthy, come from a family that has never known strife, enjoy giving up an entire year of your life to planning, and can smile in the face of any possible wedding disaster (and mean it, not just for pictures), you should elope. That’s because weddings—even small-scale ones—are more pageant than sincerity.
True, I was never the fairy tale wedding type. As a child, I didn’t play bride unless peer-pressured. I can’t recall ever fantasizing about my wedding dress, let alone the flowers, the color scheme, or the cake. (Well, maybe the taste of the cake.) My father died when I was 11, and though I could foresee regretting many moments we would never share, walking down the aisle wasn’t among them. Because despite the popular idea that “every little girl dreams of her wedding”—an idea that keeps TLC churning out wedding reality shows—this is not so. I always dreamed of a lifelong partnership but never thought much of the froufrou affair.
The obvious reason to elope is the money. Over the summer, Brides magazine reported that, even in these tough economic times, the average couple spends nearly $27,000 on their nuptials. I have some doubts about that figure—the respondents were readers of Brides magazine and its website, a group already inclined to go veils-to-the-wall for a wedding. But there is no question that weddings, even those done on the cheap, cost far more than many couples can afford. While I have no qualms with the well-off (and their parents) shelling out for a classy affair, I did not want to go into debt or decimate my hard-earned savings for a party.
My primary objections to a “real” wedding go beyond the financial, however.
by Torie Bosch, Slate | Read more:
Photo: Gerald Williams
The Writing Revolution
In 2009, when Monica DiBella entered New Dorp, a notorious public high school on Staten Island, her academic future was cloudy. Monica had struggled to read in early childhood, and had repeated first grade. During her elementary-school years, she got more than 100 hours of tutoring, but by fourth grade, she’d fallen behind her classmates again. In the years that followed, Monica became comfortable with math and learned to read passably well, but never seemed able to express her thoughts in writing. During her freshman year at New Dorp, a ’70s-style brick behemoth near a grimy beach, her history teacher asked her to write an essay on Alexander the Great. At a loss, she jotted down her opinion of the Macedonian ruler: “I think Alexander the Great was one of the best military leaders.” An essay? “Basically, that wasn’t going to happen,” she says, sweeping her blunt-cut brown hair from her brown eyes. “It was like, well, I got a sentence down. What now?” Monica’s mother, Santa, looked over her daughter’s answer—six simple sentences, one of which didn’t make sense—with a mixture of fear and frustration. Even a coherent, well-turned paragraph seemed beyond her daughter’s ability. An essay? “It just didn’t seem like something Monica could ever do.”
For decades, no one at New Dorp seemed to know how to help low-performing students like Monica, and unfortunately, this troubled population made up most of the school, which caters primarily to students from poor and working-class families. In 2006, 82 percent of freshmen entered the school reading below grade level. Students routinely scored poorly on the English and history Regents exams, a New York State graduation requirement: the essay questions were just too difficult. Many would simply write a sentence or two and shut the test booklet. In the spring of 2007, when administrators calculated graduation rates, they found that four out of 10 students who had started New Dorp as freshmen had dropped out, making it one of the 2,000 or so lowest-performing high schools in the nation. City officials, who had been closing comprehensive high schools all over New York and opening smaller, specialized ones in their stead, signaled that New Dorp was in the crosshairs.
And so the school’s principal, Deirdre DeAngelis, began a detailed investigation into why, ultimately, New Dorp’s students were failing. By 2008, she and her faculty had come to a singular answer: bad writing. Students’ inability to translate thoughts into coherent, well-argued sentences, paragraphs, and essays was severely impeding intellectual growth in many subjects. Consistently, one of the largest differences between failing and successful students was that only the latter could express their thoughts on the page. If nothing else, DeAngelis and her teachers decided, beginning in the fall of 2009, New Dorp students would learn to write well. “When they told me about the writing program,” Monica says, “well, I was skeptical.” With disarming candor, sharp-edged humor, and a shy smile, Monica occupies the middle ground between child and adult—she can be both naive and knowing. “On the other hand, it wasn’t like I had a choice. I go to high school. I figured I’d give it a try.”
New Dorp’s Writing Revolution, which placed an intense focus, across nearly every academic subject, on teaching the skills that underlie good analytical writing, was a dramatic departure from what most American students—especially low performers—are taught in high school. The program challenged long-held assumptions about the students and bitterly divided the staff. It also yielded extraordinary results. By the time they were sophomores, the students who had begun receiving the writing instruction as freshmen were already scoring higher on exams than any previous New Dorp class. Pass rates for the English Regents, for example, bounced from 67 percent in June 2009 to 89 percent in 2011; for the global-history exam, pass rates rose from 64 to 75 percent. The school reduced its Regents-repeater classes—cram courses designed to help struggling students collect a graduation requirement—from five classes of 35 students to two classes of 20 students.
The number of kids enrolling in a program that allows them to take college-level classes shot up from 148 students in 2006 to 412 students last year. Most important, although the makeup of the school has remained about the same—roughly 40 percent of students are poor, a third are Hispanic, and 12 percent are black—a greater proportion of students who enter as freshmen leave wearing a cap and gown. This spring, the graduation rate is expected to hit 80 percent, a staggering improvement over the 63 percent figure that prevailed before the Writing Revolution began. New Dorp, once the black sheep of the borough, is being held up as a model of successful school turnaround. “To be able to think critically and express that thinking, it’s where we are going,” says Dennis Walcott, New York City’s schools chancellor. “We are thrilled with what has happened there.”
In the coming months, the conversation about the importance of formal writing instruction and its place in a public-school curriculum—the conversation that was central to changing the culture at New Dorp—will spread throughout the nation. Over the next two school years, 46 states will align themselves with the Common Core State Standards. For the first time, elementary-school students—who today mostly learn writing by constructing personal narratives, memoirs, and small works of fiction—will be required to write informative and persuasive essays. By high school, students will be expected to produce mature and thoughtful essays, not just in English class but in history and science classes as well.
Fitzgerald's Depression
Among our canonical twentieth-century writers, none suffered this pronouncement—one avoids labeling it a fate—more than F. Scott Fitzgerald. At what should have been the height of his novelistic powers in the mid 1930s, he was listless, reckless in his personal affairs, sick with tuberculosis and jaw-droppingly drunk. As Fitzgerald himself would later admit, he had become a poor caretaker of everything he possessed, even his own talent. After a decade of enviable productivity, his writing had slowed to a trickle of short stories, most of them published in Esquire, his one remaining reliable outlet, and many of these, as the scholar Ruth Prigozy describes them, "elliptical, unadorned, curiously enervated, barely stories at all."
When the editors of The New Yorker categorically rejected the forty-year-old’s delicate slip of a short story “Thank You for the Light” in 1936 as “altogether out of the question,” their reasons hinged partially on its lack of merits. Few of Fitzgerald’s pieces from the period, this one included, clocked in at the standard commercial length of five thousand words and most of them gave the strong impression that they were both dashed off quickly and forced. They were. Yet I’d hazard that other, more complex reasons for its rejection were in play too, namely the ever-ephemeral nature of the artist’s image and his ability to reflect back to the nation its own acts of bad faith, manias, exuberances and bankrupt ideas.
With a penchant for casting his own experience as a particularly grandiose American brand of success and tragedy and with a proclivity for scripting the drama of the inner life in the language of economics, Fitzgerald declared elsewhere in 1936 that his happiness through the Jazz Age was as “unnatural as the Boom . . . and my recent experience parallels the wave of despair that swept the nation when the Boom was over.” In placing “Thank You” in the reject pile, the editors did not voice their concerns specifically in these national terms, but something like the outsized stakes involved in managing Fitzgerald’s reputation appeared to be on their minds. Calling the story “really too fantastic,” which is to say, ‘odd,’ they concluded, “It seems to us so curious and so unlike the kind of thing we associate with him.”
Not only did it not square with the dashing image of the lyrical, romantic wunderkind of the vertiginous Twenties—which Fitzgerald’s readers were emotionally invested in—but in its small way, it also pulled back the sheet to reveal the unforgiveable American sin of personal failure and diminished talent. As he wrote and sent out “curious” stories that bore the stylistic markings of someone else altogether, and as he watched them come back declined, Fitzgerald understood too well that the conditions of his literary celebrity lay in the past.
by Thomas Heise, Berfrois | Read more:
Illustration: Automat, Edward Hopper, 1927
The Great Rift
In the span of about a week, starting on December 30, 2007, the day that President Mwai Kibaki stood awkwardly in an ill-fitting suit in the backyard of the Nairobi statehouse, Bible in hand, and had himself sworn in after a rigged election, Kenya went from one of the most orderly countries in sub-Saharan Africa to a war zone. The violence was as terrible as it was swift, but the real shock was that it could happen here at all. Kenya had just held two back-to-back national elections, in 2002 and 2005, that were widely praised as free and fair. According to pre-election polls, most Kenyans were backing the opposition candidate, Raila Odinga, and they were expecting a peaceful transfer of power, which has happened only a few times in Africa, but Kenya was thought to be the happy exception, and for good reason.
Having been stationed for the New York Times in Kenya for more than six years, and having reported on Kenya’s amazing distance runners, its second-to-none safari business, and its golf-club-wielding middle class, I watched this country prosper as many other countries in Africa remained stagnant or, worse, imploded further. Kenya was different. It was the anti-Congo, the anti-Burundi, the anti-Sudan, the opposite of African nations where violence rules and the infrastructure is sinking back into the weeds. I used to get back from those countries, places where I feared for my life all the time, and want to kiss the tarmac at Nairobi’s airport. In Kenya, things work. There’s an orderliness here inherited from the British, manifest in the cul-de-sacs with marked street signs in neat black lettering and the SUVs driven by the wildlife rangers somehow without a speck of dirt on them. There are Internet startups, investment banks, a thriving national airline. It is still Africa, and most people are still poor, but even that has been changing. In the mid-2000s, the economy was growing by about 6 percent per year, far faster than those of Western Europe or the U.S., adding hundreds of thousands of new jobs. Kenya’s middle class—around four million people making between three thousand and forty thousand dollars per year—is one of the continent’s largest.
Which is all to say that when Kibaki’s men openly hijacked the vote-counting process and forcibly installed their man, I, along with most Kenyans, was astounded and then quickly appalled. Within minutes of Kibaki taking the oath of office that day, thousands of protesters burst out of Kibera, an enormous shantytown, waving sticks, smashing shacks, burning tires, and hurling stones. Police poured into the streets to control them. In the next few days, gangs went from house to house across the country, dragging out people of certain tribes and clubbing them to death. It was horrifyingly clear what was starting to happen—tribal war—and that promising GDP or literacy-rate statistics were no longer relevant. (...)
The election was the first time in Kenya’s history that tribal politics was dragged into the open and the first time that there was a hotly competitive race between a Kikuyu (Kibaki) and a non-Kikuyu (Odinga, a Luo). There are about forty different ethnic groups or tribes in the country, each with its own language and customs, and the stolen election ignited long-simmering ethnic grievances that many Kenyans had thought, or maybe more aptly, had wished were redressed. In all, at least one thousand people were murdered and about one million displaced. The police, the judiciary, the army, the religious leaders, and especially the politicians all failed their country at the moment when they were needed most.
In much of Africa, if not the world, geography and ethnicity correlate, certain groups dominating certain areas. This was the basis of South Africa’s apartheid-era homeland policy, which sought to relegate every black person in the country to an ethnic homeland. In Kenya, single ethnic groups often overwhelmingly populate a place, like the Luos on the shores of Lake Victoria or the Kikuyus in the foothills around Mt. Kenya. Not so in the Rift Valley. Here Luos, Kikuyus, Kambas, Kipsigis, Nandes, Ogieks (the traditional hunters and gatherers), Luhyas, Masais, and Kisiis are all packed together, drawn by fertile soil and the opportunity for work, making the towns and the countryside cosmopolitan. The multiethnic Rift Valley was the epicenter of the violence, and death squads swept the hills with elemental killing tools—knives, rocks, and fire—singling out families to execute (the stripes of destruction I saw from the helicopter).
Kenya’s portion of the Great Rift Valley seems to belong to another world and another time—lakes so full of flamingoes that the water is actually pink when you scoop it up in your hands, sculpted green mountains nosing the sky, and soils so rich that just about any fruit or vegetable known to man can grow, from mangoes to guava to snow peas to cucumbers to miles and miles of high-quality, disease-resistant corn. Kenya’s natural beauty, so undeniable in the Rift Valley, sent it down a path different from other European colonies: few African areas attracted so many white settlers. South Africa, yes, and Rhodesia (now Zimbabwe) too, but they were qualitatively different, agricultural and mineral-based economies, with legions of working-class whites. Kenya, on the other hand, because of its wildlife and spectacular landscape, became a playground for aristocratic misfits. They came to shoot lions, drink gin, maybe try their hand at gentleman farming, and cheat on their wives. There was a famous expression from colonial-era Kenya: “Are you married, or do you live in Kenya?”
by Jeffrey Gettleman, Lapham's Quarterly | Read more:
Image: Discovery Adventures
Thursday, September 20, 2012
Kamisaka Sekka (1866 - 1942) Japanese Woodblock Print
Rolling Hillside
Sekka’s A World of Things Series (Momoyogusa)
via:
Where Is Cuba Going?
This was the first time I was in post-Fidel Cuba. It was funny to think that not long ago, there were smart people who doubted that such a thing could exist, i.e., who believed that with the fall of Fidel would come the fall of Communism on the island. But Fidel didn’t fall. He did fall, physically — on the tape that gets shown over and over in Miami, of him coming down the ramp after giving that speech in 2004 and tumbling and breaking his knee — but his leadership didn’t. He executed one of the most brilliantly engineered successions in history, a succession that was at the same time a self-entrenchment. First, he faked his own death in a way: serious intestinal operation, he might not make it. Raúl is brought in as “acting president.” A year and a half later, Castro mostly recovered. But Raúl is officially named president, with Castro’s approval. It was almost as if, “Is Fidel still . . . ?” Amazing. So now they rule together, with Raúl out front, but everyone understanding that Fidel retains massive authority. Not to say that Raúl doesn’t wield power — he has always had plenty — but it’s a partnership of some kind. What comes after is as much of a mystery as ever.
Our relationship with them seems just as uncertain. Barack Obama was going to open things up, and he did tinker with the rules regarding travel, but now they say that when you try to follow these rules, you get caught up in all kinds of forms and tape. He eased the restrictions on remittances, so more money is making it back to the island, and that may have made the biggest difference so far. Boats with medical and other relief supplies have recently left Miami, sailing straight to the island, which hasn’t happened in decades. These humanitarian shipments can, according to The Miami Herald, include pretty much anything a Cuban-American family wants to send to its relatives: Barbie dolls, electronics, sugary cereal. In many cases, you have a situation in which the family is first wiring money over, then shipping the goods. The money is used on the other side to pay the various fees associated with getting the stuff. So it’s as if you’re reaching over and re-buying the merchandise for your relatives. The money, needless to say, goes to the government. Still, capitalism is making small inroads. And Raúl has taken baby steps toward us: Cubans can own their own cars, operate their own businesses, own property. That’s all new. For obvious reasons it’s not an immediate possibility for a vast majority of the people, and it could be taken away tomorrow morning by decree, but it matters.
Otherwise, our attitude toward Cuba feels very wait and see, as what we’re waiting to see grows less and less clear. We’ve learned to live with it, like when the doctor says, “What you have could kill you, but not before you die a natural death.” Earlier this year Obama said to a Spanish newspaper: “No authoritarian regime will last forever. The day will come in which the Cuban people will be free.” Not, notice, no dictator can live forever, but no “authoritarian regime.” But how long can one last? Two hundred years?
Perhaps a second term will be different. All presidents, if they want to mess with our Cuba relations at even the microscopic level, find themselves up against the Florida community, and those are large, powerful and arguably insane forces.
My wife’s people got out in the early 1960s, so they’ve been in the States for half a century. Lax regulations, strict regulations. It’s all a oneness. They take, I suppose, a Cuban view, that matters on the island are perpetually and in some way inherently screwed up and have been forever.
There was a moment in the taxi, a little nothing exchange but so densely underlayered with meaning that if you could pass it through an extracting machine, you would understand a lot about how it is between Cubans and Cuban-Americans. The driver, a guy who said he grew up in Havana, told a tiny lie, or a half lie. The fact that you can’t even say whether it was a lie or not is significant. My wife had asked him to explain for me the way it works with Cuba’s two separate currencies, CUPs and CUCs, Cuban pesos and convertible pesos (also called “chavitos” or simply “dollars”). When I was last there, we didn’t use either of these, though both existed. We paid for everything in actual, green U.S. dollars. That’s what people wanted. There were stores in which you could pay in only dollars. But in 2004, Castro decided — partly as a gesture of contempt for the U.S. embargo — that he would abolish the use of U.S. dollars on the island and enforce the use of CUCs, pegged to the U.S. dollar but distinct from it. This coexisted alongside the original currency, which would remain pegged to the spirit of the revolution. For obvious reasons, the actual Cuban peso is worth much less than the other, dollar-equivalent Cuban peso, something on the order of 25 to 1. But the driver said simply, “No, they are equal.”
“Really?” my wife said. “No . . . that can’t be.”
He insisted that there was no difference between the relative values of the currencies. They were the same.
He knew that this was wrong. He probably could have told you the exchange rates from that morning. But he also knew that it had a rightness in it. For official accounting purposes, the two currencies are considered equivalent. Their respective values might fluctuate on a given day, of course, but it couldn’t be said that the CUP was worth less than the CUC. That’s partly what he meant. He also meant that if you’re going to fly to Cuba from Miami and rub it in my face that our money is worth one twenty-fifth of yours, I’m gonna feed you some hilarious communist math and see how you like it. Cubans call it la doble moral. Meaning, different situations call forth different ethical codes. He wasn’t being deceptive. He was saying what my wife forced him to say. She had been a bit breezy, it seemed, in mentioning the unevenness between the currencies, which is the kind of absurdity her family would laugh at affectionately in the kitchen. But they don’t have to suffer it anymore. And he was partly reminding her of that, fencing her off from a conversation in which Cubans would joke together about the notion that the CUP and the CUC had even the slightest connection to each other. That was for them, that laughter. So, a very complex statement, that not-quite-lie. After it, he was totally friendly and dropped us at one of the Cuban-owned tourist hotels on the edge of Havana.
People walking by on the street didn’t seem as skinny. That was the most instantly perceptible difference, if you were seeing Raúl’s Cuba for the first time. They weren’t sickly looking before, but under Fidel you noticed more the way men’s shirts flapped about them and the knobbiness of women’s knees. Now people were filling out their clothes. The island’s overall dietary level had apparently gone up a tick. (One possible factor involved was an increase in the amount of food coming over from the United States. Unknown to most people, we do sell a lot of agricultural products to Cuba, second only in value to Brazil. Under a law that Bill Clinton squeaked through on his way out, Cuba purchases food and medicine from us on a cash basis, meaning, bizarrely, that a lot of the chicken in the arroz con pollo consumed on the island by Canadian tourists is raised in the Midwest — the embargo/blockade has always been messy when you lean in close).
by John Jeremiah Sullivan, NY Times | Read more:
Photo: Andrew Moore/Yancey Richardson Gallery
The Next Panic
This summer, many government officials and private investors finally seemed to realize that the crisis in the euro zone was not some passing aberration, but rather a result of deep-seated political, economic, and financial problems that will take many years to resolve. The on-again, off-again euro turmoil has already proved immensely damaging to nearly all Europeans, and its negative impact is now being felt around the world. Most likely there is worse to come—and soon.
But the economic disasters of our time—which involve big banks in rich countries, call into question the viability of government debt, and seriously threaten the reach of even the most self-confident nations—will not end with the euro debacle. The euro zone is well down the path to severe crisis, but other industrialized democracies are hot on its heels. Do not let the euro zone’s troubles distract you from the bigger picture: we are all in a mess.
Who could be next in line for a gut-wrenching loss of confidence in its growth prospects, its sovereign debt, and its banking system? Think about Japan.
Japan’s post-war economic miracle ended badly in the late 1980s, when the value of land and stocks spiked dramatically and then crashed. This boom-and-bust cycle left people, companies, and banks with debts that took many years to work off. Headline-growth rates slowed after 1990, leading some observers to speak of one or more “lost decades.”
But this isn’t the full picture: after a post-war baby boom, population growth in Japan decelerated sharply; the number of working-age people has declined fairly rapidly since the mid-’90s. Once you account for that, Japan’s economic performance looks much better. The growth in Japan’s output per working-age person—a measure of productivity for those who have jobs—has actually kept up with most of Europe’s, and has lagged only slightly behind that of the United States. Japan is a rich country with low unemployment. Its private sector is by no means broken.
So why is Japan’s government now one of the most indebted in the world, with a gross debt that’s 235.8 percent of GDP and a net debt (taking some government assets into account) that’s 135.2 percent of GDP? (In the euro zone, only Greece has government debt approaching the Japanese level.)
After World War II, Japan built a financial system modeled on those of Europe and the United States. Financial intermediation is an old and venerable idea—connecting people with savings to other people wanting to make investments. Such a sensible use of savings was taken to a new level in Japan, the U.S., and Europe in the decades following 1945—helping to fuel unprecedented growth for entrepreneurs and a genuine accumulation of wealth for the burgeoning middle class.
But such success brings vulnerability. Modern financial systems also permit governments to borrow large sums from investors, and as finance has evolved, that borrowing has become easier and cheaper. In the most-advanced countries, governments have increasingly taken advantage of expanding markets for short-maturity debt, whose principal is due soon after the loan is made. This has allowed them to borrow far more, and at cheaper rates, than they otherwise would have been able to do. Typically, these governments then take out new loans as the old ones come due, “rolling over” their debts. This year, for example, the Japanese government needs to issue debt amounting to 59.1 percent of GDP; that is, for every $10 that Japan’s economy generates this year, the government will need to borrow $6. It will probably be able to do so at very low interest rates—currently well below 1 percent. (...)
About half of the Japanese government’s annual budget now goes to pensions and interest payments. As the government has spent more and more to support its growing elderly population, Japanese savers have willingly financed ever-increasing public-sector debts.
Elderly people hold their savings in the form of cash and bank deposits. The banks, in turn, hold a great deal of government debt. The Bank of Japan (the country’s central bank) also buys government bonds—this is how it provides liquid reserves to commercial banks and cash to households. Similarly, Japan’s private pension plans—many promising a defined benefit—own a great deal of government bonds, to back their future payments. Few foreigners hold Japanese government debt—95 percent of it is in the hands of locals.
Given Japan’s demographic decline, it would make sense to invest national savings abroad, in countries where populations are younger and still growing, and returns on capital are surely higher. These other nations should be able to pay back loans when they are richer and older, supplying some of the funds needed to meet Japan’s pension promises and other obligations. This is the strategy that Singapore and Norway, for example, have undertaken in recent decades.
Instead, the Japanese government is using private savings to fund current spending, such as pensions and wage payments. With projected annual budget deficits between 7 and 10 percent of GDP, Japanese savers are essentially tendering their savings in return for newly issued government debt, which is not backed by hard assets. It is backed only by an aging, shrinking population of taxpayers.
by Peter Boone and Simon Johnson, The Atlantic | Read more:
Photo: Koji Sasahara/AP
The Eastwood Conundrum
Clint Eastwood has a special place. It's in his bungalow on the Warner Bros. lot. It's in the corner of the low couch outside his office. It is located directly under a framed letter from a script reader telling him that the script for "Cut-Whore Killing" — which became Unforgiven — was a disgrace. It's diagonally across the room from an out-of-tune upright piano. It faces a big flat-screen TV, and sits kitty-corner to a wall occupied by an enormous poster advertising Per un Pugno di Dollari — A Fistful of Dollars, his first movie with Sergio Leone.
Eastwood has occupied the bungalow since 1976. He and his fledgling production company, Malpaso, had just left Universal. He was making The Outlaw Josey Wales and incurred the wrath of the directors' union for firing the director and taking over himself. He wanted to make movies his way — he wanted to make what members of his crew call "Clint Movies" — and Warner Bros. wanted him to feel comfortable doing so. It gave him the bungalow, and with it a place where only he can sit.
It is not called a special place, and visitors who make the mistake of sitting in it are not kicked out, not exactly. They are only told, by his assistant, that they are sitting in the place where Clint Eastwood "feels comfortable." This is not a reference to the softness of the couch. This is the first lesson in how Eastwood does business: He does not do anything unless he feels comfortable. He does not make a movie unless he feels comfortable. He does not hire an actor unless he feels comfortable, and once he's on the set he sees to it that his actors and everyone else who works for him feel comfortable in return. He makes most of his movies about people in extremely uncomfortable situations, and the precondition for making them is a feeling of comfort that should never be confused with ease. He sees work as the necessary ingredient of comfort and comfort as the necessary ingredient of work. He draws no distinction between them, and makes his movies — Clint Movies — as a way of demonstrating that they are the same. (...)
He has made Clint Movies in four phases of his life. He has made Clint Movies as an actor, as an icon, as an artist, and now he is making Clint Movies as an old man. He has controlled both his career and his image through the making of Clint Movies; a Clint Movie is indeed the expression of his desire for comfort and control, and he is able to keep making them as an old man because of the choices he made as a young one.
He has starred in fifty-one movies and directed thirty-two. He also composes his soundtracks and pilots a helicopter. He works out every day and plays golf every weekend. He is husband to a woman thirty-five years younger than he is, and is father to seven children ranging in ages from forty-five to fifteen by five different women. He takes long family vacations. He is a famously loyal friend and the employer of sixty-odd souls. He served as a small-town mayor, claims to be a libertarian, and recently endorsed Mitt Romney's presidential run. He disparages Barack Obama every chance he gets and did a famous commercial for a car company he believes should have been allowed to die. He has survived one plane crash and didn't board a doomed flight that killed several of his friends. He is an Army veteran who never served in a war and possibly the most prolific cinematic killer of all time. He embodies ingrained American badassery and exists as a principle as much as he lives as a person; he also shows up as the reluctant supporting player on a reality-TV series starring his wife and daughters.
And yet for all that he has done and decided to do, he has lived a life of epic refusals. He has refused to stop working, but he has also always refused to work harder than he has to. He has made Clint Movie after Clint Movie, but the Clint Movie is itself defined by what he won't do. He won't go over budget. He won't go over schedule. He won't storyboard. He won't produce a shot list. He won't rehearse. He doesn't say "Action" — "When you say 'Action' even the horses get nervous" — and he doesn't say "Cut." He won't, in the words of his friend Morgan Freeman, "shoot a foot of film until the script is done," and once the script is done, he won't change it. He doesn't heed the notes supplied by studio executives, and when Warner Bros. tried to tone down the racial innuendo in Nick Schenk's script for Gran Torino, he told them, according to Schenk, "Take it or leave it." He won't accept the judgment of test screenings; as he once told one of his screenwriters, "If they're so interested in the opinion of a grocery-store clerk in Reseda, let them hire him to make the movie."
What he will do and has always done: use his leverage, in all senses of the word. He earned his leverage as an actor, as an icon, and as an artist; he is using it as an old man, with executives, with other actors, and with audiences. With executives, the Clint Movie exists as a form of leverage, because it exists as a form of thrift. With actors, he's leveraged both his artistry and his iconic status — his fifty years of stardom.
by Tom Junod, Esquire | Read more:
Photo: Nigel Parry