Saturday, September 22, 2012
Still Too Pricey
Facebook has a business model in need of a radical change and a still-rich $61 billion market value. What's not to "like"? Plenty.
Facebook's 40% plunge from its initial-public-offering price of $38 in May has millions of investors asking a single question: Is the stock a buy? The short answer is "No." After a recent rally, to $23 from a low of $17.55, the stock trades at high multiples of both sales and earnings, even as uncertainty about the outlook for its business grows.
The rapid shift in Facebook's user base to mobile platforms—more than half of users now access the site on smartphones and tablets—appears to have caught the company by surprise. Facebook (ticker: FB) founder and CEO Mark Zuckerberg must find a way to monetize its mobile traffic because usage on traditional PCs, where the company makes virtually all of its money, is declining in its large and established markets. That trend isn't likely to change. (...)
The bull case for Facebook is that Zuckerberg & Co. will find creative ways to generate huge revenue from its 955 million monthly active users, be it from mobile and desktop advertising, e-commerce, search, online-game payments, or sources that have yet to emerge. Pay no attention to depressed current earnings, the argument goes. Facebook is just getting started.
Facebook now gets $5 annually in revenue per user. That could easily double or triple in the next five years, bulls say. In a recent interview at the TechCrunch Disrupt conference, Zuckerberg said, "It's easy to underestimate how fundamentally good mobile is for us." His argument, coming after Facebook's brand-damaging IPO fiasco and a halving of the stock, was something only a mother, or a true believer, could love. This year Facebook is expected to get 5% of its revenue from mobile. "Literally six months ago we didn't run a single ad on mobile," Zuckerberg said. Facebook executives declined to speak with Barron's.
"Anyone who owns Facebook should be exceptionally troubled that they're still trying to 'figure out' mobile monetization and had to lay out $1 billion for Instagram because some start-up had figured out mobile pictures better than Facebook," says one institutional investor, referring to Facebook's April deal for two-year-old Instagram, whose smartphone app for mobile photo-sharing became a big hit (and at the time had yet to generate a nickel in revenue).
"Anyone who owns Facebook should be exceptionally troubled that they're still trying to 'figure out' mobile monetization and had to lay out $1 billion for Instagram because some start-up had figured out mobile pictures better than Facebook," says one institutional investor, referring to Facebook's April deal for two-year-old Instagram, whose smartphone app for mobile photo-sharing became a big hit (and at the time had yet to generate a nickel in revenue).
by Andrew Bary, Barron's | Read more:
Anthropology of Tailgating
To the untrained eye, these game-day rituals appear to be little more than a wild party, a hedonistic excuse to get loaded and eat barbecue. Not at all. They are, according to Notre Dame anthropologist John Sherry, bustling microcosms of society where self-regulatory neighborhoods foster inter-generational community, nurture tradition and build the team’s brand.
Sherry didn’t always feel this way. There was a time when he considered tailgating a boisterous nuisance, little more than a gauntlet of unrelated and unruly celebrations to be run if he were to reach his seat in Notre Dame Stadium. But then he had an epiphany: What if there was meaning to the madness?
“One day I slowed down and paid attention to things that were going on that weren’t individual celebrations,” he said of research presented in A Cultural Analysis of Tailgating. “It was much more nuanced than I had thought before.”
Sherry consulted the existing literature on the subject and found bupkis. Most studies on tailgating come to Onion-esque conclusions like “tailgating leads to drunkenness” or examine the environmental impact of all that trash. Sherry looked deeper into tailgating and saw a whole lot of consumption akin to that of, say, ancient harvest festivals. He recruited colleague Tonya Bradford, trained a few research assistants and started attending tailgate parties and interviewing fans to learn more.
Notre Dame was a convenient place to start, given its rich football tradition. But Sherry and Co. hit the road too, attending Irish away games and checking the scene at Big Ten Conference schools. They talked to fans of every stripe, from alumni with six-figure RVs to students. And they discovered what every true football fan eventually discovers.
“What we really found was a real active and orchestrated effort in community building,” said Sherry. “People have tailgated in the same place for years, they have tailgated through generations, they have encountered strangers who have passed through and adopted them to their families and became fast friends. They have created neighborhoods.”
This much was obvious Saturday at the University of Utah-Brigham Young University game I attended. The parking lot around Eccles Stadium was thick with trucks and trailers and RVs, and the air was thick with the smell of cooking meat. The lot was divided into “streets” and “neighborhoods” populated by fans who have in many cases known each other for years.
by Beth Carter, Wired | Read more:
Photo: Mike Roemer/Associated Press
Friday, September 21, 2012
Hysteria
Such was the media excitement inspired by the appearance of a vibrator in a late 1990s episode of Sex And The City, one might have thought the device had only just been invented. Any misapprehension is about to be corrected by a new film, Hysteria, which tells the true story of the vibrator's inception. Described by its producers as a Merchant Ivory film with comedy, Hysteria's humour derives chiefly from the surprise of its subject's origins, which are as little known as they are improbable.
The vibrator was, in fact, invented by respectable Victorian doctors, who grew tired of bringing female patients to orgasm using their fingers alone, and so dreamt up a device to do the job for them. Their invention was regarded as a reputable medical instrument – no more improper than a stethoscope – but became wildly popular among Victorian and Edwardian gentlewomen, who soon began buying vibrators for themselves. For its early customers, a vibrator was nothing to be embarrassed about – unlike, it's probably safe to assume, many members of the film's contemporary audience, not to mention some of its stars.
"I've done a lot of 'out there' sexual movies," Maggie Gyllenhaal readily acknowledges, "but this one pushed even my boundaries." Gyllenhaal plays a spirited young Victorian lady, and the love interest of the doctor who invents the vibrator, but admits, "I just think there is something inherently embarrassing about a vibrator. It's not something most people say they've got; nobody talks about that, it's still a secret kind of thing. So it's very difficult," she adds, breaking into a laugh, "to imagine that 100 years ago women didn't have the vote, yet they were going to a doctor's office to get masturbated."
In 19th-century Britain, the condition known as hysteria – which the vibrator was invented to treat – was not a source of embarrassment at all. Hysteria's symptoms included chronic anxiety, irritability and abdominal heaviness, and early medical explanations were inclined to blame some or other fault in the uterus. But in fact these women were suffering from straightforward sexual frustration – and by the mid-19th century the problem had reached epidemic proportions, said to afflict up to 75% of the female population. Yet because the very idea of female sexual arousal was proscribed in Victorian times, the condition was classed as non-sexual. It followed, therefore, that its cure would likewise be regarded as medical rather than sexual.
The only consistently effective remedy was a treatment that had been practised by physicians for centuries, consisting of a "pelvic massage" – performed manually, until the patient reached a "hysterical paroxysm", after which she appeared miraculously restored. The pelvic massage was a highly lucrative staple of many medical practices in 19th-century London, with repeat business all but guaranteed. There is no evidence of any doctor taking pleasure from its provision; on the contrary, according to medical journals, most complained that it was tedious, time-consuming and physically tiring. This being the Victorian age of invention, the solution was obvious: devise a labour-saving device that would get the job done quicker.
by Decca Aitkenhead, The Guardian | Read more:
Photo: Good Vibrations
Google News at 10: How the Algorithm Won Over the News Industry
This was a strange thing. This was the leader of the most powerful company in the world, informing a roomful of professionals how earnestly he would prefer that their profession not die. And yet the speech itself -- I attended it -- felt oddly appropriate in its strangeness. Particularly in light of surrounding events, which would find Bob Woodward accusing Google of killing newspapers. And Les Hinton, then the publisher of the Wall Street Journal, referring to Google's news aggregation service as a "digital vampire." Which would mesh well, of course, with the similarly vampiric accusations that would come from Hinton's boss, Rupert Murdoch -- accusations addressed not just toward Google News, but toward Google as a media platform. A platform that was, Murdoch declared in January 2012, the "piracy leader."
What a difference nine months make. Earlier this week, Murdoch's 20th Century Fox got into business, officially, with Captain Google, cutting a deal to sell and rent the studio's movies and TV shows through YouTube and Google Play. It's hard not to see Murdoch's grudging acceptance of Google as symbolic of a broader transition: producers' own grudging acceptance of a media environment in which they are no longer the primary distributors of their own work. This week's Pax Murdochiana suggests an ecosystem that will find producers and amplifiers working collaboratively, rather than competitively. And working, intentionally or not, toward the earnest end that Schmidt expressed two years ago: "the survival of high-quality journalism."
"100,000 Business Opportunities"
There is, on the one hand, an incredibly simple explanation for the shift in news organizations' attitude toward Google: clicks. Google News was founded 10 years ago -- September 22, 2002 -- and has since functioned not merely as an aggregator of news, but also as a source of traffic to news sites. Google News, its executives tell me, now "algorithmically harvests" articles from more than 50,000 news sources across 72 editions and 30 languages. And Google News-powered results, Google says, are viewed by about 1 billion unique users a week. (Yep, that's billion with a b.) Which translates, for news outlets overall, to more than 4 billion clicks each month: 1 billion from Google News itself and an additional 3 billion from web search.
As a Google representative put it, "That's about 100,000 business opportunities we provide publishers every minute."
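A quick back-of-envelope check of that claim, using only the figures cited above (roughly 4 billion referral clicks to publishers a month, counted over a 30-day month). The short Python sketch below is illustrative, not Google's own math:

# Sanity check of the "100,000 business opportunities per minute" line,
# assuming the ~4 billion monthly clicks to publishers cited above.
clicks_per_month = 4_000_000_000
minutes_per_month = 30 * 24 * 60  # about 43,200 minutes in a 30-day month

clicks_per_minute = clicks_per_month / minutes_per_month
print(f"{clicks_per_minute:,.0f} clicks per minute")  # ~92,600, i.e. roughly 100,000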
Google emphasizes numbers like these not just because they are fairly staggering in the context of a numbers-challenged news industry, but also because they help the company to make its case to that industry. (For more on this, see James Fallows's masterful piece from the June 2010 issue of The Atlantic.) Talking to Google News executives and team members myself in 2010 -- the height of the industry's aggregatory backlash -- I often got a sense of veiled frustration. And of just a bit of bafflement. When you believe that you're working to amplify the impact of good journalism, it can be strange to find yourself publicly resented by journalists. It can be even stranger to find yourself referred to as a vampire. Or a pirate. Or whatever.
by Megan Garber, The Atlantic | Read more:
Why I Eloped
“In about 20 minutes!” I said, trying to sound perky instead of scared. Though we had decided to get married a few weeks prior, we told almost no one beforehand—not even our parents. And now, we were standing just outside the office of the man who would perform the ceremony.
“You’re getting married today?” she said, shocked. I braced myself for the worst—for her to say that I was robbing her of a precious time in a mother’s life. But she instead declared her unmitigated delight. And with that blessing on hand, I was wed. Chris, the officiant, and I were the only three people in the room.
Now a mere month into my marriage, perhaps it is dangerous to declare, “We did it the right way.” But as I look back at my humble little wedding, I feel pride—and the more I think about it, the more it seems that everyone should elope.
I love a good wedding just as I love any party with an open bar and “The Electric Slide.” But unless you are wealthy, come from a family that has never known strife, enjoy giving up an entire year of your life to planning, and can smile in the face of any possible wedding disaster (and mean it, not just for pictures), you should elope. That’s because weddings—even small-scale ones—are more pageant than sincerity.
True, I was never the fairy tale wedding type. As a child, I didn’t play bride unless peer-pressured. I can’t recall ever fantasizing about my wedding dress, let alone the flowers, the color scheme, or the cake. (Well, maybe the taste of the cake.) My father died when I was 11, and though I could foresee regretting many moments we would never share, walking down the aisle wasn’t among them. Because despite the popular idea that “every little girl dreams of her wedding”—an idea that keeps TLC churning out wedding reality shows—this is not so. I always dreamed of a lifelong partnership but never thought much of the froufrou affair.
The obvious reason to elope is the money. Over the summer, Brides magazine reported that, even in these tough economic times, the average couple spends nearly $27,000 on their nuptials. I have some doubts about that figure—the respondents were readers of Brides magazine and its website, a group already inclined to go veils-to-the-wall for a wedding. But there is no question that weddings, even those done on the cheap, cost far more than many couples can afford. While I have no qualms with the well-off (and their parents) shelling out for a classy affair, I did not want to go into debt or decimate my hard-earned savings for a party.
My primary objections to a “real” wedding go beyond the financial, however.
by Torie Bosch, Slate | Read more:
Photo: Gerald Williams
The Writing Revolution
In 2009, when Monica DiBella entered New Dorp, a notorious public high school on Staten Island, her academic future was cloudy. Monica had struggled to read in early childhood, and had repeated first grade. During her elementary-school years, she got more than 100 hours of tutoring, but by fourth grade, she’d fallen behind her classmates again. In the years that followed, Monica became comfortable with math and learned to read passably well, but never seemed able to express her thoughts in writing. During her freshman year at New Dorp, a ’70s-style brick behemoth near a grimy beach, her history teacher asked her to write an essay on Alexander the Great. At a loss, she jotted down her opinion of the Macedonian ruler: “I think Alexander the Great was one of the best military leaders.” An essay? “Basically, that wasn’t going to happen,” she says, sweeping her blunt-cut brown hair from her brown eyes. “It was like, well, I got a sentence down. What now?” Monica’s mother, Santa, looked over her daughter’s answer—six simple sentences, one of which didn’t make sense—with a mixture of fear and frustration. Even a coherent, well-turned paragraph seemed beyond her daughter’s ability. An essay? “It just didn’t seem like something Monica could ever do.”
For decades, no one at New Dorp seemed to know how to help low-performing students like Monica, and unfortunately, this troubled population made up most of the school, which caters primarily to students from poor and working-class families. In 2006, 82 percent of freshmen entered the school reading below grade level. Students routinely scored poorly on the English and history Regents exams, a New York State graduation requirement: the essay questions were just too difficult. Many would simply write a sentence or two and shut the test booklet. In the spring of 2007, when administrators calculated graduation rates, they found that four out of 10 students who had started New Dorp as freshmen had dropped out, making it one of the 2,000 or so lowest-performing high schools in the nation. City officials, who had been closing comprehensive high schools all over New York and opening smaller, specialized ones in their stead, signaled that New Dorp was in the crosshairs.
And so the school’s principal, Deirdre DeAngelis, began a detailed investigation into why, ultimately, New Dorp’s students were failing. By 2008, she and her faculty had come to a singular answer: bad writing. Students’ inability to translate thoughts into coherent, well-argued sentences, paragraphs, and essays was severely impeding intellectual growth in many subjects. Consistently, one of the largest differences between failing and successful students was that only the latter could express their thoughts on the page. If nothing else, DeAngelis and her teachers decided, beginning in the fall of 2009, New Dorp students would learn to write well. “When they told me about the writing program,” Monica says, “well, I was skeptical.” With disarming candor, sharp-edged humor, and a shy smile, Monica occupies the middle ground between child and adult—she can be both naive and knowing. “On the other hand, it wasn’t like I had a choice. I go to high school. I figured I’d give it a try.”
New Dorp’s Writing Revolution, which placed an intense focus, across nearly every academic subject, on teaching the skills that underlie good analytical writing, was a dramatic departure from what most American students—especially low performers—are taught in high school. The program challenged long-held assumptions about the students and bitterly divided the staff. It also yielded extraordinary results. By the time they were sophomores, the students who had begun receiving the writing instruction as freshmen were already scoring higher on exams than any previous New Dorp class. Pass rates for the English Regents, for example, bounced from 67 percent in June 2009 to 89 percent in 2011; for the global-history exam, pass rates rose from 64 to 75 percent. The school reduced its Regents-repeater classes—cram courses designed to help struggling students collect a graduation requirement—from five classes of 35 students to two classes of 20 students.
The number of kids enrolling in a program that allows them to take college-level classes shot up from 148 students in 2006 to 412 students last year. Most important, although the makeup of the school has remained about the same—roughly 40 percent of students are poor, a third are Hispanic, and 12 percent are black—a greater proportion of students who enter as freshmen leave wearing a cap and gown. This spring, the graduation rate is expected to hit 80 percent, a staggering improvement over the 63 percent figure that prevailed before the Writing Revolution began. New Dorp, once the black sheep of the borough, is being held up as a model of successful school turnaround. “To be able to think critically and express that thinking, it’s where we are going,” says Dennis Walcott, New York City’s schools chancellor. “We are thrilled with what has happened there.”
In the coming months, the conversation about the importance of formal writing instruction and its place in a public-school curriculum—the conversation that was central to changing the culture at New Dorp—will spread throughout the nation. Over the next two school years, 46 states will align themselves with the Common Core State Standards. For the first time, elementary-school students—who today mostly learn writing by constructing personal narratives, memoirs, and small works of fiction—will be required to write informative and persuasive essays. By high school, students will be expected to produce mature and thoughtful essays, not just in English class but in history and science classes as well.
Fitzgerald's Depression
Among our canonical twentieth-century writers, none suffered this pronouncement—one avoids labeling it a fate—more than F. Scott Fitzgerald. At what should have been the height of his novelistic powers in the mid-1930s, he was listless, reckless in his personal affairs, sick with tuberculosis and jaw-droppingly drunk. As Fitzgerald himself would later admit, he had become a poor caretaker of everything he possessed, even his own talent. After a decade of enviable productivity, his writing had slowed to a trickle of short stories, most of them published in Esquire, his one remaining reliable outlet, and many of these, as the scholar Ruth Prigozy describes them, “elliptical, unadorned, curiously enervated, barely stories at all.”
When the editors of The New Yorker categorically rejected the forty-year-old’s delicate slip of a short story “Thank You for the Light” in 1936 as “altogether out of the question,” their reasons hinged partially on its lack of merits. Few of Fitzgerald’s pieces from the period, this one included, clocked in at the standard commercial length of five thousand words and most of them gave the strong impression that they were both dashed off quickly and forced. They were. Yet I’d hazard that other, more complex reasons for its rejection were in play too, namely the ever-ephemeral nature of the artist’s image and his ability to reflect back to the nation its own acts of bad faith, manias, exuberances and bankrupt ideas.
With a penchant for casting his own experience as a particularly grandiose American brand of success and tragedy and with a proclivity for scripting the drama of the inner life in the language of economics, Fitzgerald declared elsewhere in 1936 that his happiness through the Jazz Age was as “unnatural as the Boom . . . and my recent experience parallels the wave of despair that swept the nation when the Boom was over.” In placing “Thank You” in the reject pile, the editors did not voice their concerns specifically in these national terms, but something like the outsized stakes involved in managing Fitzgerald’s reputation appeared to be on their minds. Calling the story “really too fantastic,” which is to say, ‘odd,’ they concluded, “It seems to us so curious and so unlike the kind of thing we associate with him.”
Not only did it not square with the dashing image of the lyrical, romantic wunderkind of the vertiginous Twenties—which Fitzgerald’s readers were emotionally invested in—but in its small way, it also pulled back the sheet to reveal the unforgiveable American sin of personal failure and diminished talent. As he wrote and sent out “curious” stories that bore the stylistic markings of someone else altogether, and as he watched them come back declined, Fitzgerald understood too well that the conditions of his literary celebrity lay in the past.
by Thomas Heise, Berfrois | Read more:
Illustration: Automat, Edward Hopper, 1927
The Great Rift
In the span of about a week, starting on December 30, 2007, the day that President Mwai Kibaki stood awkwardly in an ill-fitting suit in the backyard of the Nairobi statehouse, Bible in hand, and had himself sworn in after a rigged election, Kenya went from one of the most orderly countries in sub-Saharan Africa to a war zone. The violence was as terrible as it was swift, but the real shock was that it could happen here at all. Kenya had just held two back-to-back national elections, in 2002 and 2005, that were widely praised as free and fair. According to pre-election polls, most Kenyans were backing the opposition candidate, Raila Odinga, and they were expecting a peaceful transfer of power, which has happened only a few times in Africa, but Kenya was thought to be the happy exception, and for good reason.
Having been stationed for the New York Times in Kenya for more than six years, and having reported on Kenya’s amazing distance runners, its second-to-none safari business, and its golf-club-wielding middle class, I watched this country prosper as many other countries in Africa remained stagnant or, worse, imploded further. Kenya was different. It was the anti-Congo, the anti-Burundi, the anti-Sudan, the opposite of African nations where violence rules and the infrastructure is sinking back into the weeds. I used to get back from those countries, places where I feared for my life all the time, and want to kiss the tarmac at Nairobi’s airport. In Kenya, things work. There’s an orderliness here inherited from the British, manifest in the cul-de-sacs with marked street signs in neat black lettering and the SUVs driven by the wildlife rangers somehow without a speck of dirt on them. There are Internet startups, investment banks, a thriving national airline. It is still Africa, and most people are still poor, but even that has been changing. In the mid-2000s, the economy was growing by about 6 percent per year, far faster than those of Western Europe or the U.S., adding hundreds of thousands of new jobs. Kenya’s middle class—around four million people making between three thousand and forty thousand dollars per year—is one of the continent’s largest.
Which is all to say that when Kibaki’s men openly hijacked the vote-counting process and forcibly installed their man, I, along with most Kenyans, was astounded and then quickly appalled. Within minutes of Kibaki taking the oath of office that day, thousands of protesters burst out of Kibera, an enormous shantytown, waving sticks, smashing shacks, burning tires, and hurling stones. Police poured into the streets to control them. In the next few days, gangs went from house to house across the country, dragging out people of certain tribes and clubbing them to death. It was horrifyingly clear what was starting to happen—tribal war—and that promising GDP or literacy-rate statistics were no longer relevant. (...)
The election was the first time in Kenya’s history that tribal politics was dragged into the open and the first time that there was a hotly competitive race between a Kikuyu (Kibaki) and a non-Kikuyu (Odinga, a Luo). There are about forty different ethnic groups or tribes in the country, each with its own language and customs, and the stolen election ignited long-simmering ethnic grievances that many Kenyans had thought, or maybe more aptly, had wished were redressed. In all, at least one thousand people were murdered and about one million displaced. The police, the judiciary, the army, the religious leaders, and especially the politicians all failed their country at the moment when they were needed most.
In much of Africa, if not the world, geography and ethnicity correlate, certain groups dominating certain areas. This was the basis of South Africa’s apartheid-era homeland policy, which sought to relegate every black person in the country to an ethnic homeland. In Kenya, single ethnic groups often overwhelmingly populate a place, like the Luos on the shores of Lake Victoria or the Kikuyus in the foothills around Mt. Kenya. Not so in the Rift Valley. Here Luos, Kikuyus, Kambas, Kipsigis, Nandes, Ogieks (the traditional hunters and gatherers), Luhyas, Masais, and Kisiis are all packed together, drawn by fertile soil and the opportunity for work, making the towns and the countryside cosmopolitan. The multiethnic Rift Valley was the epicenter of the violence, and death squads swept the hills with elemental killing tools—knives, rocks, and fire—singling out families to execute (the stripes of destruction I saw from the helicopter).
Kenya’s portion of the Great Rift Valley seems to belong to another world and another time—lakes so full of flamingoes that the water is actually pink when you scoop it up in your hands, sculpted green mountains nosing the sky, and soils so rich that just about any fruit or vegetable known to man can grow, from mangoes to guava to snow peas to cucumbers to miles and miles of high-quality, disease-resistant corn. Kenya’s natural beauty, so undeniable in the Rift Valley, sent it down a path different from other European colonies: few African areas attracted so many white settlers. South Africa, yes, and Rhodesia (now Zimbabwe) too, but they were qualitatively different, agricultural and mineral-based economies, with legions of working-class whites. Kenya, on the other hand, because of its wildlife and spectacular landscape, became a playground for aristocratic misfits. They came to shoot lions, drink gin, maybe try their hand at gentleman farming, and cheat on their wives. There was a famous expression from colonial-era Kenya: “Are you married, or do you live in Kenya?”
by Jeffrey Gettleman, Lapham's Quarterly | Read more:
Image: Discovery Adventures
Thursday, September 20, 2012
Kamisaka Sekka (1866 - 1942) Japanese Woodblock Print
Rolling Hillside
Sekka’s A World of Things Series (Momoyogusa)
via:
Where Is Cuba Going?
This was the first time I was in post-Fidel Cuba. It was funny to think that not long ago, there were smart people who doubted that such a thing could exist, i.e., who believed that with the fall of Fidel would come the fall of Communism on the island. But Fidel didn’t fall. He did fall, physically — on the tape that gets shown over and over in Miami, of him coming down the ramp after giving that speech in 2004 and tumbling and breaking his knee — but his leadership didn’t. He executed one of the most brilliantly engineered successions in history, a succession that was at the same time a self-entrenchment. First, he faked his own death in a way: serious intestinal operation, he might not make it. Raúl is brought in as “acting president.” A year and a half later, Castro mostly recovered. But Raúl is officially named president, with Castro’s approval. It was almost as if, “Is Fidel still . . . ?” Amazing. So now they rule together, with Raúl out front, but everyone understanding that Fidel retains massive authority. Not to say that Raúl doesn’t wield power — he has always had plenty — but it’s a partnership of some kind. What comes after is as much of a mystery as ever.
Our relationship with them seems just as uncertain. Barack Obama was going to open things up, and he did tinker with the rules regarding travel, but now they say that when you try to follow these rules, you get caught up in all kinds of forms and tape. He eased the restrictions on remittances, so more money is making it back to the island, and that may have made the biggest difference so far. Boats with medical and other relief supplies have recently left Miami, sailing straight to the island, which hasn’t happened in decades. These humanitarian shipments can, according to The Miami Herald, include pretty much anything a Cuban-American family wants to send to its relatives: Barbie dolls, electronics, sugary cereal. In many cases, you have a situation in which the family is first wiring money over, then shipping the goods. The money is used on the other side to pay the various fees associated with getting the stuff. So it’s as if you’re reaching over and re-buying the merchandise for your relatives. The money, needless to say, goes to the government. Still, capitalism is making small inroads. And Raúl has taken baby steps toward us: Cubans can own their own cars, operate their own businesses, own property. That’s all new. For obvious reasons it’s not an immediate possibility for a vast majority of the people, and it could be taken away tomorrow morning by decree, but it matters.
Otherwise, our attitude toward Cuba feels very wait and see, as what we’re waiting to see grows less and less clear. We’ve learned to live with it, like when the doctor says, “What you have could kill you, but not before you die a natural death.” Earlier this year Obama said to a Spanish newspaper: “No authoritarian regime will last forever. The day will come in which the Cuban people will be free.” Not, notice, no dictator can live forever, but no “authoritarian regime.” But how long can one last? Two hundred years?
Perhaps a second term will be different. All presidents, if they want to mess with our Cuba relations at even the microscopic level, find themselves up against the Florida community, and those are large, powerful and arguably insane forces.
My wife’s people got out in the early 1960s, so they’ve been in the States for half a century. Lax regulations, strict regulations. It’s all a oneness. They take, I suppose, a Cuban view, that matters on the island are perpetually and in some way inherently screwed up and have been forever.
There was a moment in the taxi, a little nothing exchange but so densely underlayered with meaning that if you could pass it through an extracting machine, you would understand a lot about how it is between Cubans and Cuban-Americans. The driver, a guy who said he grew up in Havana, told a tiny lie, or a half lie. The fact that you can’t even say whether it was a lie or not is significant. My wife had asked him to explain for me the way it works with Cuba’s two separate currencies, CUPs and CUCs, Cuban pesos and convertible pesos (also called “chavitos” or simply “dollars”). When I was last there, we didn’t use either of these, though both existed. We paid for everything in actual, green U.S. dollars. That’s what people wanted. There were stores in which you could pay in only dollars. But in 2004, Castro decided — partly as a gesture of contempt for the U.S. embargo — that he would abolish the use of U.S. dollars on the island and enforce the use of CUCs, pegged to the U.S. dollar but distinct from it. This coexisted alongside the original currency, which would remain pegged to the spirit of the revolution. For obvious reasons, the actual Cuban peso is worth much less than the other, dollar-equivalent Cuban peso, something on the order of 25 to 1. But the driver said simply, “No, they are equal.”
“Really?” my wife said. “No . . . that can’t be.”
He insisted that there was no difference between the relative values of the currencies. They were the same.
He knew that this was wrong. He probably could have told you the exchange rates from that morning. But he also knew that it had a rightness in it. For official accounting purposes, the two currencies are considered equivalent. Their respective values might fluctuate on a given day, of course, but it couldn’t be said that the CUP was worth less than the CUC. That’s partly what he meant. He also meant that if you’re going to fly to Cuba from Miami and rub it in my face that our money is worth one twenty-fifth of yours, I’m gonna feed you some hilarious communist math and see how you like it. Cubans call it la doble moral. Meaning, different situations call forth different ethical codes. He wasn’t being deceptive. He was saying what my wife forced him to say. She had been a bit breezy, it seemed, in mentioning the unevenness between the currencies, which is the kind of absurdity her family would laugh at affectionately in the kitchen. But they don’t have to suffer it anymore. And he was partly reminding her of that, fencing her off from a conversation in which Cubans would joke together about the notion that the CUP and the CUC had even the slightest connection to each other. That was for them, that laughter. So, a very complex statement, that not-quite-lie. After it, he was totally friendly and dropped us at one of the Cuban-owned tourist hotels on the edge of Havana.
People walking by on the street didn’t seem as skinny. That was the most instantly perceptible difference, if you were seeing Raúl’s Cuba for the first time. They weren’t sickly looking before, but under Fidel you noticed more the way men’s shirts flapped about them and the knobbiness of women’s knees. Now people were filling out their clothes. The island’s overall dietary level had apparently gone up a tick. (One possible factor involved was an increase in the amount of food coming over from the United States. Unknown to most people, we do sell a lot of agricultural products to Cuba, second only in value to Brazil. Under a law that Bill Clinton squeaked through on his way out, Cuba purchases food and medicine from us on a cash basis, meaning, bizarrely, that a lot of the chicken in the arroz con pollo consumed on the island by Canadian tourists is raised in the Midwest — the embargo/blockade has always been messy when you lean in close).
by John Jeremiah Sullivan, NY Times | Read more:
Photo: Andrew Moore/Yancey Richardson Gallery