Saturday, September 22, 2012

How Much Tech Can One City Take?


Last year, when Mayor Ed Lee heard that Twitter was planning to move its headquarters out of San Francisco and down to the peninsula, he quickly consulted with his digital experts—his two daughters, Brianna, 27, and Tania, 30. Was the company important enough to make a top priority? “Of course it’s important, Daddy!” they told him. “We tweet all the time. You have to keep them in town.”

Lee quickly made an appointment with Twitter CEO Dick Costolo to see how San Francisco could hold on to the social media giant. Costolo told the mayor that Twitter was planning to double the size of its local workforce over the next year, from about 450 employees to 1,000. But the city’s policies penalized job growth by taxing a company’s payroll, the Twitter chief said. If San Francisco wanted to attract fast-growing digital companies like his, city hall would have to reform its business-tax structure.

For Lee, the Costolo meeting would prove to be a wake-up call. Little in the mayor’s biography (he’s the son of a Chinese restaurant cook, and a former tenants’ rights agitator) suggested that he would become a close friend of the city’s dominant business interests. But ever since his Twitter awakening, Lee has been moving quickly to align his administration with the booming technology industry, shrugging off complaints from the city’s powerful progressives that he’s gotten too cozy with tech moguls, such as investor Ron Conway. The mayor’s proposal to shift business taxes from a payroll-based plan to one based on gross receipts will be on the November ballot, with wide backing from the Board of Supervisors, labor unions, and, of course, Conway. Progressive gadfly Aaron Peskin tapped a deep well of distrust on the left last month when he told the San Francisco Chronicle, “The Koch brothers are trying to buy the president of the United States, and Ron Conway has bought himself a mayor.”

Lee, unperturbed by the flak on his left, now devotes one afternoon each week to a gathering he calls "Tech Tuesday"—visiting one of the many technology companies that are flocking to the city and discussing with the executives and engineers their wishlists for San Francisco. He has sat down with the geek elite at more than 20 companies so far, including Yelp, Yammer, Autodesk, and Zendesk. The firms’ representatives tell Lee what they like about the city—the bike lanes, the arts, the cultural diversity, the different languages you hear on the streets. And then they tell him what they don’t like—the homelessness, the poor public schools, the crime.

In spite of the obvious urban warts, the word is out: San Francisco is the world’s leading tech paradise. At a rate eclipsing the dot-com boom of the 1990s, tech companies are setting up shop in the city by the hundreds, drawn by its beauty and livability, as well as the deep pool of engineering talent here and, yes, city hall’s increasingly tech-oriented policies.

Young entrepreneurs from as far away as Denmark, Singapore, and France can be seen with real estate agents in tow, roaming through converted South of Market lofts still vacant from when the previous bubble burst more than a decade ago. The city is currently home to more than 1,700 tech firms, which employ 44,000 workers, up a whopping 30 percent from just two years ago. And San Francisco has been the nation’s top magnet for venture capital funding for three years in a row. Consequently, the distinction between Silicon Valley and San Francisco has all but disappeared. It is us, and we are it.

The city is clearly benefiting from this new mind meld. San Francisco’s 7.6 percent unemployment rate handily beats the state’s 10.9 percent rate, and it’s one of the few counties in California that has experienced significant property-tax growth during the economic crisis, driven largely by the hot real estate market in the tech-heavy SoMa area. The new tech boom has helped add $6 billion to the city’s tax rolls over the past year—an increase of more than 4 percent over the previous fiscal year. There’s a sense of pride and excitement in the air, a feeling that—once again—we’re the ones creating the technologies that are driving the digital era. San Francisco is quite literally changing the world.

But despite all this, there is trouble in paradise. The unique urban features that have made San Francisco so appealing to a new generation of digital workers—its artistic ferment, its social diversity, its trailblazing progressive consciousness—are deteriorating, driven out of the city by the tech boom itself, and the rising real estate prices that go with it. Rents are soaring: Units in one Mission district condominium complex recently sold for a record $900 per square foot. And single-family homes in Noe Valley, Bernal Heights, and other attractive city neighborhoods are selling for as much as 40 percent above the asking price. Again and again, you hear of teachers, nurses, firefighters, police officers, artists, hotel and restaurant workers, and others with no stake in the new digital gold rush being squeezed out of the city.

And it’s not just about housing. Many San Franciscans don’t feel as if they’re benefiting from the boom in any way. While 23-year-olds are becoming instant millionaires and the rest of the digital technocracy seek out gourmet restaurants and artisanal bars, a good portion of the city watches from the sidelines, feeling left out and irrelevant. Dot-com decadence is once again creeping into the city of St. Francis, and the tensions between those who own a piece of its future and those who don’t are growing by the day.

In light of this, the time has come for a serious reckoning—for Mayor Lee, for the tech cognoscenti, and for the rest of the populace. In short, do we wish to be a city of enlightenment, or a city of apps? Many of those who have lived in San Francisco the longest and care for it the most are worried that their charmed oasis is becoming a dangerously one-dimensional company town—a techie’s Los Angeles, a VC’s D.C. If San Francisco is swallowed whole by the digital elite, many city lovers fear, the once-lush urban landscape will become as flat as a computer screen.

by David Talbot, San Francisco Magazine |  Read more:
Photography by Peter Belange

Still Too Pricey

Facebook has a business model in need of a radical change and a still-rich $61 billion market value. What's not to "like"? Plenty.

Facebook's 40% plunge from its initial-public-offering price of $38 in May has millions of investors asking a single question: Is the stock a buy? The short answer is "No." After a recent rally, to $23 from a low of $17.55, the stock trades at high multiples of both sales and earnings, even as uncertainty about the outlook for its business grows.

The rapid shift in Facebook's user base to mobile platforms—more than half of users now access the site on smartphones and tablets—appears to have caught the company by surprise. Facebook (ticker: FB) founder and CEO Mark Zuckerberg must find a way to monetize its mobile traffic because usage on traditional PCs, where the company makes virtually all of its money, is declining in its large and established markets. That trend isn't likely to change. (...)

The bull case for Facebook is that Zuckerberg & Co. will find creative ways to generate huge revenue from its 955 million monthly active users, be it from mobile and desktop advertising, e-commerce, search, online-game payments, or sources that have yet to emerge. Pay no attention to depressed current earnings, the argument goes. Facebook is just getting started.

Facebook now gets $5 annually in revenue per user. That could easily double or triple in the next five years, bulls say. In a recent interview at the TechCrunch Disrupt conference, Zuckerberg said, "It's easy to underestimate how fundamentally good mobile is for us." His argument, coming after Facebook's brand-damaging IPO fiasco and a halving of the stock, was something only a mother, or a true believer, could love. This year Facebook is expected to get 5% of its revenue from mobile. "Literally six months ago we didn't run a single ad on mobile," Zuckerberg said. Facebook executives declined to speak with Barron's.

"Anyone who owns Facebook should be exceptionally troubled that they're still trying to 'figure out' mobile monetization and had to lay out $1 billion for Instagram because some start-up had figured out mobile pictures better than Facebook," says one institutional investor, referring to Facebook's April deal for two-year-old Instagram, whose smartphone app for mobile photo-sharing became a big hit (and at the time had yet to generate a nickel in revenue). 

by Andrew Bary, Barron's |  Read more:

Anthropology of Tailgating


Think football, and odds are you think tailgate party. And with good reason — the tailgate party is among the most time-honored and revered American sporting traditions, what with the festivities, the food and the fans. And the beer. Don’t forget the beer.

To the untrained eye, these game-day rituals appear to be little more than a wild party, a hedonistic excuse to get loaded and eat barbecue. Not at all. They are, according to Notre Dame anthropologist John Sherry, bustling microcosms of society where self-regulatory neighborhoods foster inter-generational community, nurture tradition and build the team’s brand.

Sherry didn’t always feel this way. There was a time when he considered tailgating a boisterous nuisance, little more than a gauntlet of unrelated and unruly celebrations to be run if he were to reach his seat in Notre Dame Stadium. But then he had an epiphany: What if there was meaning to the madness?

“One day I slowed down and paid attention to things that were going on that weren’t individual celebrations,” he said of research presented in A Cultural Analysis of Tailgating. “It was much more nuanced than I had thought before.”

Sherry consulted the existing literature on the subject and found bupkis. Most studies on tailgating come to Onion-esque conclusions like “tailgating leads to drunkenness” or examine the environmental impact (.pdf) of all that trash. Sherry looked deeper into tailgating and saw a whole lot of consumption akin to that of, say, ancient harvest festivals. He recruited colleague Tonya Bradford, trained a few research assistants and started attending tailgate parties and interviewing fans to learn more.

Notre Dame was a convenient place to start, given its rich football tradition. But Sherry and Co. hit the road too, attending Irish away games and checking the scene at Big Ten Conference schools. They talked to fans of every stripe, from alumni with six-figure RVs to students. And they discovered what every true football fan eventually discovers.

“What we really found was a real active and orchestrated effort in community building,” said Sherry. “People have tailgated in the same place for years, they have tailgated through generations, they have encountered strangers who have passed through and adopted them to their families and became fast friends. They have created neighborhoods.”

This much was obvious Saturday at the University of Utah-Brigham Young University game I attended. The parking lot around Eccles Stadium was thick with trucks and trailers and RVs, and the air was heavy with the smell of cooking meat. The lot was divided into “streets” and “neighborhoods” populated by fans who have in many cases known each other for years.

by Beth Carter, Wired |  Read more:
Photo: Mike Roemer/Associated Press

Friday, September 21, 2012

John Tavener


Hysteria

Such was the media excitement inspired by the appearance of a vibrator in a late-1990s episode of Sex And The City that one might have thought the device had only just been invented. Any misapprehension is about to be corrected by a new film, Hysteria, which tells the true story of the vibrator's inception. Described by its producers as a Merchant Ivory film with comedy, Hysteria's humour derives chiefly from the surprise of its subject's origins, which are as little known as they are improbable.

The vibrator was, in fact, invented by respectable Victorian doctors, who grew tired of bringing female patients to orgasm using their fingers alone, and so dreamt up a device to do the job for them. Their invention was regarded as a reputable medical instrument – no more improper than a stethoscope – but became wildly popular among Victorian and Edwardian gentlewomen, who soon began buying vibrators for themselves. For its early customers, a vibrator was nothing to be embarrassed about – unlike, it's probably safe to assume, many members of the film's contemporary audience, not to mention some of its stars.

"I've done a lot of 'out there' sexual movies," Maggie Gyllenhaal readily acknowledges, "but this one pushed even my boundaries." Gyllenhaal plays a spirited young Victorian lady, and the love interest of the doctor who invents the vibrator, but admits, "I just think there is something inherently embarrassing about a vibrator. It's not something most people say they've got; nobody talks about that, it's still a secret kind of thing. So it's very difficult," she adds, breaking into a laugh, "to imagine that 100 years ago women didn't have the vote, yet they were going to a doctor's office to get masturbated."

In 19th-century Britain, the condition known as hysteria – which the vibrator was invented to treat – was not a source of embarrassment at all. Hysteria's symptoms included chronic anxiety, irritability and abdominal heaviness, and early medical explanations were inclined to blame some or other fault in the uterus. But in fact these women were suffering from straightforward sexual frustration – and by the mid-19th century the problem had reached epidemic proportions, said to afflict up to 75% of the female population. Yet because the very idea of female sexual arousal was proscribed in Victorian times, the condition was classed as non-sexual. It followed, therefore, that its cure would likewise be regarded as medical rather than sexual.

The only consistently effective remedy was a treatment that had been practised by physicians for centuries, consisting of a "pelvic massage" – performed manually, until the patient reached a "hysterical paroxysm", after which she appeared miraculously restored. The pelvic massage was a highly lucrative staple of many medical practices in 19th-century London, with repeat business all but guaranteed. There is no evidence of any doctor taking pleasure from its provision; on the contrary, according to medical journals, most complained that it was tedious, time-consuming and physically tiring. This being the Victorian age of invention, the solution was obvious: devise a labour-saving device that would get the job done quicker.

by Decca Aitkenhead, The Guardian | Read more:
Photo: Good Vibrations

Google News at 10: How the Algorithm Won Over the News Industry


In April of 2010, Eric Schmidt delivered the keynote address at the conference of the American Society of News Editors in Washington, D.C. During the talk, the then-CEO of Google went out of his way to articulate -- and then reiterate -- his conviction that "the survival of high-quality journalism" was "essential to the functioning of modern democracy."

This was a strange thing. This was the leader of the most powerful company in the world, informing a roomful of professionals how earnestly he would prefer that their profession not die. And yet the speech itself -- I attended it -- felt oddly appropriate in its strangeness. Particularly in light of surrounding events, which would find Bob Woodward accusing Google of killing newspapers. And Les Hinton, then the publisher of the Wall Street Journal, referring to Google's news aggregation service as a "digital vampire." Which would mesh well, of course, with the similarly vampiric accusations that would come from Hinton's boss, Rupert Murdoch -- accusations addressed not just toward Google News, but toward Google as a media platform. A platform that was, Murdoch declared in January 2012, the "piracy leader."

What a difference nine months make. Earlier this week, Murdoch's 20th Century Fox got into business, officially, with Captain Google, cutting a deal to sell and rent the studio's movies and TV shows through YouTube and Google Play. It's hard not to see Murdoch's grudging acceptance of Google as symbolic of a broader transition: producers' own grudging acceptance of a media environment in which they are no longer the primary distributors of their own work. This week's Pax Murdochiana suggests an ecosystem that will find producers and amplifiers working collaboratively, rather than competitively. And working, intentionally or not, toward the earnest end that Schmidt expressed two years ago: "the survival of high-quality journalism."

"100,000 Business Opportunities"

There is, on the one hand, an incredibly simple explanation for the shift in news organizations' attitude toward Google: clicks. Google News was founded 10 years ago -- September 22, 2002 -- and has since functioned not merely as an aggregator of news, but also as a source of traffic to news sites. Google News, its executives tell me, now "algorithmically harvests" articles from more than 50,000 news sources across 72 editions and 30 languages. And Google News-powered results, Google says, are viewed by about 1 billion unique users a week. (Yep, that's billion with a b.) Which translates, for news outlets overall, to more than 4 billion clicks each month: 1 billion from Google News itself and an additional 3 billion from web search.

As a Google representative put it, "That's about 100,000 business opportunities we provide publishers every minute."
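
That per-minute figure follows directly from the monthly click total quoted above. Here is a minimal sketch of the arithmetic in Python, assuming a 30-day month (the month length is my assumption, not Google's):

# Rough check of the "100,000 business opportunities a minute" claim,
# using the roughly 4 billion monthly referral clicks cited above.
clicks_per_month = 4_000_000_000
minutes_per_month = 60 * 24 * 30   # assumed 30-day month
print(round(clicks_per_month / minutes_per_month))   # prints 92593, close to Google's rounded 100,000

which lands within rounding distance of the company's talking point.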

Google emphasizes numbers like these not just because they are fairly staggering in the context of a numbers-challenged news industry, but also because they help the company to make its case to that industry. (For more on this, see James Fallows's masterful piece from the June 2010 issue of The Atlantic.) Talking to Google News executives and team members myself in 2010 -- the height of the industry's aggregatory backlash -- I often got a sense of veiled frustration. And of just a bit of bafflement. When you believe that you're working to amplify the impact of good journalism, it can be strange to find yourself publicly resented by journalists. It can be even stranger to find yourself referred to as a vampire. Or a pirate. Or whatever.

by Megan Garber, The Atlantic |  Read more:

Why I Eloped


When I recently called my mother to tell her that I was getting married, she was ecstatic. After all, my boyfriend, Chris, and I had been together for nearly 10 years, so he had long been part of the family. “When’s the big day?” she asked me.

“In about 20 minutes!” I said, trying to sound perky instead of scared. Though we had decided to get married a few weeks prior, we told almost no one beforehand—not even our parents. And now, we were standing just outside the office of the man who would perform the ceremony.

“You’re getting married today?” she said, shocked. I braced myself for the worst—for her to say that I was robbing her of a precious time in a mother’s life. But she instead declared her unmitigated delight. And with that blessing on hand, I was wed. Chris, the officiant, and I were the only three people in the room.

Now a mere month into my marriage, perhaps it is dangerous to declare, “We did it the right way.” But as I look back at my humble little wedding, I feel pride—and the more I think about it, the more it seems that everyone should elope.

I love a good wedding just as I love any party with an open bar and “The Electric Slide.” But unless you are wealthy, come from a family that has never known strife, enjoy giving up an entire year of your life to planning, and can smile in the face of any possible wedding disaster (and mean it, not just for pictures), you should elope. That’s because weddings—even small-scale ones—are more pageant than sincerity.

True, I was never the fairy tale wedding type. As a child, I didn’t play bride unless peer-pressured. I can’t recall ever fantasizing about my wedding dress, let alone the flowers, the color scheme, or the cake. (Well, maybe the taste of the cake.) My father died when I was 11, and though I could foresee regretting many moments we would never share, walking down the aisle wasn’t among them. Because despite the popular idea that “every little girl dreams of her wedding”—an idea that keeps TLC churning out wedding reality shows—this is not so. I always dreamed of a lifelong partnership but never thought much of the froufrou affair.

The obvious reason to elope is the money. Over the summer, Brides magazine reported that, even in these tough economic times, the average couple spends nearly $27,000 on their nuptials. I have some doubts about that figure—the respondents were readers of Brides magazine and its website, a group already inclined to go veils-to-the-wall for a wedding. But there is no question that weddings, even those done on the cheap, cost far more than many couples can afford. While I have no qualms with the well-off (and their parents) shelling out for a classy affair, I did not want to go into debt or decimate my hard-earned savings for a party.

My primary objections to a “real” wedding go beyond the financial, however.

by Torie Bosch, Slate |  Read more:
Photo: Gerald Williams

The Writing Revolution

In 2009, when Monica DiBella entered New Dorp, a notorious public high school on Staten Island, her academic future was cloudy. Monica had struggled to read in early childhood, and had repeated first grade. During her elementary-school years, she got more than 100 hours of tutoring, but by fourth grade, she’d fallen behind her classmates again. In the years that followed, Monica became comfortable with math and learned to read passably well, but never seemed able to express her thoughts in writing. During her freshman year at New Dorp, a ’70s-style brick behemoth near a grimy beach, her history teacher asked her to write an essay on Alexander the Great. At a loss, she jotted down her opinion of the Macedonian ruler: “I think Alexander the Great was one of the best military leaders.” An essay? “Basically, that wasn’t going to happen,” she says, sweeping her blunt-cut brown hair from her brown eyes. “It was like, well, I got a sentence down. What now?” Monica’s mother, Santa, looked over her daughter’s answer—six simple sentences, one of which didn’t make sense—with a mixture of fear and frustration. Even a coherent, well-turned paragraph seemed beyond her daughter’s ability. An essay? “It just didn’t seem like something Monica could ever do.”

For decades, no one at New Dorp seemed to know how to help low-performing students like Monica, and unfortunately, this troubled population made up most of the school, which caters primarily to students from poor and working-class families. In 2006, 82 percent of freshmen entered the school reading below grade level. Students routinely scored poorly on the English and history Regents exams, a New York State graduation requirement: the essay questions were just too difficult. Many would simply write a sentence or two and shut the test booklet. In the spring of 2007, when administrators calculated graduation rates, they found that four out of 10 students who had started New Dorp as freshmen had dropped out, making it one of the 2,000 or so lowest-performing high schools in the nation. City officials, who had been closing comprehensive high schools all over New York and opening smaller, specialized ones in their stead, signaled that New Dorp was in the crosshairs.

And so the school’s principal, Deirdre DeAngelis, began a detailed investigation into why, ultimately, New Dorp’s students were failing. By 2008, she and her faculty had come to a singular answer: bad writing. Students’ inability to translate thoughts into coherent, well-argued sentences, paragraphs, and essays was severely impeding intellectual growth in many subjects. Consistently, one of the largest differences between failing and successful students was that only the latter could express their thoughts on the page. If nothing else, DeAngelis and her teachers decided, beginning in the fall of 2009, New Dorp students would learn to write well. “When they told me about the writing program,” Monica says, “well, I was skeptical.” With disarming candor, sharp-edged humor, and a shy smile, Monica occupies the middle ground between child and adult—she can be both naive and knowing. “On the other hand, it wasn’t like I had a choice. I go to high school. I figured I’d give it a try.”

New Dorp’s Writing Revolution, which placed an intense focus, across nearly every academic subject, on teaching the skills that underlie good analytical writing, was a dramatic departure from what most American students—especially low performers—are taught in high school. The program challenged long-held assumptions about the students and bitterly divided the staff. It also yielded extraordinary results. By the time they were sophomores, the students who had begun receiving the writing instruction as freshmen were already scoring higher on exams than any previous New Dorp class. Pass rates for the English Regents, for example, bounced from 67 percent in June 2009 to 89 percent in 2011; for the global-history exam, pass rates rose from 64 to 75 percent. The school reduced its Regents-repeater classes—cram courses designed to help struggling students collect a graduation requirement—from five classes of 35 students to two classes of 20 students.

The number of kids enrolling in a program that allows them to take college-level classes shot up from 148 students in 2006 to 412 students last year. Most important, although the makeup of the school has remained about the same—roughly 40 percent of students are poor, a third are Hispanic, and 12 percent are black—a greater proportion of students who enter as freshmen leave wearing a cap and gown. This spring, the graduation rate is expected to hit 80 percent, a staggering improvement over the 63 percent figure that prevailed before the Writing Revolution began. New Dorp, once the black sheep of the borough, is being held up as a model of successful school turnaround. “To be able to think critically and express that thinking, it’s where we are going,” says Dennis Walcott, New York City’s schools chancellor. “We are thrilled with what has happened there.”

In the coming months, the conversation about the importance of formal writing instruction and its place in a public-school curriculum—the conversation that was central to changing the culture at New Dorp—will spread throughout the nation. Over the next two school years, 46 states will align themselves with the Common Core State Standards. For the first time, elementary-school students—who today mostly learn writing by constructing personal narratives, memoirs, and small works of fiction—will be required to write informative and persuasive essays. By high school, students will be expected to produce mature and thoughtful essays, not just in English class but in history and science classes as well.

by Peg Tyre, The Atlantic |  Read more:
Photo: Kyoto Hamada

Fitzgerald's Depression


After forty, all life is a matter of saving face. For those whose successes have run out early, the years are measured less by the decreasing increments of honors achieved, than by the humiliations staved off and the reversals slowed.

Among our canonical twentieth-century writers, none suffered this pronouncement—one avoids labeling it a fate—more than F. Scott Fitzgerald. At what should have been the height of his novelistic powers in the mid-1930s, he was listless, reckless in his personal affairs, sick with tuberculosis and jaw-droppingly drunk. As Fitzgerald himself would later admit, he had become a poor caretaker of everything he possessed, even his own talent. After a decade of enviable productivity, his writing had slowed to a trickle of short stories, most of them published in Esquire, his one remaining reliable outlet, and many of these, as the scholar Ruth Prigozy describes them, “elliptical, unadorned, curiously enervated, barely stories at all.”

When the editors of The New Yorker categorically rejected the forty-year-old’s delicate slip of a short story “Thank You for the Light” in 1936 as “altogether out of the question,” their reasons hinged partially on its lack of merits. Few of Fitzgerald’s pieces from the period, this one included, clocked in at the standard commercial length of five thousand words and most of them gave the strong impression that they were both dashed off quickly and forced. They were. Yet I’d hazard that other, more complex reasons for its rejection were in play too, namely the ever-ephemeral nature of the artist’s image and his ability to reflect back to the nation its own acts of bad faith, manias, exuberances and bankrupt ideas.

With a penchant for casting his own experience as a particularly grandiose American brand of success and tragedy and with a proclivity for scripting the drama of the inner life in the language of economics, Fitzgerald declared elsewhere in 1936 that his happiness through the Jazz Age was as “unnatural as the Boom . . . and my recent experience parallels the wave of despair that swept the nation when the Boom was over.” In placing “Thank You” in the reject pile, the editors did not voice their concerns specifically in these national terms, but something like the outsized stakes involved in managing Fitzgerald’s reputation appeared to be on their minds. Calling the story “really too fantastic,” which is to say, ‘odd,’ they concluded, “It seems to us so curious and so unlike the kind of thing we associate with him.”

Not only did it not square with the dashing image of the lyrical, romantic wunderkind of the vertiginous Twenties—which Fitzgerald’s readers were emotionally invested in—but in its small way, it also pulled back the sheet to reveal the unforgivable American sin of personal failure and diminished talent. As he wrote and sent out “curious” stories that bore the stylistic markings of someone else altogether, and as he watched them come back declined, Fitzgerald understood too well that the conditions of his literary celebrity lay in the past.

by Thomas Heise, Berfrois |  Read more:
Illustration: Automat, Edward Hopper, 1927

The Pretty Things



Odilon Redon (1840-1916)
Flowers in a blue jug, 1910
via:

The Great Rift

In the span of about a week, starting on December 30, 2007, the day that President Mwai Kibaki stood awkwardly in an ill-fitting suit in the backyard of the Nairobi statehouse, Bible in hand, and had himself sworn in after a rigged election, Kenya went from one of the most orderly countries in sub-Saharan Africa to a war zone. The violence was as terrible as it was swift, but the real shock was that it could happen here at all. Kenya had just held two back-to-back national elections, in 2002 and 2005, that were widely praised as free and fair. According to pre-election polls, most Kenyans were backing the opposition candidate, Raila Odinga, and they were expecting a peaceful transfer of power, which has happened only a few times in Africa, but Kenya was thought to be the happy exception, and for good reason.

Having been stationed for the New York Times in Kenya for more than six years, and having reported on Kenya’s amazing distance runners, its second-to-none safari business, and its golf-club-wielding middle class, I watched this country prosper as many other countries in Africa remained stagnant or, worse, imploded further. Kenya was different. It was the anti-Congo, the anti-Burundi, the anti-Sudan, the opposite of African nations where violence rules and the infrastructure is sinking back into the weeds. I used to get back from those countries, places where I feared for my life all the time, and want to kiss the tarmac at Nairobi’s airport. In Kenya, things work. There’s an orderliness here inherited from the British, manifest in the cul-de-sacs with marked street signs in neat black lettering and the SUVs driven by the wildlife rangers somehow without a speck of dirt on them. There are Internet startups, investment banks, a thriving national airline. It is still Africa, and most people are still poor, but even that has been changing. In the mid-2000s, the economy was growing by about 6 percent per year, far faster than those of Western Europe or the U.S., adding hundreds of thousands of new jobs. Kenya’s middle class—around four million people making between three thousand and forty thousand dollars per year—is one of the continent’s largest.

Which is all to say that when Kibaki’s men openly hijacked the vote-counting process and forcibly installed their man, I, along with most Kenyans, was astounded and then quickly appalled. Within minutes of Kibaki taking the oath of office that day, thousands of protesters burst out of Kibera, an enormous shantytown, waving sticks, smashing shacks, burning tires, and hurling stones. Police poured into the streets to control them. In the next few days, gangs went from house to house across the country, dragging out people of certain tribes and clubbing them to death. It was horrifyingly clear what was starting to happen—tribal war—and that promising GDP or literacy-rate statistics were no longer relevant. (...)

The election was the first time in Kenya’s history that tribal politics was dragged into the open and the first time that there was a hotly competitive race between a Kikuyu (Kibaki) and a non-Kikuyu (Odinga, a Luo). There are about forty different ethnic groups or tribes in the country, each with its own language and customs, and the stolen election ignited long-simmering ethnic grievances that many Kenyans had thought, or maybe more aptly, had wished were redressed. In all, at least one thousand people were murdered and about one million displaced. The police, the judiciary, the army, the religious leaders, and especially the politicians all failed their country at the moment when they were needed most.

In much of Africa, if not the world, geography and ethnicity correlate, certain groups dominating certain areas. This was the basis of South Africa’s apartheid-era homeland policy, which sought to relegate every black person in the country to an ethnic homeland. In Kenya, single ethnic groups often overwhelmingly populate a place, like the Luos on the shores of Lake Victoria or the Kikuyus in the foothills around Mt. Kenya. Not so in the Rift Valley. Here Luos, Kikuyus, Kambas, Kipsigis, Nandes, Ogieks (the traditional hunters and gatherers), Luhyas, Masais, and Kisiis are all packed together, drawn by fertile soil and the opportunity for work, making the towns and the countryside cosmopolitan. The multiethnic Rift Valley was the epicenter of the violence, and death squads swept the hills with elemental killing tools—knives, rocks, and fire—singling out families to execute (the stripes of destruction I saw from the helicopter).

Kenya’s portion of the Great Rift Valley seems to belong to another world and another time—lakes so full of flamingoes that the water is actually pink when you scoop it up in your hands, sculpted green mountains nosing the sky, and soils so rich that just about any fruit or vegetable known to man can grow, from mangoes to guava to snow peas to cucumbers to miles and miles of high-quality, disease-resistant corn. Kenya’s natural beauty, so undeniable in the Rift Valley, sent it down a path different from other European colonies: few African areas attracted so many white settlers. South Africa, yes, and Rhodesia (now Zimbabwe) too, but they were qualitatively different, agricultural and mineral-based economies, with legions of working-class whites. Kenya, on the other hand, because of its wildlife and spectacular landscape, became a playground for aristocratic misfits. They came to shoot lions, drink gin, maybe try their hand at gentleman farming, and cheat on their wives. There was a famous expression from colonial-era Kenya: “Are you married, or do you live in Kenya?”

by Jeffrey Gettleman, Lapham's Quarterly |  Read more:

Thursday, September 20, 2012


Kamisaka Sekka (1866 - 1942) Japanese Woodblock Print
Rolling Hillside
Sekka’s A World of Things Series (Momoyogusa)
via: