Saturday, July 21, 2012
Blogs to Riches: The Haves and Have Nots of the Blogging Boom
[ed. For your reading pleasure, humble dispatches from the D-list.]
In theory, sure. But if you talk to many of today’s bloggers, they’ll complain that the game seems fixed. They’ve targeted one of the more lucrative niches—gossip or politics or gadgets (or sex, of course)—yet they cannot reach anywhere close to the size of the existing big blogs. It’s as if there were an A-list of a few extremely lucky, well-trafficked blogs—then hordes of people stuck on the B-list or C-list, also-rans who can’t figure out why their audiences stay so comparatively puny no matter how hard they work. “It just seems like it’s a big in-party,” one blogger complained to me. (Indeed, a couple of pranksters last spring started a joke site called Blogebrity and posted actual lists of the blogs they figured were A-, B-, and C-level famous.)
That’s a lot of inequality for a supposedly democratic medium. Not long ago, Clay Shirky, an instructor at New York University, became interested in this phenomenon—and argued that there is a scientific explanation. Shirky specializes in the social dynamics of the Internet, including “network theory”: a mathematical model of how information travels inside groups of loosely connected people, such as users of the Web.
To analyze the disparities in the blogosphere, Shirky took a sample of 433 blogs. Then he counted an interesting metric: the number of links that pointed toward each site (“inbound” links, as they’re called). Why links? Because they are the most important and visible measure of a site’s popularity. Links are the chief way that visitors find new blogs in the first place. Bloggers almost never advertise their sites; they don’t post billboards or run blinking trailers on top of cabs. No, they rely purely on word of mouth. Readers find a link to Gawker or Andrew Sullivan on a friend’s site, and they follow it. A link is, in essence, a vote of confidence that a fan leaves inscribed in cyberspace: Check this site out! It’s cool! What’s more, Internet studies have found that inbound links are an 80 percent–accurate predictor of traffic. The more links point to you, the more readers you have. (Well, almost. But the exceptions tend to prove the rule: Fleshbot, for example. The sex blog has 300,000 page views per day but relatively few inbound links. Not many readers are willing to proclaim their porn habits with links, understandably.)
When Shirky compiled his analysis of links, he saw that the smaller bloggers’ fears were perfectly correct: There is enormous inequity in the system. A very small number of blogs enjoy hundreds and hundreds of inbound links—the A-list, as it were. But almost all others have very few sites pointing to them. When Shirky sorted the 433 blogs from most linked to least linked and lined them up on a chart, the curve began up high, with the lucky few. But then it quickly fell into a steep dive, flattening off into the distance, where the vast majority of ignored blogs reside. The A-list is teensy, the B-list is bigger, and the C-list is simply massive. In the blogosphere, the biggest audiences—and the advertising revenue they bring—go to a small, elite few. Most bloggers toil in total obscurity.
Economists and network scientists have a name for Shirky’s curve: a “power-law distribution.” Power laws are not limited to the Web; in fact, they’re common to many social systems. If you chart the world’s wealth, it forms a power-law curve: A tiny number of rich people possess most of the world’s capital, while almost everyone else has little or none. The employment of movie actors follows the curve, too, because a small group appears in dozens of films while the rest are chronically underemployed. The pattern even emerges in studies of sexual activity in urban areas: A small minority bed-hop, while the rest of us are mostly monogamous.
The power law is dominant because of a quirk of human behavior: When we are asked to decide among a dizzying array of options, we do not act like dispassionate decision-makers, weighing each option on its own merits. Movie producers pick stars who have already been employed by other producers. Investors give money to entrepreneurs who are already loaded with cash. Popularity breeds popularity.
“It’s not about moral failings or any sort of psychological thing. People aren’t lazy—they just base their decisions on what other people are doing,” Shirky says. “It’s just social physics. It’s like gravity, one of those forces.”
Power laws are arguably part of the very nature of links. To explain why, Shirky poses a thought experiment: Imagine that 1,000 people were all picking their favorite ten blogs and posting lists of those links. Alice, the first person, would read a few, pick some favorites, and put up a list of links pointing to them. The next person, Bob, is thus incrementally more likely to pick Alice’s favorites and include some of them on his own list. The third person, Carmen, is affected by the choices of the first two, and so on. This repeats until a feedback loop emerges. Those few sites lucky enough to acquire the first linkages grow rapidly off their early success, acquiring more and more visitors in a cascade of popularity. So even if the content among competitors is basically equal, there will still be a tiny few that rise up to form an elite.
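[ed. Shirky’s thought experiment is easy to reproduce. Below is a minimal sketch in Python, on the assumption that each new reader picks favorites with probability proportional to the links a blog has already collected; the 1,000 readers, 433 blogs, and ten picks apiece echo the figures above, but everything else is illustrative.]

```python
import random

def simulate_blog_links(n_readers=1000, n_blogs=433, picks_per_reader=10, seed=42):
    """Crude sketch of the cumulative-advantage loop: each reader posts links
    to ten blogs, chosen mostly by following the links earlier readers left."""
    rng = random.Random(seed)
    inbound = [1] * n_blogs  # give every blog one nominal link so it can be discovered at all

    for _ in range(n_readers):
        chosen = set()
        while len(chosen) < picks_per_reader:
            # pick a blog with probability proportional to its current inbound links
            blog = rng.choices(range(n_blogs), weights=inbound, k=1)[0]
            chosen.add(blog)
        for blog in chosen:
            inbound[blog] += 1

    return sorted(inbound, reverse=True)  # most-linked first, like Shirky's chart

if __name__ == "__main__":
    curve = simulate_blog_links()
    print("most-linked blogs:", curve[:5])          # a lucky, heavily linked few
    print("median blog:", curve[len(curve) // 2])   # the long, mostly ignored tail
```

Sorting the simulated inbound-link counts from most to least reproduces the shape described above: a tiny, heavily linked head followed by a long, flat tail, even though no blog in the simulation is “better” than any other.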
First-movers get a crucial leg up in this kind of power-law system. This is certainly true of the blogosphere. If you look at the top 100 most-linked-to blogs as ranked by Technorati—a company that scans the blogosphere every day—many of those at the top were first-movers, the pioneers in their fields. With 19,764 inbound links, the No. 1 site is Boing Boing, a tech blog devoted to geek news and nerd trivia; it has been online for five years, making it a grandfather in the field. In the gossip-blog arena, Gawker is the graybeard, having launched in 2002. With 4,790 sites now linking to it, Gawker towers above more-recent entrants such as PerezHilton.com (with 1,549 links) and Jossip (with 814). In politics, the highest-ranked is Daily Kos, one of the first liberal blogs—with 11,182 links—followed closely by Instapundit, an early right-wing blog, with 6,513. Uncountable teensy political blogs lie in their shadows.
In scientific terms, this pattern is called “homeostasis”—the tendency of networked systems to become self-reinforcing. “It’s the same thing you see in economies—the rich-get-richer problem,” Shirky notes.
by Clive Thompson, New York Magazine | Read more:
Photo: Ben Fry
Why We Should Stop Talking About 'Bus Stigma'
"So, Mr. Walker. If we adopt this plan of yours, does that mean I’m going to leave my BMW in the driveway?"
Years later, on my book tour, I was at dinner with some architects when the conversation slipped into one of those abstract rail versus bus debates. One woman, a leading architecture scholar, said: "But I simply wouldn’t ride a bus," as though that settled the matter.
Both of these people are prosperous, successful, and (if it matters) white. So both are likely to be counted as data points when people argue that there is an American "stigma" about buses, felt mostly by white and successful people, and that transit agencies need to "break through" that stigma to achieve relevance.
There is a simpler explanation. These two people are relatively elite, as are most of our decision-makers. Elected officials and leading professionals are nothing like a representative slice of the population. Many have the best of intentions and a strong commitment to sustainable urbanism, but some still make the mistake of assuming that a transit service that they personally wouldn’t ride must not be accomplishing anything important.
Elites are by definition a small minority, so it makes no sense to define a vast transit network around their personal tastes. Even when we’ve achieved all our sustainability goals, that particular city councilman can still drive his BMW everywhere, and that leading architecture scholar need never set foot on a bus. It doesn’t matter much what they do, because there just aren’t very many of them.
This, after all, is how Germany works. Germany is a world leader in the design of expensive luxury cars, and has a network of freeways with no speed limits where you can push these cars to their ecstatic edge. But most urban travel in Germany happens on bikes, feet, or civilized and useful public transit systems in pleasant and sustainable cities. Transit’s purpose is to appeal to massive numbers of diverse riders, not chase the choosy few who would rather be on the Autobahn.
All of this came to mind in reading Amanda Hess’s recent Atlantic Cities article, "Race, Class and the Stigma of Riding the Bus in America." Hess argues that the predominance of minority and low-income people on the bus is evidence of an American bus "stigma." "In Los Angeles," she writes, "92 percent of bus riders are people of color. Their annual median household income is $12,000."
The reference to race is a distraction. The service area of the Los Angeles MTA is well over 70 percent people of color. What’s more, whites are more likely to live in low-density areas with obstructed street patterns where effective bus service is impossible. So people of color in L.A. may be over 80 percent of the residents for whom the MTA can be useful, which means that the number of white bus riders is not far off what we should expect.
When it comes to income – or "class," as she calls it – Hess has a point. Median income among Los Angeles MTA bus riders is well below the average for its service area, as is true of most urban transit agencies.
Notice what happens, though, when you say "class" instead of "income." Income is obviously a spectrum, with families and people scattered at every point along it. But "class" sounds like a set of boxes. American discourse is full of words that describe class as a box that you’re either in or out of: poverty, the middle class, the working class, the wealthy, the top one percent. We tend to use the word "class" when we want to imply a permanent condition. You can move gradually along the spectrum of income, but you must break through fortress walls to advance in "class."
by Jarrett Walker, Atlantic Cities | Read more:
Photo credit: Dave Newman/Shutterstock.com
The Genetics of Stupidity
What if we’ve been thinking about the genetics of intelligence from completely the wrong angle? Intelligence (as indexed by IQ or the general intelligence factor “g”) is clearly highly heritable in humans – people who are more genetically similar are also more similar in this factor. (Genetic variance has been estimated as explaining ~75% of variance in g, depending on age and other factors). There must therefore be genetic variants in the population that affect intelligence – so far, so good. But the search for such variants has, at its heart, an implicit assumption: that these variants affect intelligence in a fairly specific way – that they will occur in genes “for intelligence”.
An implication of that phrase is that mutations in those genes were positively selected for at some stage in humanity’s descent from our common ancestor with apes, on the basis of conferring increased intelligence. This seems a fairly reasonable leap to make – such genes must exist and, if variation in these genes in humanity’s evolution could affect intelligence, then maybe variation in those same genes can explain variation within the human species.
The problem with that logic is that we are talking about two very different types of variation. On the one hand, mutations that arose during human evolution that conferred increased intelligence (through whatever mechanism) will have been positively selected for and fixed in the population. How this happened is unknown of course, but one can imagine an iterative process, where initial small changes in, say, the timing of processes of brain development led to small increases in intelligence. Increased cognitive capabilities could have led in turn to the emergence of crude communication and culture, opening up what has been called the “cognitive niche” – creating an environment where further increases in intelligence became selectively more and more advantageous – a runaway process, where genetic changes bootstrap on cultural development in a way that reinforces their own adaptiveness.
That’s all nice, though admittedly speculative, but those mutations are the ones that we would expect to not vary in human populations – they would now be fixed. In particular, there is little reason to expect that there would exist new mutations in such genes, present in some but not all humans, which act to further increase intelligence. This is simply a matter of probabilities: the likelihood of a new mutation in some such gene changing its activity in a way that is advantageous is extremely low, compared to the likelihood of it either having no effect or being deleterious. There are simply many more ways of screwing something up than of improving it.
That is true for individual proteins and it is true at a higher level, for organismal traits that affect fitness (the genetic components of which have presumably been optimised by millions of years of evolution). Mutations are much more likely to cause a decrement in such traits than to improve them. So maybe we’re thinking about the genetics of g from the wrong perspective – maybe we should be looking for mutations that decrease intelligence from some Platonic ideal of a “wild-type” human. Thinking in this way – about “mutations that affect” a trait, rather than “genes for” the trait – changes our expectations about the type of variation that could be contributing to the heritability of the trait.
Mutations that lower intelligence could be quite non-specific, diverse and far more idiosyncratic. The idea of a finite, stable and discrete set of variants that specifically contribute to intelligence levels and that simply get shuffled around in human populations may be a fallacy. That view is supported by the fact that genome-wide association studies for common variants affecting intelligence have so far come up empty.
Various researchers have suggested that g may be simply an index of a general fitness factor – an indirect measure of the mutational load of an organism. The idea is that, while we all carry hundreds of deleterious mutations, some of us carry more than others, or ones with more severe effects. These effects in combination can degrade the biological systems of development and physiology in a general way, rendering them less robust and less able to generate our Platonic, ideal phenotype. In this model, it is not the idea that specific mutations have specific effects on specific traits that matters so much – it is that the overall load cumulatively reduces fitness through effects at the systems level. This means that the mutations affecting intelligence in one person may be totally different from those affecting it in another – there will be no genes “for intelligence”.
Direct evidence for this kind of effect of mutational load was found recently in a study by Ronald Yeo and colleagues, showing that the overall burden of rare copy number variants (deletions or duplications of segments of chromosomes) negatively predicts intelligence (r = -0.3).
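[ed. A toy illustration of the mutational-load idea, not Yeo’s actual analysis: give each simulated person a random handful of rare deleterious variants, let each variant shave a couple of points off a general “system quality” score that IQ partly reflects, and a modest negative correlation falls out. Every parameter below is invented for illustration.]

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate_load_vs_iq(n_people=20_000, effect_per_variant=2.5, noise_sd=12.0, seed=1):
    """Toy model: IQ = 100 minus a small penalty per rare deleterious variant,
    plus a lot of unrelated noise. All numbers here are made up."""
    rng = random.Random(seed)
    burdens, iqs = [], []
    for _ in range(n_people):
        burden = sum(rng.random() < 0.3 for _ in range(10))  # crude count of rare hits, mean ~3
        iq = 100 - effect_per_variant * burden + rng.gauss(0, noise_sd)
        burdens.append(burden)
        iqs.append(iq)
    return pearson(burdens, iqs)

if __name__ == "__main__":
    print(f"simulated burden/IQ correlation: {simulate_load_vs_iq():.2f}")  # comes out near -0.3
```

The point of the sketch is only that the specific mutations degrading the score can differ in every individual, yet the aggregate burden still shows up as a weak negative correlation of roughly the size reported.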
If g really is an index of a general fitness factor, then it should be correlated with other indices of fitness. This indeed appears to be the case. G is weakly positively correlated with height, for example, and also strongly correlated with various measures of health and longevity.
by Kevin Mitchell, Wiring the Brain | Read more:
A Real-Life Fairy Tale, Long in the Making: Searching for Sugar Man
It’s a real-life tale of talent disregarded, bad luck and missed opportunities, with an improbable stop in the Hamptons and a Hollywood conclusion: A singer-songwriter is signed to a contract in the late 1960s after producers with ties to Motown Records see him playing in a smoky Detroit nightclub called the Sewer. He makes a pair of albums that sell almost nothing and then drops out of sight. So why, 40 years later, would anyone feel compelled to make a movie about this obscure artist, known professionally as Rodriguez?
Because, as it turns out, on the other side of the globe, in South Africa, Rodriguez had become as popular as the Rolling Stones or Elvis Presley. But he never knew of that success. He never saw a penny in royalties from it, and he spent decades doing manual labor to make ends meet and raise his three daughters. It wasn’t until fans in South Africa, trying to verify rumors he was dead, tracked him down through the Internet and brought him there to perform to adoring multitudes, that his career was resuscitated.
“This was the greatest, the most amazing, true story I’d ever heard, an almost archetypal fairy tale,” said Malik Bendjelloul, the Swedish director of “Searching for Sugar Man,” a documentary that opens on Friday in New York and Los Angeles. “It’s a perfect story. It has the human element, the music aspect, a resurrection and a detective story.”
Because of an odd confluence of circumstances it is also a story unlikely ever to occur again. In the era before the World Wide Web, South Africans, living under apartheid and isolated from the main currents of pop culture by domestic censorship and international sanctions, had no idea that Rodriguez languished in anonymity elsewhere. The singer himself compounded the situation by seeking to live as inconspicuously as possible.
On another, somewhat more oblique level, Mr. Bendjelloul acknowledged, “Searching for Sugar Man” can also be interpreted as a meditation on the fickle and arbitrary nature of celebrity and fame. We live in a culture, the film suggests, in which talent and quality sometimes go ignored, and when they get belated recognition, even that is often through happenstance.
“I’ve produced a lot of big-name artists with big hits, like Peter Frampton and Jerry Lee Lewis, but I’ve never worked with anyone as talented as Rodriguez,” Steve Rowland, who produced the singer’s second album, “Coming From Reality,” said in a telephone interview from his home in Palm Springs, Calif. “I never understood why he didn’t become a big star, so to see him rise like a phoenix from the ashes, it’s just as inexplicable, but it makes me really, really happy this is going on for him, because he’s a wonderful, humble person, and he really deserves it.”
by Larry Rohter, NY Times | Read more:
Photo: Nicole Bengiveno
Friday, July 20, 2012
When Fashion Meets Fishing, the Feathers Fly
In an improbable collision of cutting-edge chic and a hobby that requires drab waders, fly fishing shops around the country are suddenly inundated with stylish women looking to get in on the latest trend: long, colorful feathers that are bonded or clipped into hair.
Demand for the feathers, before now exclusively the domain of fly fishermen, who use them to tie flies, has created a shortage, forcing up the price and causing fly shops and hairdressers to compete for the elusive plumes.
“I’ve been out for probably a month,” said Bill Thompson, the owner of North Country Angler in North Conway, N.H. “There is that worry that next year, fishermen won’t have materials they’ll need.”
The circumstances are especially strange because a proudly stodgy and tradition-bound industry content to hide from the world beyond the river is competing in this niche marketplace with a fad that may not last as long as a trout’s spawning season.
“For someone to use them as a fashion statement is just sacrilegious,” said Bob Brown, 65, a fly fisherman who lives in a recreational vehicle parked in Kennebunk, Me. He said he had been tying flies for 50 years and that this was the first time he had ever heard of a feather shortage.
“They’ve been genetically bred for fly tying, and that’s what they should be used for,” Mr. Brown said.
Fly fishing feathers — which individually are called hackles and, as a group, saddles — are harvested from roosters painstakingly bred to grow supple feathers. It takes more than a year for a rooster to grow feathers long and pliable enough for use by fly fishermen. Because no one could have predicted the fashion trend, there are not enough to go around.
by Katie Zezima, NY Times | Read more:
Photo: Craig Dilger
You Walk Wrong
Walking is easy. It’s so easy that no one ever has to teach you how to do it. It’s so easy, in fact, that we often pair it with other easy activities—talking, chewing gum—and suggest that if you can’t do both simultaneously, you’re some sort of insensate clod. So you probably think you’ve got this walking thing pretty much nailed. As you stroll around the city, worrying about the economy, or the environment, or your next month’s rent, you might assume that the one thing you don’t need to worry about is the way in which you’re strolling around the city.
Well, I’m afraid I have some bad news for you: You walk wrong.
Look, it’s not your fault. It’s your shoes. Shoes are bad. I don’t just mean stiletto heels, or cowboy boots, or tottering espadrilles, or any of the other fairly obvious foot-torture devices into which we wincingly jam our feet. I mean all shoes. Shoes hurt your feet. They change how you walk. In fact, your feet—your poor, tender, abused, ignored, maligned, misunderstood feet—are getting trounced in a war that’s been raging for roughly a thousand years: the battle of shoes versus feet.
Last year, researchers at the University of the Witwatersrand in Johannesburg, South Africa, published a study titled “Shod Versus Unshod: The Emergence of Forefoot Pathology in Modern Humans?” in the podiatry journal The Foot. The study examined 180 modern humans from three different population groups (Sotho, Zulu, and European), comparing their feet to one another’s, as well as to the feet of 2,000-year-old skeletons. The researchers concluded that, prior to the invention of shoes, people had healthier feet. Among the modern subjects, the Zulu population, which often goes barefoot, had the healthiest feet while the Europeans—i.e., the habitual shoe-wearers—had the unhealthiest. One of the lead researchers, Dr. Bernhard Zipfel, when commenting on his findings, lamented that the American Podiatric Medical Association does not “actively encourage outdoor barefoot walking for healthy individuals. This flies in the face of the increasing scientific evidence, including our study, that most of the commercially available footwear is not good for the feet.”
Okay, so shoes can be less than comfortable. If you’ve ever suffered through a wedding in four-inch heels or patent-leather dress shoes, you’ve probably figured this out. But does that really mean we don’t walk correctly? (Yes.) I mean, don’t we instinctively know how to walk? (Yes, sort of.) Isn’t walking totally natural? Yes—but shoes aren’t.
“Natural gait is biomechanically impossible for any shoe-wearing person,” wrote Dr. William A. Rossi in a 1999 article in Podiatry Management. “It took 4 million years to develop our unique human foot and our consequent distinctive form of gait, a remarkable feat of bioengineering. Yet, in only a few thousand years, and with one carelessly designed instrument, our shoes, we have warped the pure anatomical form of human gait, obstructing its engineering efficiency, afflicting it with strains and stresses and denying it its natural grace of form and ease of movement head to foot.” In other words: Feet good. Shoes bad.
Perhaps this sounds to you like scientific gobbledygook or the ravings of some radical back-to-nature nuts. In that case, you should listen to Galahad Clark. Clark is 32 years old, lives in London, and is about as unlikely an advocate for getting rid of your shoes as you could find. For one, he’s a scion of the Clark family, as in the English shoe company C&J Clark, a.k.a. Clarks, founded in 1825. Two, he currently runs his own shoe company. So it’s a bit surprising when he says, “Shoes are the problem. No matter what type of shoe. Shoes are bad for you.”
This is especially grim news for New Yorkers, who (a) tend to walk a lot, and (b) tend to wear shoes while doing so.
I know what you’re thinking: If shoes are so bad for me, what’s my alternative?
Simple. Walk barefoot.
Okay, now I know what you’re thinking: What’s my other alternative?
by Adam Sternberg, New York Magazine | Read more:
Photo: Tom Schierlitz
War Is Betrayal
We condition the poor and the working class to go to war. We promise them honor, status, glory, and adventure. We promise boys they will become men. We hold these promises up against the dead-end jobs of small-town life, the financial dislocations, credit card debt, bad marriages, lack of health insurance, and dread of unemployment. The military is the call of the Sirens, the enticement that has for generations seduced young Americans working in fast food restaurants or behind the counters of Walmarts to fight and die for war profiteers and elites.
The poor embrace the military because every other cul-de-sac in their lives breaks their spirit and their dignity. Pick up Erich Maria Remarque’s All Quiet on the Western Front or James Jones’s From Here to Eternity. Read Henry IV. Turn to the Iliad. The allure of combat is a trap, a ploy, an old, dirty game of deception in which the powerful, who do not go to war, promise a mirage to those who do.
I saw this in my own family. At the age of ten I was given a scholarship to a top New England boarding school. I spent my adolescence in the schizophrenic embrace of the wealthy, on the playing fields and in the dorms and classrooms that condition boys and girls for privilege, and came back to my working-class relations in the depressed former mill towns in Maine. I traveled between two universes: one where everyone got chance after chance after chance, where connections and money and influence almost guaranteed that you would not fail; the other where no one ever got a second try. I learned at an early age that when the poor fall no one picks them up, while the rich stumble and trip their way to the top.
Those I knew in prep school did not seek out the military and were not sought by it. But in the impoverished enclaves of central Maine, where I had relatives living in trailers, nearly everyone was a veteran. My grandfather. My uncles. My cousins. My second cousins. They were all in the military. Some of them—including my Uncle Morris, who fought in the infantry in the South Pacific during World War II—were destroyed by the war. Uncle Morris drank himself to death in his trailer. He sold the hunting rifle my grandfather had given to me to buy booze.
He was not alone. After World War II, thousands of families struggled with broken men who, because they could never read the approved lines from the patriotic script, had been discarded. They were not trotted out for red-white-and-blue love fests on the Fourth of July or Veterans Day.
The myth of war held fast, despite the deep bitterness of my grandmother—who acidly denounced what war had done to her only son—and of others like her. The myth held because it was all the soldiers and their families had. Even those who knew it to be a lie—and I think most did—were loath to give up the fleeting moments of recognition, the only times in their lives they were told they were worth something.
“For it’s Tommy this, an’ Tommy that, an’ ‘Chuck him out, the brute!’” Rudyard Kipling wrote. “But it’s ‘Saviour of ’is country’ when the guns begin to shoot.”
Any story of war is a story of elites preying on the weak, the gullible, the marginal, the poor. I do not know of a single member of my graduating prep school class who went into the military. You could not say this about the high school class that graduated the same year in Mechanic Falls, Maine.
by Chris Hedges, Boston Review | Read more:
Photograph by Teddy Wade, U.S. Army
Global Warming's Terrifying New Math
If the pictures of those towering wildfires in Colorado haven't convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were about 3.7 x 10⁻⁹⁹, or one chance in a number considerably larger than the number of stars in the universe.
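[ed. That eye-watering figure is coin-flip arithmetic. If each month independently had a 50/50 chance of coming in above the 20th-century average (a simplification, since monthly temperatures are correlated), the probability of a 327-month streak is:]

```python
# Probability of 327 consecutive above-average months if each were a fair coin flip.
months = 327
p = 0.5 ** months
print(f"(1/2)^{months} ≈ {p:.2e}")  # prints ≈ 3.66e-99, which rounds to the 3.7 x 10^-99 above
```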
Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the "largest temperature departure from average of any season on record." The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet's history.
Not that our leaders seemed to notice. Last month the world's nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn't even attend. It was "a ghost of the glad, confident meeting 20 years ago," the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls "once thronged by multitudes." Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I've spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we're losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.
When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn't yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.
The First Number: 2° Celsius
If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world's nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the "most important gathering since the Second World War, given what is at stake." As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: "This is our chance. If we miss it, it could take years before we get a new and better one. If ever."
In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving "Copenhagen Accord" that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. "Copenhagen is a crime scene tonight," an angry Greenpeace official declared, "with the guilty men and women fleeing to the airport." Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.
The accord did contain one important number, however. In Paragraph 1, it formally recognized "the scientific view that the increase in global temperature should be below two degrees Celsius." And in the very next paragraph, it declared that "we agree that deep cuts in global emissions are required... so as to hold the increase in global temperature below two degrees Celsius." By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.
Some context: So far, we've raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. "Any number much above one degree involves a gamble," writes Kerry Emanuel of MIT, a leading authority on hurricanes, "and the odds become less and less favorable as the temperature goes up." Thomas Lovejoy, once the World Bank's chief biodiversity adviser, puts it like this: "If we're seeing what we're seeing today at 0.8 degrees Celsius, two degrees is simply too much." NASA scientist James Hansen, the planet's most prominent climatologist, is even blunter: "The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster." At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: "Some countries will flat-out disappear." When delegates from developing nations were warned that two degrees would represent a "suicide pact" for drought-stricken Africa, many of them started chanting, "One degree, one Africa."
Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it's fair to say that it's the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world's carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can't raise the temperature more than two degrees Celsius – it's become the bottomest of bottom lines. Two degrees.
by Bill McKibben, Rolling Stone | Read more:
Illustration by Edel Rodriguez
Thursday, July 19, 2012
The Future of Manufacturing Is in America, Not China
But Ralph Lauren berets aside, the larger trends show that the tide has turned, and it is China's turn to worry. Many CEOs, including Dow Chemical's Andrew Liveris, have declared their intentions to bring manufacturing back to the United States. What is going to accelerate the trend isn't, as people believe, the rising cost of Chinese labor or a rising yuan. The real threat to China comes from technology. Technical advances will soon lead to the same hollowing out of China's manufacturing industry that they have brought to U.S. industry over the past two decades.
Several technologies, advancing and converging, will cause this.
First, robotics. The robots of today aren't the androids or Cylons that we are used to seeing in science fiction movies, but specialized electromechanical devices run by software and remote control. As computers become more powerful, so do the abilities of these devices. Robots are now capable of performing surgery, milking cows, doing military reconnaissance and combat, and flying fighter jets. Several companies, such as Willow Garage, iRobot, and 9th Sense, sell robot-development kits for which university students and open-source communities are developing ever more sophisticated applications.
The factory assembly that China is currently performing is child's play compared to the next generation of robots -- which will soon become cheaper than human labor. One of China's largest manufacturers, Taiwan-based Foxconn Technology Group, announced last August that it plans to install one million robots within three years to do the work that its workers in China presently do. It has found even low-cost Chinese labor to be too expensive and demanding.
Then there is artificial intelligence (AI) -- software that makes computers, if not intelligent in the human sense, at least good enough to fake it. This is the basic technology that IBM's Deep Blue computer used to beat chess grandmaster Garry Kasparov in 1997 and that enabled IBM's Watson to beat TV-show Jeopardy champions in 2011. AI is making it possible to develop self-driving cars, voice-recognition systems such as the iPhone's Siri, and Face.com, the face-recognition software Facebook recently acquired.
by Vivek Wadhwa, Foreign Policy | Read more:
Stephen Brashear/Getty Images
The Triumph of the Family Farm
If you’re a part of the roughly 99 percent of the North American population that doesn’t work on a farm, you might guess at what comes next—many a lament has been written about the passing of the good old days in rural areas, the family farm’s decline, and the inevitable loss of the homestead. But in many respects, that narrative itself is obsolete. That’s certainly true in my family’s case: The Freeland farm is still being cultivated by my father. And it is bigger and more prosperous than ever.
My dad farms 3,200 acres of his own, and rents another 2,400—all told, a territory seven times the size of Central Park. Last year, he produced 3,900 tonnes (or metric tons) of wheat, 2,500 tonnes of canola, and 1,400 tonnes of barley. (That’s enough to produce 13 million loaves of bread, 1.2 million liters of vegetable oil, and 40,000 barrels of beer.) His revenue last year was more than $2 million, and he admits to having made “a good profit,” but won’t reveal more than that. The farm has just three workers, my dad and his two hired men, who farm with him nine months of the year. For the two or three weeks of seeding and harvest, my dad usually hires a few friends to help out, too.
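[ed. A back-of-the-envelope check of those conversions. The factors below (grams of wheat per loaf, canola oil yield and density, barley per liter of beer, liters per US beer barrel) are rough guesses for illustration, not figures from the article.]

```python
# Rough sanity check of the article's conversions; all factors are assumed, not sourced.
wheat_t, canola_t, barley_t = 3_900, 2_500, 1_400   # tonnes, from the article

loaves = wheat_t * 1_000_000 / 300                  # assume ~300 g of wheat per loaf
oil_liters = canola_t * 1_000 * 0.44 / 0.92         # ~44% oil yield, oil density ~0.92 kg/L
beer_barrels = barley_t * 1_000_000 / 300 / 117     # ~300 g barley per liter, ~117 L per US barrel

print(f"{loaves:,.0f} loaves, {oil_liters:,.0f} liters of oil, {beer_barrels:,.0f} barrels of beer")
# -> 13,000,000 loaves, 1,195,652 liters, 39,886 barrels: in line with the figures above
```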
My father farms in northern Alberta, but his story is typical of large-scale family farmers across North America. Urbanites may picture farmers as hip heritage-pig breeders returning to the land, or a struggling rural underclass waging a doomed battle to hang on to their patrimony as agribusiness moves in. But these stereotypes are misleading. In 2010, of all the farms in the United States with at least $1 million in revenues, 88 percent were family farms, and they accounted for 79 percent of production. Large-scale farmers today are sophisticated businesspeople who use GPS equipment to guide their combines, biotechnology to boost their yields, and futures contracts to hedge their risk. They are also pretty rich. (...)
Big Money has noticed these trends, and is beginning to pile in. “We are seeing a tremendous uptick in allocations and interest in farmland,” says Chris Erickson of HighQuest Partners, an agricultural consultancy and investor. Erickson told me that big institutional investors—pension funds, insurance companies—have recently been making investments in farmland ranging from “the several hundred millions to the billions.” Erickson said this broad interest is new, and is driven by the fact that “the fundamentals are changing dramatically.”
by Chrystia Freeland, The Atlantic | Read more:
Photo: David Johnston
Wednesday, July 18, 2012
Into the Wild
Marko Cheseto is almost late to class. He enters the lobby of the social sciences building at 9:58 a.m., two minutes before his public speaking lecture begins. He is in no rush, plodding slowly amid the blur of backpacks and students. He stands out: 28 years old, long and spindly, a black man on the mostly white campus of the University of Alaska Anchorage, a Kenyan among mostly in-state students. His skin is as dark as an Alaskan winter morning; patches of frostbite char his cheeks like eyeblack. His lips are dry and crevassed. He is the most famous person on campus, a star runner. And he's pushing a two-wheeled walker.
A blond girl stops him. "Marko!" she says.
"Hellll-oooo!" he replies, voice arching.
"Can I give you a hug?"
"Okay, just don't push me!" he says in fast, accented English. She moves in gently. Marko embraces her with his left arm, his right hand steadying himself. For two months, Marko has envisioned this January morning: First day of spring semester senior year, a chance to prove that he's still the same old sweet, sarcastic, eager-to-entertain Marko. A few nights ago at a UAA basketball game, girls had hugged him in droves. Three former teammates surrounded him for a picture and posted it on Facebook. Marko had ambled around without his walker, showing off, perhaps too much.
Now Marko says goodbye to the blonde and rolls into an elevator. Before the doors close, an older woman whom Marko doesn't know juts toward the narrowing window and whispers, "We love you." The elevator rings open on the second floor, and Marko pushes to Room 251. He rolls toward the desks, then stops like a car that's halfway through a wrong turn.
Those desks -- the normal desks -- aren't for him anymore. He turns toward the lone handicap table, twists and falls into his seat straight-legged, then glances down at the shiny black shoes covering his new plastic stubs.
Those used to be his feet.
During an August night in 2008, Marko Cheseto walked onto a plane in Nairobi bound for Alaska. His feet were his own. He had only $100 in his pockets. His luggage totaled one bag containing two outfits. He was raised in Ptop, a village of 1,000 in the western Kenyan mountains, elevation 8,000 feet -- a foggy, damp region without running water or electricity or roads, where the Pokot language was spoken. His father, Dickson, farmed, built houses and herded animals, many of which he sold to help purchase a one-way ticket to Anchorage, where the third oldest of his 11 children would attend college on a cross-country and track scholarship.
Nobody from Marko's village had ever left to go to school in America, never mind Alaska. Running was not the route out of Ptop as it was in so many other poor villages in Kenya's highlands. But running was something he always did well. After he graduated from a Nairobi two-year college in 2006 and was earning a modest living as a teacher, he noticed that runners -- inferior runners, he felt -- were leaving on scholarship for U.S. colleges. America meant money, and those who left were expected to share it to help back home.
So Marko chased a new life in hopes of improving his family's old one. He wanted, in the words of his cousin Nicholas Atudonyang, "to be a role model for the guys in his village." He enrolled in one of the running academies in Eldoret, training twice daily at 6,000 feet of elevation, and had moderate success in local races. That got his name on American recruiters' prospect lists. Michael Friess, the track and cross-country coach at Alaska Anchorage, already had one star Kenyan on his roster, David Kiplagat, and wanted to add more. Friess, a loving hard-ass who's been UAA's head coach for 22 of his 50 years, offered Marko a full scholarship without even meeting him.
At first, his parents didn't want Marko to leave, fearing that they'd have to support him again. But he argued that although his teaching job was fine for him, his father could desperately use extra income to supplement his typical earnings of $200 a year. In Alaska, Marko said, he'd work part time and send home a few hundred dollars a year. His parents acquiesced, selling farm animals and asking members of their extended family to help cover Marko's expenses. So Marko, seated in the rear, a few rows behind another runner bound for UAA, Alfred Kangogo, flew from Nairobi to Amsterdam to Minneapolis to Anchorage. All he'd heard about Alaska was that it was dark 24 hours a day. But when they arrived in the evening, the sun shining, Alfred turned to Marko and said, "Just like home." (...)
But the ease with which Marko and his fellow Kenyans got along with other students belied the fact that getting beyond the surface was difficult. The Kenyans were too busy being unspoken breadwinners to date much. Friess, worried that they were stretched too thin, told them they couldn't begin work at 6 a.m. anymore. They adjusted by working later. They simply carried on, each handling the pressure in his own way. David was driven, eventually graduating with a degree in finance and economics. Alfred was relentless, earning the nickname Bulldog. And Marko tried to be perfect, putting on a positive front even during the occasional month when he didn't earn enough to send any money home. After he paid rent and his school expenses, much of his $450 take-home was spoken for. Usually he was able to save up and wire $100 every few months.
by Seth Wickersham, ESPN | Read more:
Photo: Jose Mandojana for ESPN The Magazine