Monday, July 23, 2012

From an Unlikely Source, a Serious Challenge to Wall Street

Something very interesting is happening.

There’s been so much corruption on Wall Street in recent years, and the federal government has appeared to be so deeply complicit in many of the problems, that many people have experienced something very like despair over the question of what to do about it all.

But there’s something brewing that looks like it might eventually turn into a blueprint to take on the financial services industry: a plan to allow local governments to take on the problem of neighborhoods blighted by toxic home loans and foreclosures through the use of eminent domain. I can't speak for how well this program will work, but it's certainly been effective in scaring the hell out of Wall Street.

Under the proposal, towns would essentially be seizing and condemning the man-made mess resulting from the housing bubble. Cooked up by a small group of businessmen and ex-venture capitalists, the audacious idea falls under the category of "That’s so crazy, it just might work!" One of the plan’s originators described it to me as a "four-bank pool shot."

Here’s how the New York Times described it in an article from earlier this week entitled, "California County Weighs Drastic Plan to Aid Homeowners":
Desperate for a way out of a housing collapse that has crippled the region, officials in San Bernardino County … are exploring a drastic option — using eminent domain to buy up mortgages for homes that are underwater.
Then, the idea goes, the county could cut the mortgages to the current value of the homes and resell the mortgages to a private investment firm, which would allow homeowners to lower their monthly payments and hang onto their property.
I’ve been following this story for months now – I was tipped off that this was coming earlier this past spring – and in the time since I’ve become more convinced the idea might actually work, thanks mainly to the lucky accident that the plan doesn’t require the permission of anyone up in the political Olympus.

Cities and towns won’t need to ask for an act of a bank-subsidized congress to do this, and they won’t need a federal judge to sign off on any settlement. They can just do it. In the Death Star of America’s financial oligarchy, the ability of local governments to use eminent domain to seize toxic debt might be the one structural flaw big enough for the rebel alliance to exploit. (...)

The plan is being put forward by a company called Mortgage Resolution Partners, run by a venture capitalist named Steven Gluckstern. MRP absolutely has a profit motive in the plan, and much is likely to be made of that in the press as this story develops. I've heard many arguments on both sides about this particular approach to the eminent domain concept. But either way, I doubt this ends up being entirely about money.

“What happened is, a bunch of us got together and asked ourselves what a fix of the housing/foreclosure problem would look like,” Gluckstern said. “Then we asked, is there a way to fix it and make money, too? I mean, we're businessmen. Obviously, if there wasn’t a financial motive for anybody, it wouldn’t happen.”

Here’s how it works: MRP helps raise the capital a town or a county would need to essentially “buy” seized home loans from the banks and the bondholders (remember, to use eminent domain to seize property, governments must give the owners “reasonable compensation,” often interpreted as fair current market value).

Once the town or county seizes the loan, it would then be owned by a legal entity set up by the local government – San Bernardino, for instance, has set up a JPA, or Joint Powers Authority, to manage the loans.

At that point, the JPA is simply the new owner of the loan. It would then approach the homeowner with a choice. If, for some crazy reason, the homeowner likes the current situation, he can simply keep making his same inflated payments to the JPA. Not that this is likely, but the idea here is that nobody would force homeowners to do anything.

On the other hand, the town can also offer to help the homeowner find new financing. In conjunction with companies like MRP (or the copycat firms like it that would inevitably spring up), the counties and towns would arrange for private lenders to enter the picture, and help the homeowner essentially buy back his own house, only at the current market price. Just like that, the homeowner is no longer underwater and threatened with foreclosure. (...)

But MRP’s role aside, this is also a compelling political story with potentially revolutionary consequences. If this gambit actually goes forward, it will inevitably force a powerful response both from Wall Street and from its allies in federal government, setting up a cage-match showdown between lower Manhattan and, well, everywhere else in America. In fact, the first salvoes in that battle have already been fired.

by Matt Taibbi, Rolling Stone |  Read more:
Photo: Joe Raedle/Getty Images

Sunday, July 22, 2012

Our Ridiculous Approach to Retirement

I work on retirement policy, so friends often want to talk about their own retirement plans and prospects. While I am happy to have these conversations, my friends usually walk away feeling worse — for good reason.

Seventy-five percent of Americans nearing retirement age in 2010 had less than $30,000 in their retirement accounts. The specter of downward mobility in retirement is a looming reality for both middle- and higher-income workers. Almost half of middle-class workers, 49 percent, will be poor or near poor in retirement, living on a food budget of about $5 a day.

In my ad hoc retirement talks, I repeatedly hear about the “guy.” This is a for-profit investment adviser, often described as, “I have this guy who is pretty good, he always calls, doesn’t push me into investments.” When I ask how much the “guy” costs, or if the guy has fiduciary loyalty — to the client, not the firm — or if their investments do better than a standard low-fee benchmark, they inevitably don’t know. After hearing about their magical guy, I ask about their “number.”

To maintain living standards into old age we need roughly 20 times our annual income in financial wealth. If you earn $100,000 at retirement, you need about $2 million beyond what you will receive from Social Security. If you have an income-producing partner and a paid-off house, you need less. This number is startling in light of the stone-cold fact that most people aged 50 to 64 have nothing or next to nothing in retirement accounts and thus will rely solely on Social Security.
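The arithmetic behind the "number" is simple enough to sketch. (A rough illustration only, using the article's 20-times rule of thumb; the multiplier and the function name are assumptions for the example, not financial advice.)

```python
# Sketch of the "number" described above: roughly 20x your final salary,
# beyond what Social Security will provide. The 20x multiplier comes from
# the article; everything else here is illustrative.

def retirement_number(final_salary, multiplier=20):
    """Target financial wealth at retirement, excluding Social Security."""
    return final_salary * multiplier

print(retirement_number(100_000))  # 2000000, matching the article's example
```

A $100,000 final salary implies a $2 million target, which is why the article calls the figure startling next to what most 50-to-64-year-olds have actually saved.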

Even for those who know their “number” and are prepared for retirement (it happens, rarely), these conversations aren’t easy. At dinner one night, a friend told me how much he has in retirement assets and said he didn’t think he had saved enough. I mentally calculated his mortality, figured he would die sooner than he predicted, and told him cheerfully that he shouldn’t worry. (“Congratulations!”) But dying early is not the basis of a retirement plan.

If we manage to accept that our investments will likely not be enough, we usually enter another fantasy world — that of working longer. After all, people hear that 70 is the new 50, and a recent report from Boston College says that if people work until age 70, they will most likely have enough to retire on. Unfortunately, this ignores the reality that unemployment rates for those over 50 are increasing faster than for any other group and that displaced older workers face a higher risk of long-term unemployment than their younger counterparts. If those workers ever do get re-hired, it’s not without taking at least a 25 percent wage cut.

But the idea is tempting; people say they don’t want to retire and feel useless. Professionals say they can keep going, “maybe do some consulting” or find some other way to generate income well into their late 60s. Others say they can always be Walmart greeters. They rarely admit that many people retire earlier than they want because they are laid off or their spouse becomes sick.

Like the nation’s wealth gap, the longevity gap has also widened. The chance to work into one’s 70s primarily belongs to the most well off. Medical technology has helped extend life, by helping older people survive longer with illnesses and by helping others stay active. The gains in longevity in the last two decades almost all went to people earning more than average. It makes perfect sense for human beings to think each of us is special and can work forever. To admit you can’t, or might not be able to, is hard, and denial and magical thinking are underrated human coping devices in response to helplessness and fear.

So it’s not surprising that denial dominates my dinner conversations, but it is irresponsible for Congress to deny that regardless of how much you throw 401(k) advertising, pension cuts, financial education and tax breaks at Americans, the retirement system simply defies human behavior. Basing a system on people’s voluntarily saving for 40 years and evaluating the relevant information for sound investment choices is like asking the family pet to dance on two legs.

by Theresa Ghilarducci, NY Times |  Read more:
Photo: Dennis Stock/Magnum Photos

Let My Mother Go

On the way to visit my mother one recent rainy afternoon, I stopped in, after quite some prodding, to see my insurance salesman. He was pressing his efforts to sell me a long-term-care policy with a pitch about how much I'd save if I bought it now, before the rates were set to precipitously rise. I am, as my insurance man pointed out, a "sweet spot" candidate. Not only do I have the cash (though not enough to self-finance my decline) but a realistic view: like so many people in our 50s – in my experience almost everybody – I have a parent in an advanced stage of terminal breakdown.

I didn't need to be schooled in the realities of long-term care: the costs for my mother, who is 86 and who, for the past 18 months, has not been able to walk, talk or address her most minimal needs and, to boot, is absent a short-term memory, come in at about $17,000 a month. And while her insurance hardly covers all of that, I'm certainly grateful she had the foresight to carry such a policy. (Although the carrier has never paid on time and all payments involve hours of being on hold with its invariably unhelpful helpline operators – and please fax them, don't email.) My three children deserve as much.

And yet, on the verge of writing the cheque, I backed up.

What I feel most intensely when I sit by my mother's bed is a crushing sense of guilt for keeping her alive. Who can accept such suffering – who can so conscientiously facilitate it?

"Why do we want to cure cancer? Why do we want everybody to stop smoking? For this?" wailed a friend of mine with two long-ailing and yet tenacious in-laws.

Age is one of the great modern adventures, a technological marvel – we're given several more youthful-ish decades if we take care of ourselves. Almost nobody, at least openly, sees this for its ultimate, dismaying, unintended consequence: by promoting longevity and technologically inhibiting death we have created a new biological status – a no-exit state that persists longer and longer, one that is nearly as remote from life as death, but which, unlike death, requires vast service – indentured servitude, really – and resources.

This is not anomalous; this is the norm.

The traditional exits, of a sudden heart attack, of dying in one's sleep, of unreasonably dropping dead in the street, of even a terminal illness, are now exotic ways of going. The longer you live the longer it will take to die. The better you have lived the worse you may die. The healthier you are – through careful diet, diligent exercise and attentive medical scrutiny – the harder it is to die. Part of the advance in life expectancy is that we have technologically inhibited the ultimate event. We have fought natural causes to almost a draw. If you eliminate smokers, drinkers, other substance abusers, the obese and the fatally ill, you are left with a rapidly growing demographic segment peculiarly resistant to death's appointment – though far, far, far from healthy.

by Michael Wolff, The Guardian |  Read more:

#8, Sun Set Series

Scott Fraser, Metronome
oil on panel, 1990

Will You Still Medal in the Morning?


Home to more than 10,000 athletes at the Summer Games and 2,700 at the Winter, the Olympic Village is one of the world's most exclusive clubs. To join, prospective members need only have spectacular talent and -- we long assumed -- a chaste devotion to the most intense competition of their lives. But the image of a celibate Games began to flicker in '92 when it was reported that the Games' organizers had ordered in prophylactics like pizza. Then, at the 2000 Sydney Games, 70,000 condoms wasn't enough, prompting a second order of 20,000 and a new standing order of 100,000 condoms per Olympics.

Many Olympians, past and present, abide by what Summer Sanders, a swimmer who won two gold medals, a silver and a bronze in Barcelona, calls the second Olympic motto: "What happens in the village stays in the village." Yet if you ask enough active and retired athletes often enough to spill their secrets, the village gates will fly open. It quickly becomes clear that, summer or winter, the games go on long after the medal ceremony. "There's a lot of sex going on," says women's soccer goalkeeper Hope Solo, a gold medalist in 2008. How much sex? "I'd say it's 70 percent to 75 percent of Olympians," offers world-record-holding swimmer Ryan Lochte, who will be in London for his third Games. "Hey, sometimes you gotta do what you gotta do."

The games begin as soon as teams move in a week or so before opening ceremonies. "It's like the first day of college," says water polo captain Tony Azevedo, a veteran of Beijing, Athens and Sydney who is returning to London. "You're nervous, super excited. Everyone's meeting people and trying to hook up with someone."

Which is perfectly understandable, if not to be expected. Olympians are young, supremely healthy people who've been training with the intensity of combat troops for years. Suddenly they're released into a cocoon where prying reporters and overprotective parents aren't allowed. Pre-competition testosterone is running high. Many Olympians are in tapering mode, full of excess energy because they're maintaining a training diet of up to 9,000 calories per day while not actually training as hard. The village becomes "a pretty wild scene, the biggest melting pot you've been in," says Eric Shanteau, an American who swam in Beijing and will be heading to London.

by Sam Alipour, ESPN |  Read more:
Photo: Olivia Harris/Getty Images

Not Your Dad's Golf: Portland Urban Golf Club


It's an overcast Saturday afternoon in Portland. A man in his late 20s with or without a beard rides his fixed-gear bike down the street, a pant leg rolled up so as not to get caught in the spokes—and then OUCH. A tennis ball hits him right in the spine. He looks back, and there's a group of people dressed in some over-the-top argyle outfits cheering and holding their golf clubs in the air.

It's just another Saturday in Portland—and someone saved a stroke by nailing the biker.

The Portland Urban Golf Club was formed by Scott Mazariegos back in 2005. An interaction designer with Adidas by day, Mazariegos was bored with his standard rec-league kickball team when he came up with the idea of mapping out a golf course in the streets of an industrial district in Portland. (Urban golf traces its roots back to early-’90s Germany. Mazariegos stressed that he didn’t invent the sport.) Lampposts, couches, and anything else they can find are used as holes, and tennis balls replace golf balls because, again, this is a golf course, literally, in the middle of a major American city.

The group started back in 2005 with around 20 or so members, and now a regular round—18 holes, usually starting around 3 p.m. on a Saturday; they used to start at noon, but, as Mazariegos said, “It turned out to be a very long day of drinking”—brings out anywhere from 40 to 150 members depending on the occasion.

Cops showed up that first day because, you know, there were people playing golf in the streets of downtown Portland, but after seeing that the golfers were playing with tennis balls, the officers let them be. A few of them even wanted to join in, according to Mazariegos.

“There are holes that are your standard, down-the-road, dog leg to the left, dog leg to the right,” Mazariegos said, as if anything at all about this is standard. “A hole can be anywhere from a block to five blocks, depending on what and where we’re playing.” Some holes are trickier, involving overpasses and barbed-wire fences. Currently, the Portland Urban Golf Club has 22 designed, mapped-out courses, with four more in the works.

There is no par, and most people don’t keep score. When the group first started, the vast majority of the golfers had never played golf—or any other sport—before, so they gave out trophies to whoever had the highest score. “There’d be these people who’d never even gotten off the couch,” Mazariegos said, “and they’d get a trophy at the end of the day.”

The group stops at a bar every three holes and then moves on to the next trio of alleyways and chain-link fences. Some take it seriously—bets get placed, beers get bought—and others don't really care about anything other than just being there and being a part of it all. You get to deduct a stroke for every public transportation vehicle hit, every bike rider clocked, and every time your ball gets run over by a car. Seriously.

by Ryan O'Hanlon, Outside Magazine |  Read more:
Photographer: Scott Mazariegos

Saturday, July 21, 2012


Mercury Girls

Evisbeats


Blogs to Riches: The Haves and Have Nots of the Blogging Boom

[ed. For your reading pleasure, humble dispatches from the D-list.]

By all appearances, the blog boom is the most democratized revolution in media ever. Starting a blog is ridiculously cheap; indeed, blogging software and hosting can be had for free online. There are also easy-to-use ad services that, for a small fee, will place advertisements from major corporations on blogs, then mail the blogger his profits. Blogging, therefore, should be the purest meritocracy there is. It doesn’t matter if you’re a nobody from the sticks or a well-connected Harvard grad. If you launch a witty blog in a sexy niche, if you’re good at scrounging for news nuggets, and if you’re dedicated enough to post around the clock—well, there’s nothing separating you from the big successful bloggers, right? I can do that.

In theory, sure. But if you talk to many of today’s bloggers, they’ll complain that the game seems fixed. They’ve targeted one of the more lucrative niches—gossip or politics or gadgets (or sex, of course)—yet they cannot reach anywhere close to the size of the existing big blogs. It’s as if there were an A-list of a few extremely lucky, well-trafficked blogs—then hordes of people stuck on the B-list or C-list, also-rans who can’t figure out why their audiences stay so comparatively puny no matter how hard they work. “It just seems like it’s a big in-party,” one blogger complained to me. (Indeed, a couple of pranksters last spring started a joke site called Blogebrity and posted actual lists of the blogs they figured were A-, B-, and C-level famous.)

That’s a lot of inequality for a supposedly democratic medium. Not long ago, Clay Shirky, an instructor at New York University, became interested in this phenomenon—and argued that there is a scientific explanation. Shirky specializes in the social dynamics of the Internet, including “network theory”: a mathematical model of how information travels inside groups of loosely connected people, such as users of the Web.

To analyze the disparities in the blogosphere, Shirky took a sample of 433 blogs. Then he counted an interesting metric: the number of links that pointed toward each site (“inbound” links, as they’re called). Why links? Because they are the most important and visible measure of a site’s popularity. Links are the chief way that visitors find new blogs in the first place. Bloggers almost never advertise their sites; they don’t post billboards or run blinking trailers on top of cabs. No, they rely purely on word of mouth. Readers find a link to Gawker or Andrew Sullivan on a friend’s site, and they follow it. A link is, in essence, a vote of confidence that a fan leaves inscribed in cyberspace: Check this site out! It’s cool! What’s more, Internet studies have found that inbound links are an 80 percent–accurate predictor of traffic. The more links point to you, the more readers you have. (Well, almost. But the exceptions tend to prove the rule: Fleshbot, for example. The sex blog has 300,000 page views per day but relatively few inbound links. Not many readers are willing to proclaim their porn habits with links, understandably.)

When Shirky compiled his analysis of links, he saw that the smaller bloggers’ fears were perfectly correct: There is enormous inequity in the system. A very small number of blogs enjoy hundreds and hundreds of inbound links—the A-list, as it were. But almost all others have very few sites pointing to them. When Shirky sorted the 433 blogs from most linked to least linked and lined them up on a chart, the curve began up high, with the lucky few. But then it quickly fell into a steep dive, flattening off into the distance, where the vast majority of ignored blogs reside. The A-list is teensy, the B-list is bigger, and the C-list is simply massive. In the blogosphere, the biggest audiences—and the advertising revenue they bring—go to a small, elite few. Most bloggers toil in total obscurity.

Economists and network scientists have a name for Shirky’s curve: a “power-law distribution.” Power laws are not limited to the Web; in fact, they’re common to many social systems. If you chart the world’s wealth, it forms a power-law curve: A tiny number of rich people possess most of the world’s capital, while almost everyone else has little or none. The employment of movie actors follows the curve, too, because a small group appears in dozens of films while the rest are chronically underemployed. The pattern even emerges in studies of sexual activity in urban areas: A small minority bed-hop, while the rest of us are mostly monogamous.

The power law is dominant because of a quirk of human behavior: When we are asked to decide among a dizzying array of options, we do not act like dispassionate decision-makers, weighing each option on its own merits. Movie producers pick stars who have already been employed by other producers. Investors give money to entrepreneurs who are already loaded with cash. Popularity breeds popularity.

“It’s not about moral failings or any sort of psychological thing. People aren’t lazy—they just base their decisions on what other people are doing,” Shirky says. “It’s just social physics. It’s like gravity, one of those forces.”

Power laws are arguably part of the very nature of links. To explain why, Shirky poses a thought experiment: Imagine that 1,000 people were all picking their favorite ten blogs and posting lists of those links. Alice, the first person, would read a few, pick some favorites, and put up a list of links pointing to them. The next person, Bob, is thus incrementally more likely to pick Alice’s favorites and include some of them on his own list. The third person, Carmen, is affected by the choices of the first two, and so on. This repeats until a feedback loop emerges. Those few sites lucky enough to acquire the first linkages grow rapidly off their early success, acquiring more and more visitors in a cascade of popularity. So even if the content among competitors is basically equal, there will still be a tiny few that rise up to form an elite.
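Shirky's thought experiment is easy to simulate. (A toy sketch with invented parameters: 100 blogs, 1,000 pickers, ten favorites each, with each pick weighted by the links a blog has already accumulated. None of these numbers come from Shirky's actual study.)

```python
import random
from collections import Counter

# Toy version of the thought experiment above: each of 1,000 people links
# to 10 of 100 blogs, and their picks are weighted by the links each blog
# has already received (plus 1, so unlinked blogs can still be discovered).
random.seed(0)
NUM_BLOGS, NUM_PEOPLE, PICKS = 100, 1000, 10

links = Counter()
for _ in range(NUM_PEOPLE):
    weights = [links[b] + 1 for b in range(NUM_BLOGS)]
    picks = set()
    while len(picks) < PICKS:
        picks.add(random.choices(range(NUM_BLOGS), weights=weights)[0])
    for b in picks:
        links[b] += 1

ranked = sorted(links.values(), reverse=True)
# The curve starts high and falls into a long, thin tail: a tiny elite
# hoards most of the links even though no blog is intrinsically "better".
print(ranked[:5], ranked[-5:])
```

Run it a few times with different seeds and the identity of the winners changes, but the shape of the curve does not, which is exactly Shirky's point: the inequality is structural, not a verdict on content.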

First-movers get a crucial leg up in this kind of power-law system. This is certainly true of the blogosphere. If you look at the list of the most-linked-to blogs on the top 100 as ranked by Technorati—a company that scans the blogosphere every day—many of those at the top were first-movers, the pioneers in their fields. With 19,764 inbound links, the No. 1 site is Boing Boing, a tech blog devoted to geek news and nerd trivia; it has been online for five years, making it a grandfather in the field. In the gossip-blog arena, Gawker is the graybeard, having launched in 2002. With 4,790 sites now linking to it, Gawker towers above the more-recent entrants such as PerezHilton.com (with 1,549 links) and Jossip (with 814). In politics, the highest is Daily Kos, one of the first liberal blogs—with 11,182 links—followed closely by Instapundit, an early right-wing blog, with 6,513. Uncountable teensy political blogs lie in their shadows.

In scientific terms, this pattern is called “homeostasis”—the tendency of networked systems to become self-reinforcing. “It’s the same thing you see in economies—the rich-get-richer problem,” Shirky notes.

by Clive Thompson, New York Magazine |  Read more:
Photo: Ben Fry

Backwater, 2001, Alexander Grishkevich. Russian born 1961.

Why We Should Stop Talking About 'Bus Stigma'


Years ago, when I was presenting my firm’s bus network redesign plan to the board of a suburban transit agency, a board member from an affluent suburb leaned slowly forward, cleared his throat, and asked me a simple question:

"So, Mr. Walker. If we adopt this plan of yours, does that mean I’m going to leave my BMW in the driveway?"

Years later, on my book tour, I was at dinner with some architects when the conversation slipped into one of those abstract rail versus bus debates. One woman, a leading architecture scholar, said: "But I simply wouldn’t ride a bus," as though that settled the matter.


Both of these people are prosperous, successful, and (if it matters) white. So both are likely to be counted as data points when people argue that there is an American "stigma" about buses, felt mostly by white and successful people, and that transit agencies need to "break through" that stigma to achieve relevance.

There is a simpler explanation. These two people are relatively elite, as are most of our decision-makers. Elected officials and leading professionals are nothing like a representative slice of the population. Many have the best of intentions and a strong commitment to sustainable urbanism, but some still make the mistake of assuming that a transit service that they personally wouldn’t ride must not be accomplishing anything important.

Elites are by definition a small minority, so it makes no sense to define a vast transit network around their personal tastes. Even when we’ve achieved all our sustainability goals, that particular city councilman can still drive his BMW everywhere, and that leading architecture scholar need never set foot on a bus. It doesn’t matter much what they do, because there just aren’t very many of them.

This, after all, is how Germany works. Germany is a world-leader in the design of expensive luxury cars, and has a network of freeways with no speed limits where you can push these cars to their ecstatic edge. But most urban travel in Germany happens on bikes, feet, or civilized and useful public transit systems in pleasant and sustainable cities. Transit’s purpose is to appeal to massive numbers of diverse riders, not chase the choosy few who would rather be on the Autobahn.

All of this came to mind in reading Amanda Hess’s recent Atlantic Cities article, "Race, Class and the Stigma of Riding the Bus in America." Hess argues that the predominance of minority and low-income people on the bus is evidence of an American bus "stigma." "In Los Angeles," she writes, "92 percent of bus riders are people of color. Their annual median household income is $12,000."

The reference to race is a distraction. The service area of the Los Angeles MTA is well over 70 percent people of color. What’s more, whites are more likely to live in low-density areas with obstructed street patterns where effective bus service is impossible. So people of color in L.A. may be over 80 percent of the residents for whom the MTA can be useful, which means that the number of white bus riders is not far off what we should expect.

When it comes to income – or "class," as she calls it – Hess has a point. Median income among Los Angeles MTA bus riders is well below the average for its service area, as is true of most urban transit agencies.

Notice what happens, though, when you say "class" instead of "income." Income is obviously a spectrum, with families and people scattered at every point along it. But "class" sounds like a set of boxes. American discourse is full of words that describe class as a box that you’re either in or out of: poverty, the middle class, the working class, the wealthy, the top one percent. We tend to use the word "class" when we want to imply a permanent condition. You can move gradually along the spectrum of income, but you must break through fortress walls to advance in "class."

by Jarrett Walker, Atlantic Cities |  Read more:
Photo credit: Dave Newman/Shutterstock.com

The Genetics of Stupidity

What if we’ve been thinking about the genetics of intelligence from completely the wrong angle? Intelligence (as indexed by IQ or the general intelligence factor “g”) is clearly highly heritable in humans – people who are more genetically similar are also more similar in this factor. (Genetic variance has been estimated as explaining ~75% of variance in g, depending on age and other factors). There must therefore be genetic variants in the population that affect intelligence – so far, so good. But the search for such variants has, at its heart, an implicit assumption: that these variants affect intelligence in a fairly specific way – that they will occur in genes “for intelligence”.

An implication of that phrase is that mutations in those genes were positively selected for at some stage in humanity’s descent from our common ancestor with apes, on the basis of conferring increased intelligence. This seems a fairly reasonable leap to make – such genes must exist and, if variation in these genes in humanity’s evolution could affect intelligence, then maybe variation in those same genes can explain variation within the human species.

The problem with that logic is that we are talking about two very different types of variation. On the one hand, mutations that arose during human evolution that conferred increased intelligence (through whatever mechanism) will have been positively selected for and fixed in the population. How this happened is unknown of course, but one can imagine an iterative process, where initial small changes in, say, the timing of processes of brain development led to small increases in intelligence. Increased cognitive capabilities could have led in turn to the emergence of crude communication and culture, opening up what has been called the “cognitive niche” – creating an environment where further increases in intelligence became selectively more and more advantageous – a runaway process, where genetic changes bootstrap on cultural development in a way that reinforces their own adaptiveness.

That’s all nice, though admittedly speculative, but those mutations are the ones that we would expect to not vary in human populations – they would now be fixed. In particular, there is little reason to expect that there would exist new mutations in such genes, present in some but not all humans, which act to further increase intelligence. This is simply a matter of probabilities: the likelihood of a new mutation in some such gene changing its activity in a way that is advantageous is extremely low, compared to the likelihood of it either having no effect or being deleterious. There are simply many more ways of screwing something up than of improving it.

That is true for individual proteins and it is true at a higher level, for organismal traits that affect fitness (the genetic components of which have presumably been optimised by millions of years of evolution). Mutations are much more likely to cause a decrement in such traits than to improve them. So maybe we’re thinking about the genetics of g from the wrong perspective – maybe we should be looking for mutations that decrease intelligence from some Platonic ideal of a “wild-type” human. Thinking in this way – about “mutations that affect” a trait, rather than “genes for” the trait – changes our expectations about the type of variation that could be contributing to the heritability of the trait.

Mutations that lower intelligence could be quite non-specific, diverse and far more idiosyncratic. The idea of a finite, stable and discrete set of variants that specifically contribute to intelligence levels and that simply get shuffled around in human populations may be a fallacy. That view is supported by the fact that genome-wide association studies for common variants affecting intelligence have so far come up empty.

Various researchers have suggested that g may be simply an index of a general fitness factor – an indirect measure of the mutational load of an organism. The idea is that, while we all carry hundreds of deleterious mutations, some of us carry more than others, or ones with more severe effects. These effects in combination can degrade the biological systems of development and physiology in a general way, rendering them less robust and less able to generate our Platonic, ideal phenotype. In this model, it is not the idea that specific mutations have specific effects on specific traits that matters so much – it is that the overall load cumulatively reduces fitness through effects at the systems level. This means that the mutations affecting intelligence in one person may be totally different from those affecting it in another – there will be no genes “for intelligence”.

Direct evidence for this kind of effect of mutational load was found recently in a study by Ronald Yeo and colleagues, showing that the overall burden of rare copy number variants (deletions or duplications of segments of chromosomes) negatively predicts intelligence (r = -0.3).
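The load model described above can be sketched as a toy simulation. All parameters here (population size, mean load, effect sizes, noise) are invented purely for illustration and are not taken from the Yeo study; the point is only that when each individual carries an idiosyncratic set of small deleterious mutations, the cumulative load alone produces a negative correlation with the trait:

```python
import random

random.seed(1)

# Toy model of the mutational-load idea: each individual carries an
# idiosyncratic set of deleterious mutations, and it is the cumulative
# load -- not any specific "gene for intelligence" -- that degrades
# the trait.
N = 2000  # simulated individuals (arbitrary)

loads, traits = [], []
for _ in range(N):
    load = max(0, round(random.gauss(100, 15)))  # mutations carried
    # each mutation has a small, random deleterious effect
    damage = sum(random.expovariate(1 / 0.05) for _ in range(load))
    trait = 100.0 - damage + random.gauss(0, 2)  # plus environmental noise
    loads.append(load)
    traits.append(trait)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(loads, traits)
print(f"load-trait correlation: r = {r:.2f}")  # negative: higher load, lower trait
```

Note that no two simulated individuals share the same set of "mutations" – each draws its own – yet load still predicts the trait, mirroring the point that the mutations affecting intelligence in one person may be entirely different from those affecting it in another.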

If g really is an index of a general fitness factor, then it should be correlated with other indices of fitness. This indeed appears to be the case. g is weakly positively correlated with height, for example, and also strongly correlated with various measures of health and longevity.

by Kevin Mitchell, Wiring the Brain | Read more:

A Real-Life Fairy Tale, Long in the Making: Searching for Sugar Man


[ed. See also: Cold Facts]

It’s a real-life tale of talent disregarded, bad luck and missed opportunities, with an improbable stop in the Hamptons and a Hollywood conclusion: A singer-songwriter is signed to a contract in the late 1960s after producers with ties to Motown Records see him playing in a smoky Detroit nightclub called the Sewer. He makes a pair of albums that sell almost nothing and then drops out of sight. So why, 40 years later, would anyone feel compelled to make a movie about this obscure artist, known professionally as Rodriguez?

Because, as it turns out, on the other side of the globe, in South Africa, Rodriguez had become as popular as the Rolling Stones or Elvis Presley. But he never knew of that success. He never saw a penny in royalties from it, and he spent decades doing manual labor to make ends meet and raise his three daughters. It wasn’t until fans in South Africa, trying to verify rumors he was dead, tracked him down through the Internet and brought him there to perform to adoring multitudes, that his career was resuscitated.

“This was the greatest, the most amazing, true story I’d ever heard, an almost archetypal fairy tale,” said Malik Bendjelloul, the Swedish director of “Searching for Sugar Man,” a documentary that opens on Friday in New York and Los Angeles. “It’s a perfect story. It has the human element, the music aspect, a resurrection and a detective story.”

Because of an odd confluence of circumstances it is also a story unlikely ever to occur again. In the era before the World Wide Web, South Africans, living under apartheid and isolated from the main currents of pop culture by domestic censorship and international sanctions, had no idea that Rodriguez languished in anonymity elsewhere. The singer himself compounded the situation by seeking to live as inconspicuously as possible.

On another, somewhat more oblique level, Mr. Bendjelloul acknowledged, “Searching for Sugar Man” can also be interpreted as a meditation on the fickle and arbitrary nature of celebrity and fame. We live in a culture, the film suggests, in which talent and quality sometimes go ignored, and when they get belated recognition, even that is often through happenstance.

“I’ve produced a lot of big-name artists with big hits, like Peter Frampton and Jerry Lee Lewis, but I’ve never worked with anyone as talented as Rodriguez,” Steve Rowland, who produced the singer’s second album, “Coming From Reality,” said in a telephone interview from his home in Palm Springs, Calif. “I never understood why he didn’t become a big star, so to see him rise like a phoenix from the ashes, it’s just as inexplicable, but it makes me really, really happy this is going on for him, because he’s a wonderful, humble person, and he really deserves it.” 

by Larry Rohter, NY Times |  Read more:
Photo: Nicole Bengiveno


Friday, July 20, 2012


Fairfield Porter: Forsythia and Pear in Bloom (1968)
via:

The xx

When Fashion Meets Fishing, the Feathers Fly


The most enthusiastic customers at the Eldredge Brothers Fly Shop of late are not looking to buy fly fishing reels or snag stripers. They are here to make a fashion statement.

In an improbable collision of cutting-edge chic and a hobby that requires drab waders, fly fishing shops around the country are suddenly inundated with stylish women looking to get in on the latest trend: long, colorful feathers that are bonded or clipped into hair.

Demand for the feathers, before now exclusively the domain of fly fishermen, who use them to tie flies, has created a shortage, forcing up the price and causing fly shops and hairdressers to compete for the elusive plumes.

“I’ve been out for probably a month,” said Bill Thompson, the owner of North Country Angler in North Conway, N.H. “There is that worry that next year, fishermen won’t have materials they’ll need.”

The circumstances are especially strange because a proudly stodgy and tradition-bound industry content to hide from the world beyond the river is competing in this niche marketplace with a fad that may not last as long as a trout’s spawning season.

“For someone to use them as a fashion statement is just sacrilegious,” said Bob Brown, 65, a fly fisherman who lives in a recreational vehicle parked in Kennebunk, Me. He said he had been tying flies for 50 years and that this was the first time he had ever heard of a feather shortage.

“They’ve been genetically bred for fly tying, and that’s what they should be used for,” Mr. Brown said.

Fly fishing feathers – individually called hackles and collectively called saddles – are harvested from roosters painstakingly bred to grow supple feathers. It takes more than a year for a rooster to grow feathers long and pliable enough for use by fly fishermen. Because no one could have predicted the fashion trend, there are not enough to go around.

by Katie Zezima, NY Times |  Read more:
Photo: Craig Dilger

You Walk Wrong

Walking is easy. It’s so easy that no one ever has to teach you how to do it. It’s so easy, in fact, that we often pair it with other easy activities—talking, chewing gum—and suggest that if you can’t do both simultaneously, you’re some sort of insensate clod. So you probably think you’ve got this walking thing pretty much nailed. As you stroll around the city, worrying about the economy, or the environment, or your next month’s rent, you might assume that the one thing you don’t need to worry about is the way in which you’re strolling around the city.

Well, I’m afraid I have some bad news for you: You walk wrong.

Look, it’s not your fault. It’s your shoes. Shoes are bad. I don’t just mean stiletto heels, or cowboy boots, or tottering espadrilles, or any of the other fairly obvious foot-torture devices into which we wincingly jam our feet. I mean all shoes. Shoes hurt your feet. They change how you walk. In fact, your feet—your poor, tender, abused, ignored, maligned, misunderstood feet—are getting trounced in a war that’s been raging for roughly a thousand years: the battle of shoes versus feet.

Last year, researchers at the University of the Witwatersrand in Johannesburg, South Africa, published a study titled “Shod Versus Unshod: The Emergence of Forefoot Pathology in Modern Humans?” in the podiatry journal The Foot. The study examined 180 modern humans from three different population groups (Sotho, Zulu, and European), comparing their feet to one another’s, as well as to the feet of 2,000-year-old skeletons. The researchers concluded that, prior to the invention of shoes, people had healthier feet. Among the modern subjects, the Zulu population, which often goes barefoot, had the healthiest feet while the Europeans—i.e., the habitual shoe-wearers—had the unhealthiest. One of the lead researchers, Dr. Bernhard Zipfel, when commenting on his findings, lamented that the American Podiatric Medical Association does not “actively encourage outdoor barefoot walking for healthy individuals. This flies in the face of the increasing scientific evidence, including our study, that most of the commercially available footwear is not good for the feet.”

Okay, so shoes can be less than comfortable. If you’ve ever suffered through a wedding in four-inch heels or patent-leather dress shoes, you’ve probably figured this out. But does that really mean we don’t walk correctly? (Yes.) I mean, don’t we instinctively know how to walk? (Yes, sort of.) Isn’t walking totally natural? Yes—but shoes aren’t.

“Natural gait is biomechanically impossible for any shoe-wearing person,” wrote Dr. William A. Rossi in a 1999 article in Podiatry Management. “It took 4 million years to develop our unique human foot and our consequent distinctive form of gait, a remarkable feat of bioengineering. Yet, in only a few thousand years, and with one carelessly designed instrument, our shoes, we have warped the pure anatomical form of human gait, obstructing its engineering efficiency, afflicting it with strains and stresses and denying it its natural grace of form and ease of movement head to foot.” In other words: Feet good. Shoes bad.

Perhaps this sounds to you like scientific gobbledygook or the ravings of some radical back-to-nature nuts. In that case, you should listen to Galahad Clark. Clark is 32 years old, lives in London, and is about as unlikely an advocate for getting rid of your shoes as you could find. For one, he’s a scion of the Clark family, as in the English shoe company C&J Clark, a.k.a. Clarks, founded in 1825. Two, he currently runs his own shoe company. So it’s a bit surprising when he says, “Shoes are the problem. No matter what type of shoe. Shoes are bad for you.”

This is especially grim news for New Yorkers, who (a) tend to walk a lot, and (b) tend to wear shoes while doing so.

I know what you’re thinking: If shoes are so bad for me, what’s my alternative?

Simple. Walk barefoot.

Okay, now I know what you’re thinking: What’s my other alternative?

by Adam Sternbergh, New York Magazine | Read more:
Photo: Tom Schierlitz

She Did Not Turn, by David Inshaw