Saturday, June 30, 2012


René Magritte, Time Transfixed, 1938, oil on canvas (via The Art Institute of Chicago)
"I would characterize it sort of like a powerful interest group within a political party at this point. It used to be the entire political party."
—Iggy Pop explains his current relationship with his penis.

h/t The Awl 

California Takes Foie Gras Off the Menu

At Mélisse in Santa Monica, diners were preparing Saturday for "one last huzzah" in honour of a controversial delicacy that will soon become contraband across California.

Awaiting them at the upmarket French bistro is a feast of foie gras, a seven-course special celebrating the foodstuff that makes animal rights campaigners gag, but leaves aficionados wanting more.

Those who make it through to the final dish – a strawberry shortcake stuffed with foie gras mousse and accompanied by foie gras ice cream – will be battling time, as well as their belts.

For at midnight, California will begin enforcing a law passed eight years ago, making the fattened livers of force-fed ducks and geese illegal.

Foie gras has long been a target for those calling for the ethical treatment of livestock. Translated to English as "fatty liver", foie gras is produced by a process known as gavage, in which the birds are force-fed corn through a tube.

The process is designed to enlarge the birds' livers before the animals are slaughtered, after which the organs are harvested and served up as a rich – and, to fans, mouth-watering – delicacy.

The process dates back centuries. But in late 2004, then California governor Arnold Schwarzenegger signed a bill banning the sale of foie gras.

Diners and chefs were given a suitably long grace period to find an alternative method to gavage or wean themselves off the stuff it produces.

But despite a concerted effort by some to get the ban overturned, seven and a half years down the line the law is finally taking effect.

From July 1, under the statute, any restaurant serving foie gras can be fined up to $1,000. As the deadline has neared, restaurants have seen a surge in patrons wanting foie gras.

by Matt Williams, The Guardian |  Read more:
Photograph: Dimitar Dilkoff/AFP/Getty Images

Our Robot Future


It was chaos over Zuccotti Park on the early morning of Nov. 15. New York City policemen surrounded the park in Lower Manhattan where hundreds of activists had been living as part of the nationwide Occupy movement. The 1:00 AM raid followed a court order allowing the city to prohibit camping gear in the privately-owned park.

Many protestors resisted and nearly 200 were arrested. Journalists hurrying towards the park reported being illegally barred by police. The crews of two news-choppers – one each from CBS and NBC – claimed they were ordered out of the airspace over Zuccotti Park by the NYPD. Later, NBC claimed its crew misunderstood directions from the control tower. “NYPD cannot, and did not, close air space. Only FAA can do that,” a police spokesperson told Columbia Journalism Review. The FAA said it issued no flight ban.

Regardless, the confusion resulted in a de facto media blackout for big media. Just one reporter had the unconstrained ability to get a bird’s-eye view of police action during the height of the Occupy protests. In early December, Tim Pool, a 26-year-old independent video journalist, began sending a customized two-foot-wide robot – made by the French company Parrot – whirring over the police’s and protestors’ heads. The camera-equipped ’bot streamed live video to Pool’s smartphone, which relayed the footage to a public Internet stream.

If the police ever noticed the diminutive, all-seeing automaton – and there’s no evidence they did – they never did anything to stop it. Unlike CBS and NBC, the boyish Pool, forever recognizable in his signature black knit cap, understood the law. He knew his pioneering drone flights were legal – just barely.

Pool’s robot coup was a preview of the future: rapid advances in cheap drone technology are dovetailing with a loosening legal regime, and together they could allow pretty much anybody to deploy their own flying robot – all within the next three years. The spread of do-it-yourself robotics could radically change the news, the police, business and politics. And it could spark a sort of drone arms race as competing robot users seek to balance out their rivals.

Imagine police drones patrolling at treetop level down city streets, their cameras scanning crowds for weapons or suspicious activity. “Newsbots” might follow in their wake, streaming live video of the goings-on. Drones belonging to protest groups hover over both, watching the watchers. In nearby zip codes, drones belonging to real estate agents scope out hot properties. Robots deliver pizzas by following the signal from customers’ cell phones. Meanwhile, anti-drone “freedom fighters,” alarmed by the spread of cheap, easy overhead surveillance, take potshots at the robots with rifles and shotguns.

These aren’t just fantasies. All of these things are happening today, although infrequently and sometimes illegally. The only thing holding back the robots is government regulation, which has failed to keep up with technology. The regs are due for an overhaul in 2015. That’s the year drones could make their major debut. “Everyone’s ready to do this,” Pool tells ANIMAL. “It’s only going to get crazier.”

by David Axe, AnimalNewYork |  Read more:

Amber Waves of Green

The gap between the richest and the poorest among us is now wider than it has been since we all nose-dived into the Great Depression. So GQ sent Jon Ronson on a journey into the secret financial lives of six different people on the ladder, from a guy washing dishes for 200 bucks a week in Miami to a self-storage gazillionaire. What he found are some surprising truths about class, money, and making it in America.

As I drive along the Pacific Coast Highway into Malibu, I catch glimpses of incredible cliff-top mansions discreetly obscured from the road, which is littered with abandoned gas stations and run-down mini-marts. The office building I pull up to is quite drab and utilitarian. There are no ornaments on the conference-room shelves—just a bottle of hand sanitizer. An elderly, broad-shouldered man greets me. He's wearing jogging pants. They don't look expensive. His name is B. Wayne Hughes.

You almost definitely won't have heard of him. He hardly ever gives interviews. He only agreed to this one because—as his people explained to me—income disparity is a hugely important topic for him. They didn't explain how it was important, so I assumed he thought it was bad.

I approached Wayne, as he's known, for wholly mathematical reasons. I'd worked out that there are six degrees of economic separation between a guy making ten bucks an hour and a Forbes billionaire, if you multiply each person's income by five. So I decided to journey across America to meet one representative of each multiple. By connecting these income brackets to actual people, I hoped to understand how money shapes their lives—and the life of the country—at a moment when the gap between rich and poor is such a combustible issue. Everyone in this story, then, makes roughly five times more than the last person makes. There's a dishwasher in Miami with an unbelievably stressful life, some nice middle-class Iowans with quite difficult lives, me with a perfectly fine if frequently anxiety-inducing life, a millionaire with an annoyingly happy life, a multimillionaire with a stunningly amazing life, and then, finally, at the summit, this great American eagle, Wayne, who tells me he's "pissed off" right now.

"I live my life paying my taxes and taking care of my responsibilities, and I'm a little surprised to find out that I'm an enemy of the state at this time in my life," he says. (...)

In 2006, Wayne was America's sixty-first-richest man, according to Forbes, with $4.1 billion. Today he's the 242nd richest (and the 683rd richest in the world), with $1.9 billion. He's among the least famous people on the list. In fact, he once asked the magazine to remove his name. "I said, 'It's an imposition. Forbes should not be doing that. It's the wrong thing to do. It puts my children and my grandchildren at risk.' "

"And what did they say?" I ask.

"They said when Trump called up, he said the number next to his name was too small."

When Wayne is in Malibu, he stays in his daughter's spare room. His home is a three-bedroom farmhouse on a working stud farm in Lexington, Kentucky.

"I have no fancy living at all," he says. "Well, I have a house in Sun Valley. Five acres in the woods. I guess that's fancy."

I like Wayne very much. He's avuncular and salt of the earth. I admire how far he has risen from the Grapes of Wrath circumstances into which he was born; he's the very embodiment of the American Dream. I'm surprised, though, and a little taken aback, by his anger. I'll return to Wayne—and the curiously aggrieved way he views his place in the world—a bit later.

But first let's plummet all the way down to the very, very bottom, as if we're falling down a well, to a concrete slab of a house in a downtrodden Miami neighborhood called Little Haiti.

by Jon Ronson, GQ |  Read more:

Friday, June 29, 2012


Sergei Ivanov: Firing Squad (1905)

Why We Cheat

Behavioral economist Dan Ariely, who teaches at Duke University, is known as one of the most original designers of experiments in social science. Not surprisingly, the best-selling author’s creativity is evident throughout his latest book, The (Honest) Truth About Dishonesty. A lively tour through the impulses that cause many of us to cheat, the book offers especially keen insights into the ways in which we cut corners while still thinking of ourselves as moral people. Here, in Ariely’s own words, are seven lessons you didn’t learn in school about dishonesty. (Interview edited and condensed by Gary Belsky.)

1. Most of us are 98-percenters.

“A student told me a story about a locksmith he met when he locked himself out of the house. This student was amazed at how easily the locksmith picked his lock, but the locksmith explained that locks were really there to keep honest people from stealing. His view was that 1% of people would never steal, another 1% would always try to steal, and the rest of us are honest as long as we’re not easily tempted. Locks remove temptation for most people. And that’s good, because in our research over many years, we’ve found that everybody has the capacity to be dishonest and almost everybody is at some point or another.”

2. We’ll happily cheat … until it hurts.

“The Simple Model of Rational Crime suggests that the greater the reward, the greater the likelihood that people will cheat. But we’ve found that for most of us, the biggest driver of dishonesty is the ability to rationalize our actions so that we don’t lose the sense of ourselves as good people. In one of our matrix experiments [a puzzle-solving exercise Ariely uses in his work to measure dishonesty], the level of cheating didn’t change as the reward for cheating rose. In fact, the highest payout resulted in a little less cheating, probably because the amount of money got to be big enough that people couldn’t rationalize their cheating as harmless. Most people are able to cheat a little because they can maintain the sense of themselves as basically honest people. They won’t commit major fraud on their tax returns or insurance claims or expense reports, but they’ll cut corners or exaggerate here or there because they don’t feel that bad about it.”

3. It’s no wonder people steal from work.

“In one matrix experiment, we added a condition where some participants were paid in tokens, which they knew they could quickly exchange for real money. But just having that one step of separation resulted in a significant increase in cheating. Another time, we surveyed golfers and asked which act of moving a ball illegally would make other golfers most uncomfortable: using a club, their foot or their hand. More than twice as many said it would be less of a problem — for other golfers, of course — to use their club than to pick the ball up. Our willingness to cheat increases as we gain psychological distance from the action. So as we gain distance from money, it becomes easier to see ourselves as doing something other than stealing. That’s why many of us have no problem taking pencils or a stapler home from work when we’d never take the equivalent amount of money from petty cash. And that’s why I’m a little concerned about the direction we’re taking toward becoming a cashless society. Virtual payments are a great convenience, but our research suggests we should worry that the farther people get from using actual money, the easier it becomes to steal.”

by Gary Belsky, Time |  Read more:
Photo: Getty Images

Tokyo

Eurythmics


54 Smart Thinkers Everyone Should Follow On Twitter


Today everyone is getting their news and information from Twitter. At Business Insider, it's how we get a lot of our story ideas.

But figuring out exactly who to follow is a tough task. So we've put together a guide of some of the most influential thought leaders in the world who tweet.

Our criteria were simple: that these people are respected voices in their fields — whether neuroscience, economics, business or journalism — and that they have developed a following for their insightful commentary on Twitter.

by Aimee Groth, Danielle Schlanger and Kim Bhasin, Business Insider |  Read more:

Cities Grow More than Suburbs, First Time in 100 Years

For the first time in a century, most of America's largest cities are growing at a faster rate than their surrounding suburbs as young adults seeking a foothold in the weak job market shun home-buying and stay put in bustling urban centers.

New 2011 census estimates released Thursday highlight the dramatic switch.

Driving the resurgence are young adults, who are delaying careers, marriage and children amid persistently high unemployment. Burdened with college debt or toiling in temporary, lower-wage positions, they are spurning homeownership in the suburbs for shorter-term, no-strings-attached apartment living, public transit and proximity to potential jobs in larger cities.

While economists tend to believe the city boom is temporary, that is not stopping many city planning agencies and apartment developers from seeking to boost their appeal to the sizable demographic of 18-to-29-year-olds. They make up roughly 1 in 6 Americans, and some sociologists are calling them "generation rent." The planners and developers are betting on young Americans' continued interest in urban living, sensing that some longer-term changes such as decreased reliance on cars may be afoot.

The last time growth in big cities surpassed that in outlying areas occurred prior to 1920, before the rise of mass-produced automobiles spurred expansion beyond city cores. (...)

"The recession hit suburban markets hard. What we're seeing now is young adults moving out from their parents' homes and starting to find jobs," Shepard said. "There's a bigger focus on building residences near transportation hubs, such as a train or subway station, because fewer people want to travel by car for an hour and a half for work anymore."

Katherine Newman, a sociologist and dean of arts and sciences at Johns Hopkins University who chronicled the financial struggles of young adults in a recent book, said they are emerging as a new generation of renters due to stricter mortgage requirements and mounting college debt. From 2009 to 2011, just 9 percent of 29- to 34-year-olds were approved for a first-time mortgage.

"Young adults simply can't amass the down payments needed and don't have the earnings," she said. "They will be renting for a very long time."

by Hope Yen and Kristen Wyatt, MSNBC |  Read more:
Photo: Kristen Wyatt

Obamacare Upheld: How and Why Did Justice Roberts Do It?


[ed. See also: A Confused Opinion, NY Times]

The Supreme Court closed out its 2011–12 term today in dramatic fashion, upholding the Affordable Care Act by a sharply divided vote. The Court’s bottom line, reasoning and lineup of justices all came as a shock to many. While I had earlier cautioned doomsayers that the law was “not dead yet” after an oral argument that others deemed disastrous for the law’s defenders, I don’t think anyone predicted that the law would be upheld without the support of Justice Anthony Kennedy, almost always the Court’s crucial swing vote. And while most of the legal debate focused on Congress’s power under the Commerce Clause, the Court ultimately upheld the law as an exercise of the taxing power—even though President Obama famously claimed that the law was not a tax. The most surprising thing of all, though, is that in the end, this ultraconservative Court decided the case, much as it did in many other cases this term, by siding with the liberals.

Justice Kennedy, on whom virtually all hope for a decision upholding the law rested, voted with Antonin Scalia, Samuel Alito and Clarence Thomas. They would have invalidated all 900 pages of the law—even though the challengers had directly attacked only two of the law’s hundreds of provisions. But Chief Justice John Roberts sided with Justices Ruth Bader Ginsburg, Sonia Sotomayor, Stephen Breyer and Elena Kagan to uphold the law as a valid exercise of Congress’s power to tax.

The Individual Mandate As a Tax

What led Roberts to cast his lot with the law’s supporters? The argument that the taxing power supported the individual mandate was a strong one. The mandate provides that those who can afford to buy healthcare insurance must do so, but the only consequence of not doing so is the payment of a tax penalty. The Constitution gives Congress broad power to raise taxes “for the general welfare,” which means Congress need not point to some other enumerated power to justify a tax. (By contrast, if Congress seeks to regulate conduct by imposing criminal or civil sanctions, it must point to one of the Constitution’s affirmative grants of power—such as the Commerce Clause, the immigration power, or the power to raise and regulate the military.)

The law’s challengers—and the Court’s dissenters—rejected the characterization of the law as a tax. They noted that it was labeled a “penalty,” not a tax; that it was designed to encourage people to buy health insurance, not to raise revenue; and that Obama himself had rejected claims that the law was a tax when it was being considered by Congress. But Roberts said the question is a functional one, not a matter of labels. Because the law in fact would raise revenue, imposed no sanction other than a tax and was calculated and collected by the IRS as part of the income tax, the Court treated it as a tax and upheld the law.

Chief Justice Roberts did go on to say (for himself, but not for the Court’s majority) that he thought the law was not justified by the Commerce Clause or the Necessary and Proper Clause, because rather than regulating existing economic activity it compelled people to enter into commerce. When one adds the dissenting justices, there were five votes on the Court for this restrictive view of the Commerce Clause. But that is not binding, because the law was upheld on other grounds. And while some have termed this a major restriction on Commerce Clause power, it is not clear that it will have significant impact going forward, as the individual mandate was the first and only time in over 200 years that Congress had in fact sought to compel people to engage in commerce. It’s just not a common way of regulating, so the fact that five justices think it’s an unconstitutional way of regulating is not likely to have much real-world significance.

by David Cole, The Nation |  Read more:
AP Photo/Dana Verkouteren

'Having It All'? How About: 'Doing The Best I Can'?


Anne-Marie Slaughter's remarkable article Why Women Still Can't Have It All clearly has meant different things to different people since it was published and posted. To me, first, it is further evidence of what I have come to believe after 46 years on this planet: most women are not just smarter than most men but braver and more aspirational, too. There is the noble, ancient striving to "have it all." And then there is the earnest and thought-provoking debate, largely between and among women if I am not mistaken, over exactly what that phrase means and whether the quest to achieve it is even worth it.

Men? Please. Such an earnest public conversation on this topic between and among men is impossible to imagine (no matter how hard The Atlantic tries). That's why so many of us diplomatically stayed on the sideline last week. And haven't men as a group largely given up hope of "having it all" anyway? Did we ever have such hope to begin with? I don't remember ever getting a memo on that. Without any statistics to back me up -- how typical of a man, right? -- I humbly suggest that a great many of us long ago decided in any event to focus upon lesser, more obtainable mottoes, like "doing the best I can" or "hanging in there," as we try to juggle work, family, and a life.

The genius of Slaughter's piece wasn't just her analysis, her conclusions, or her suggestions for societal change. It was also that she was bold enough to publicly ponder the question again in the first place. The conversation she started last week -- the one that is still taking place today -- is welcome for many reasons. For example, it reminds cynics and pessimists like me that there are still millions of bright people out there who have the time, energy and eloquence to appreciate and explain their pursuit of a lifestyle that is rich, rewarding and successful in all of its many facets.

I have little standing to assess Slaughter's article on its merits -- few men do -- except to say it's my general belief that no one should be so quick to judge the way anyone else balances the priorities in their life. That said, I don't know any men who "have it all," or who say that they do, or who complain that they don't. I know men who are happy in their marriage and unhappy in their work. I know men who are happy in their work but unhappy in their marriage. I know men who are happy but stressed. I know men who work too hard and those who don't work hard enough. And I know many men who don't give a shit about any of this.

When I go out with the boys, and we rarely go out anymore anyway, we talk about the specific work problems we are facing at that moment. We talk about how we can better parent our kids. We talk about women. We talk about sports. We talk about everything, really, except about whether we "have it all" or want to have it all or think anyone else can have it all. That's not surprising, is it? My dad never talked about "having it all." Having enough was his goal. He had neither the eloquence nor the self-awareness to spend time on anything other than trying to provide for his loved ones.

by Andrew Cohen, The Atlantic |  Read more:
Photo: ishane/Flickr

Thursday, June 28, 2012


Dan Witz, Mosh Pit

Agnieszka Kozień. The South 1.

Charisma: Who Has It, and How to Get It


It’s a rainy, midsummer evening. I’m standing in a draughty hall, holding a glass of cheap, white wine and staring intently at a middle-aged man as if he’s the Messiah. “In my view, the problem with Britain today…” he drones.

A group nearby laughs uproariously. It’s too hot, my shoes pinch. The people here are acquaintances rather than friends, and this is one of those social functions I’m attending out of duty rather than desire. Normally, I’d be appeasing this gasbag with the occasional “Oh?” Meanwhile I’d be shuffling in my tight shoes, eavesdropping on the fun gang.

But tonight is different. Tonight, rather than sinking in discomfort, I decide to bask in it. Dispassionately, I analyse the sensation of sore toes. I objectify the uproarious laughter by dismissing it as just another sound, rather than a siren call. When the man pauses, instead of interrupting with a story of my own, my eyes remain fixed on him. I pause two seconds, then ask a question. He runs a hand through his hair. I run a hand through mine.

Am I attempting a seduction? Heaven forbid. Do I care what he thinks about me? Not particularly. No matter. For I have just obtained the latest American must-have, a charisma coach, and tonight I am practising my new skills.

Until I encountered Olivia Fox Cabane, whom US executives at firms like Google, Deloitte and Citigroup pay up to $100,000 a year to help boost their X-factor, I’d have naively believed charisma was an intangible, magical aura.

Marilyn Monroe, 1953 Photo: REX FEATURES

Carol Marine

The Way We Live: Drowning in Stuff

From 2001 to 2005, a team of social scientists studied 32 middle-class families in Los Angeles, a project documenting every wiggle of life at home. The study was generated by the U.C.L.A. Center on the Everyday Lives of Families to understand how people handled what anthropologists call material culture — what we call stuff. These were dual-earner households in a range of ethnic groups, neighborhoods, incomes and occupations, with at least two children between the ages of 7 and 12 — in other words, households smack in the weeds of family life.

What the researchers gleaned was an unflinching view of the American family, with all its stresses and joys on display. They’ve organized their findings into a book, scheduled to be available next week, called “Life at Home in the 21st Century.” It’s full of intriguing data points about the number of possessions the families owned (literally, thousands), many of them children's toys. Women's stress-hormone levels spiked when confronted with family clutter; the men's, not so much. Finally, there was a direct relationship between the number of magnets on refrigerators and the amount of stuff in a household.

One of the authors, Anthony P. Graesch, 38, an assistant professor of anthropology at Connecticut College, was a newly married, childless graduate student when the study was conducted (his co-authors are Jeanne E. Arnold, Enzo Ragazzini and Elinor Ochs). What Dr. Graesch witnessed as a lead researcher deeply imprinted his behavior as a husband and father, he said in a recent interview.

I understand you once jumped out a family’s window to remove yourself from spousal combat? Also, you told a colleague, Benedict Carey, that the study was “the very purest form of birth control ever devised.” Discuss.

The study was an opportunity to see how families are doing it, working and raising children, every day, all the while trying to do that other job, maintaining a relationship with your spouse. In many ways that’s the job that suffered most. Parents are stretched the thinnest. Watching this unfold, I’d think: Why do I want to do this? It’s so much work. There are so many challenges. But there was also so much warmth and closeness, as much positive stuff as the tenseness, which was me jumping out the window.

Why do you think families are unable to manage the influx of material culture?

We can see how families are trying to cut down on the sheer number of trips to the store by buying bulk goods. How they can come to purchase more, and then not remember, and end up double purchasing. We can see how an increasingly nucleated family structure contributes to this.

Can you explain?

It means we don’t have extended family households. We don’t live next to grandparents. And we are further away from our relatives. We go to work, we come home, and there is only four hours of time we spend together. We feel guilty about this, and oftentimes buy gifts as a result. Grandparents contribute to possessions in no small way. Here comes Christmas, here come the birthdays. The inflow of objects is relentless. The outflow is not. We don’t have rituals, mechanisms, for getting rid of stuff.

by Penelope Green, NY Times |  Read more:
Photo: C. M. Glover

Silence, Exile, Punning

On a day in May, 1922, in Paris, a medical student named Pierre Mérigot de Treigny was asked by his teacher, Dr. Victor Morax, a well-known ophthalmologist, to attend to a patient who had telephoned complaining about pain from iritis, an inflammation of the eye. The student went to the patient’s apartment, in a residential hotel on the Rue de l’Université. Inside, he found a scene of disarray. Clothes were hanging everywhere; toilet articles were scattered around on chairs and the mantelpiece. A man wearing dark glasses and wrapped in a blanket was squatting in front of a pan that contained the remains of a chicken. A woman was sitting across from him. There was a half-empty bottle of wine next to them on the floor. The man was James Joyce. A few months before, on February 2nd, he had published what some people regarded then, and many people regard now, as the greatest work of prose fiction ever written in the English language.

The woman was Nora Barnacle. She and Joyce were unmarried, and had two teen-age children, Giorgio and Lucia, who were living with them in the two-room apartment. The conditions in which the student discovered them were not typical—Joyce lived in luxury whenever he could afford it, and often when he couldn’t—but the scene was emblematic. Joyce was a nomad. He was born in 1882, in Rathgar, a suburb of Dublin, and grew up the oldest of ten surviving children. After he started school, his family changed houses nine times in eleven years, an itinerancy not always undertaken by choice. They sometimes moved, with their shrinking stock of possessions, at night, in order to escape the attention of creditors. They did not leave a forwarding address.

James was the favorite of his charming, cantankerous, and dissolute father, John Stanislaus Joyce, and was adored by his brothers and sisters. They called him Sunny Jim, because he laughed at everything. He was a brilliant student when he chose to excel, a prodigy; and, despite the family’s relentless downward spiral—John Joyce wasted a considerable inheritance—he received a serious education at Jesuit schools. By the time he got his degree, from University College, Dublin, in 1902, the family was living in the northern suburb of Cabra. A friend later described the house: “The banisters were broken, the grass in the back-yard was all blackened out. There was laundry there and a few chickens, and it was a very very miserable home.” Joyce’s mother, Mary, died there, of liver cancer, in 1903.

Joyce left Ireland a year later, when he was twenty-two, but he never really left the manner of life he had known. Like his father, he was a raconteur and a barfly. He had a good tenor voice (as did John Joyce), and he loved to sing and to dance. When he had no money, he borrowed it; when he had it, he picked up the tab for whatever company he was in, booked himself and his family into fancy hotels, and bought fur coats for Nora and Lucia. He was generous in the free-spirited way that only the inveterately insolvent can be.

For many years after he moved to the Continent, he scraped a living as a language teacher in Berlitz schools, a job he disliked. He started out in Pula, moved to Trieste, to Rome, then back to Trieste, and, finally, to Zurich. He changed residences regularly wherever he was, sometimes under a landlord’s gun. In 1920, he moved to Paris, where he was supported by patrons and—though only toward the end of his life, since “Ulysses” was banned for twelve years in the United States and for fourteen in Britain—by royalties. During the twenty years he lived in Paris, he had eighteen different addresses.

“A man of small virtue, inclined to extravagance and alcoholism” is how Joyce described himself to Carl Jung. He was frail—he avoided contact sports like rugby as a child and barroom pugilism as a grownup—and he was frequently laid low by nervous attacks and illnesses. His eye troubles forced him to submit to a series of tricky and painful operations. At times, he was virtually blind. When he wrote, which he did usually stretched out across a bed, he wore a white jacket, so that light was reflected onto the paper; as he got older, he used a magnifying glass, in addition to his eyeglasses, to read.

After the Second World War broke out and the Germans occupied Paris, Joyce managed to get to Switzerland. He died there, in Zurich, of a perforated ulcer, on January 13, 1941. He was fifty-eight, and a very old man. He had burned the candle all the way down. He had spent eight years on “Ulysses,” and fifteen years on “Finnegans Wake,” which was published in 1939. “My eyes are tired,” he wrote in a letter to Giorgio, in 1935. “For over half a century, they have gazed into nullity where they have found a lovely nothing.”

by Louis Menand, The New Yorker |  Read more:
Illustration: Delphine Lebourgeois

Tiny Camera to Rival the Pros

This is a review of the best pocket camera ever made.

But first, a history lesson.

For years camera makers worried about competition from only one source: other camera makers. But in the end, the most dangerous predator came from an unexpected direction: cellphones.

Today, more photos are taken with phones than with point-and-shoot cameras. On photo sites like Flickr, the iPhone is the source of more photos than any real camera. No wonder sales of inexpensive pocket cameras are going down each year.

Cameras in phones are a delightful development for the masses. If you have your camera with you, you’re more likely to take photos and more likely to capture amazing images.

But in a sense they are also great for camera makers, which are being forced to double down in areas where smartphones are useless: Zoom lenses. High resolution. Better photo quality. Flexibility and advanced features. That’s why, even if sales of pocket cameras are down, sales of high-end cameras are up.

Now you know why the time is ripe for the new Sony Cyber-shot DSC-RX100. It’s a tiny, pants-pocketable camera that will be available in late July for the nosebleed price of $650.

Or, rather, won’t be available. It will be sold out everywhere. I’ll skip to the punch line: No photos this good have ever come from a camera this small.

by David Pogue, NY Times |  Read more:

Wednesday, June 27, 2012

The Prescient Are Few

How many mutual fund managers can consistently pick stocks that outperform the broad stock market averages — as opposed to just being lucky now and then?

Countless studies have addressed this question, and have concluded that very few managers have the ability to beat the market over the long term. Nevertheless, researchers have been unable to agree on how small that minority really is, and on whether it makes sense for investors to try to beat the market by buying shares of actively managed mutual funds.

A new study builds on this research by applying a sensitive statistical test borrowed from outside the investment world. It comes to a rather sad conclusion: There was once a small number of fund managers with genuine market-beating abilities, as judged by having past performance so good that their records could not be attributed to luck alone. But virtually none remain today. Index funds are the only rational alternative for almost all mutual fund investors, according to the study’s findings.

The study, “False Discoveries in Mutual Fund Performance: Measuring Luck in Estimating Alphas,” has been circulating for over a year in academic circles. Its authors are Laurent Barras, a visiting researcher at Imperial College’s Tanaka Business School in London; Olivier Scaillet, a professor of financial econometrics at the University of Geneva and the Swiss Finance Institute; and Russ Wermers, a finance professor at the University of Maryland.

The statistical test featured in the study is known as the “False Discovery Rate,” and is used in fields as diverse as computational biology and astronomy. In effect, the method is designed to simultaneously avoid false positives and false negatives — in other words, conclusions that something is statistically significant when it is entirely random, and the reverse. (...)
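[ed. The article doesn't spell out how a False Discovery Rate test works. The canonical procedure is Benjamini-Hochberg, simple enough to sketch in a few lines of Python; the fund p-values below are invented for illustration, and the study itself uses a more elaborate FDR estimator applied to fund alphas:]

```python
def benjamini_hochberg(pvalues, q=0.10):
    """Indices of tests declared significant while holding the expected
    share of false discoveries at or below q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:  # keep the largest rank that passes
            cutoff = rank
    return {order[r] for r in range(cutoff)}

# Hypothetical p-values from testing each fund's alpha against zero.
pvals = [0.001, 0.009, 0.04, 0.20, 0.76]
print(benjamini_hochberg(pvals))  # {0, 1, 2}: three funds survive
```

The point survives even in the toy version: whether a fund with a p-value of 0.04 counts as evidence of skill depends on how many funds were tested alongside it, which is exactly the luck-versus-skill correction the study applies.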

This doesn’t mean that no mutual funds have beaten the market in recent years, Professor Wermers said. Some have done so repeatedly over periods as short as a year or two. But, he added, “the number of funds that have beaten the market over their entire histories is so small that the False Discovery Rate test can’t eliminate the possibility that the few that did were merely false positives” — just lucky, in other words.

Professor Wermers says he was surprised by how rare stock-picking skill has become. He had “generally been positive about the existence of fund manager ability,” he said, but these new results have been a “real shocker.”

by Mark Hulbert, NY Times |  Read more:

The Most Important New Technology Since the Smartphone Arrives December 2012

By now, many of us are aware of the Leap Motion, a small, $70 gesture control system that simply plugs into any computer and, apparently, just works. If you've seen the gesture interfaces in Minority Report, you know what it does. More importantly, if you're familiar with the touch modality -- and at this point, most of us are -- the interface is entirely intuitive. It's touch, except it happens in the space in front of the screen, so you don't have to cover your window into your tech with all those unsightly smudges.

To understand how subtly revolutionary Leap will be, watch the video below, shot by the folks at The Verge, where you'll also find more juicy details on the device's specs and inner workings.


Unlike a touchscreen interface, the Leap involves no friction. That sounds trivial, but it isn't. It's the difference between attempting to conduct a symphony with a wand and attempting to conduct the same symphony by sketching out what the orchestra should do next via chalk on a blackboard.

Plus, Leap operates in three dimensions rather than two. Forget pinch-to-zoom; imagine "push to scroll," rotating your flattened hand to control the orientation of an object with a full six degrees of freedom, or using both hands at once to control either end of a Bézier surface you're casually sculpting as part of an object you'll be sending to your 3D printer.

The fact that the Leap can see almost any combination of objects – a pen, your fingers, all ten fingers at once – should make every interface designer on the planet giddy with anticipation. If you thought that the touchscreen interface on the iPhone and subsequent tablets opened up a whole new way to interact with your device, imagine something that combines the intuitiveness of that experience with the possibility of such fine-grained control that you could do away with the trackpad or mouse entirely.

by Christopher Mims, MIT Technology Review | Read more:

The Sharp, Sudden Decline of America's Middle Class

Every night around nine, Janis Adkins falls asleep in the back of her Toyota Sienna van in a church parking lot at the edge of Santa Barbara, California. On the van's roof is a black Yakima SpaceBooster, full of previous-life belongings like a snorkel and fins and camping gear. Adkins, who is 56 years old, parks the van at the lot's remotest corner, aligning its side with a row of dense, shading avocado trees. The trees provide privacy, but they are also useful because she can pick their fallen fruit, and she doesn't always have enough to eat. Despite a continuous, two-year job search, she remains without dependable work. She says she doesn't need to eat much – if she gets a decent hot meal in the morning, she can get by for the rest of the day on a piece of fruit or bulk-purchased almonds – but food stamps supply only a fraction of her nutritional needs, so foraging opportunities are welcome.

Prior to the Great Recession, Adkins owned and ran a successful plant nursery in Moab, Utah. At its peak, it was grossing $300,000 a year. She had never before been unemployed – she'd worked for 40 years, through three major recessions. During her first year of unemployment, in 2010, she wrote three or four cover letters a day, five days a week. Now, to keep her mind occupied when she's not looking for work or doing odd jobs, she volunteers at an animal shelter called the Santa Barbara Wildlife Care Network. ("I always ask for the most physically hard jobs just to get out my frustration," she says.) She has permission to pick fruit directly from the branches of the shelter's orange and avocado trees. Another benefit is that when she scrambles eggs to hand-feed wounded seabirds, she can surreptitiously make a dish for herself.

By the time Adkins goes to bed – early, because she has to get up soon after sunrise, before parishioners or church employees arrive – the four other people who overnight in the lot have usually settled in: a single mother who lives in a van with her two teenage children and keeps assiduously to herself, and a wrathful, mentally unstable woman in an old Mercedes sedan whom Adkins avoids. By mutual unspoken agreement, the three women park in the same spots every night, keeping a minimum distance from each other. When you live in your car in a parking lot, you value any reliable area of enclosing stillness. "You get very territorial," Adkins says.

Each evening, 150 people in 113 vehicles spend the night in 23 parking lots in Santa Barbara. The lots are part of Safe Parking, a program that offers overnight permits to people living in their vehicles. The nonprofit that runs the program, New Beginnings Counseling Center, requires participants to have a valid driver's license and current registration and insurance. The number of vehicles per lot ranges from one to 15, and lot hours are generally from 7 p.m. to 7 a.m. Fraternization among those who sleep in the lots is implicitly discouraged – the fainter the program's presence, the less likely it will provoke complaints from neighboring homes and churches and businesses.

The Safe Parking program is not the product of a benevolent government. Santa Barbara's mild climate and sheltered beachfront have long attracted the homeless, and the city has sometimes responded with punitive measures. (An appeals court compared one city ordinance forbidding overnight RV parking to anti-Okie laws in the 1930s.) To aid Santa Barbara's large homeless population, local activists launched the Safe Parking program in 2003. But since the Great Recession began, the number of lots and participants in the program has doubled. By 2009, formerly middle-class people like Janis Adkins had begun turning up – teachers and computer repairmen and yoga instructors seeking refuge in the city's parking lots. Safe-parking programs in other cities have experienced a similar influx of middle-class exiles, and their numbers are not expected to decrease anytime soon. It can take years for unemployed workers from the middle class to burn through their resources – savings, credit, salable belongings, home equity, loans from family and friends. Some 5.4 million Americans have been without work for at least six months, and an estimated 750,000 of them are completely broke or heading inexorably toward destitution. In California, where unemployment remains at 11 percent, middle-class refugees like Janis Adkins are only the earliest arrivals. "She's the tip of the iceberg," says Nancy Kapp, the coordinator of the Safe Parking program. "There are many people out there who haven't hit bottom yet, but they're on their way – they're on their way."

Kapp, who was herself homeless for a time many years ago, is blunt, indefatigable, raptly empathetic. She works out of a minuscule office in the Salvation Army building in downtown Santa Barbara. On the wall is a map encompassing the program's parking lots – a vivid graphic of the fall of the middle class. Kapp expects more disoriented, newly impoverished families to request spots in the Safe Parking program this year, and next year, and the year after that.

"When you come to me, you've hit rock bottom," Kapp says. "You've already done everything you possibly could to avoid being homeless. You maybe have a teeny bit of savings left. People are crying, they're saying, 'I've never experienced this before. I've never been homeless.' They don't want to mix with homeless people. They're like, 'I'm not going over to those people' – sometimes they call them 'those people.' So now they're lost, they're humiliated, they're rejected, they're scared, and they're very ashamed. I'm worried about the psychological damage it does when you have a place and then, all of a sudden, you're in your car. You have to be depressed just from the fall itself, from losing everything and not understanding how it could happen."

by Jeff Tietz, Rolling Stone |  Read more:
Photo: Mark Seliger

Tuesday, June 26, 2012

Joyas Voladoras


Consider the hummingbird for a long moment. A hummingbird’s heart beats ten times a second. A hummingbird’s heart is the size of a pencil eraser. A hummingbird’s heart is a lot of the hummingbird. Joyas voladoras, flying jewels, the first white explorers in the Americas called them, and the white men had never seen such creatures, for hummingbirds came into the world only in the Americas, nowhere else in the universe, more than three hundred species of them whirring and zooming and nectaring in hummer time zones nine times removed from ours, their hearts hammering faster than we could clearly hear if we pressed our elephantine ears to their infinitesimal chests.

Each one visits a thousand flowers a day. They can dive at sixty miles an hour. They can fly backwards. They can fly more than five hundred miles without pausing to rest. But when they rest they come close to death: on frigid nights, or when they are starving, they retreat into torpor, their metabolic rate slowing to a fifteenth of their normal sleep rate, their hearts sludging nearly to a halt, barely beating, and if they are not soon warmed, if they do not soon find that which is sweet, their hearts grow cold, and they cease to be. Consider for a moment those hummingbirds who did not open their eyes again today, this very day, in the Americas: bearded helmet-crests and booted racket-tails, violet-tailed sylphs and violet-capped woodnymphs, crimson topazes and purple-crowned fairies, red-tailed comets and amethyst woodstars, rainbow-bearded thornbills and glittering-bellied emeralds, velvet-purple coronets and golden-bellied star-frontlets, fiery-tailed awlbills and Andean hillstars, spatuletails and pufflegs, each the most amazing thing you have never seen, each thunderous wild heart the size of an infant’s fingernail, each mad heart silent, a brilliant music stilled.

Hummingbirds, like all flying birds but more so, have incredible enormous immense ferocious metabolisms. To drive those metabolisms they have race-car hearts that eat oxygen at an eye-popping rate. Their hearts are built of thinner, leaner fibers than ours. Their arteries are stiffer and more taut. They have more mitochondria in their heart muscles—anything to gulp more oxygen. Their hearts are stripped to the skin for the war against gravity and inertia, the mad search for food, the insane idea of flight. The price of their ambition is a life closer to death; they suffer more heart attacks and aneurysms and ruptures than any other living creature. It’s expensive to fly. You burn out. You fry the machine. You melt the engine. Every creature on earth has approximately two billion heartbeats to spend in a lifetime. You can spend them slowly, like a tortoise, and live to be two hundred years old, or you can spend them fast, like a hummingbird, and live to be two years old.

by Brian Doyle, The American Scholar |  Read more:

Joe Jackson


[ed. Repost. Just because Joe is so great..]

The Most Important Numbers of the Next Half-Century

In 1991, former MIT dean Lester Thurow wrote: "If one looks at the last 20 years, Japan would have to be considered the betting favorite to win the economy honors of owning the 21st century."

It hasn't, and it likely won't. But 20 years ago, the view was nearly universal. Japan's economy was breathtaking -- rapid growth, innovation, and efficiency like no one had seen. From 1960 to 1990, real per-capita GDP grew by nearly 6%, double the rate of America's.

But then it all stopped. Japan's economy isn't the scene of decline some depict, but its growth has slowed to a trickle at best.

What happened?

You can write volumes of books analyzing Japan's decline (and some have), but one of the biggest contributors to its stagnation is simple: It got old.

Decades in the making

The story begins, as so many about the modern day do, with World War II. Japan's toll in the world war was among the highest as a percentage of its population. Some estimate 4.4% of the Japanese population died in the war (the figure is 0.3% for the United States).

Demographically, two things resulted from that population shock that would shape the country's economic fate for the next half-century. Like America, Japan underwent a "baby boom" immediately after the war as returning soldiers married and families were rebuilt. More than 8 million Japanese babies were born from 1947 to 1949 -- a staggering sum given a population of around 70 million at the time.

Yet post-war devastation couldn't be ignored. Its major cities largely reduced to rubble, Japan didn't have the infrastructure necessary to support its existing population, let alone growth -- a problem amplified by the country's relative lack of natural resources. Tokyo-based journalist Eamonn Fingleton explains what happened next:
[In] the terrible winter of 1945-6 ... newly bereft of their empire, the Japanese nearly starved to death. With overseas expansion no longer an option, Japanese leaders determined as a top priority to cut the birthrate. Thereafter a culture of small families set in that has continued to the present day.

This created an extreme bulge in the country's demographics: a spike in population immediately after the war followed by decades of low birthrates.

As Japan entered the 1970s and 1980s, the baby boom generation -- called "dankai," or the "massive group" -- hit their peak earning and spending years. They bought cars, built houses and took vacations, helping to fuel the country's economic boom (which turned into an epic bubble). Observers like Thurow ostensibly extrapolated that growth and became dewy-eyed.

But as the 1990s rolled around, Japan's dankai not only waved goodbye to their prime spending years, they crept into retirement. Consumption growth dropped and the need for assistance rose. Meanwhile, the small-family culture endured. Japan's birth rate per 1,000 people has averaged 12.4 per year since 1960, compared with 16 per year in the U.S., according to the United Nations. Combine the two trends, and Japan's aging population has created a demographic brick wall that has kept economic growth low for the last two decades, and will likely worsen for years to come. Adult diapers outsold baby diapers in Japan last year for the first time ever. There's your sign, as they say.

by Morgan Housel, Motley Fool |  Read more:

Robert Longo, Men Trapped In Ice, 1979

Edward Hopper, Seven A.M., 1948. Oil on canvas.

How Many Computers to Identify a Cat? Machines Teaching Machines to Learn


Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

“This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.

And then, of course, there are the cats.

To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.

The videos were selected randomly, and that in itself is an interesting comment on what interests humans in the Internet age. The research is also striking because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists suggesting that individual neurons are trained inside the brain to detect significant objects.
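[ed. The article stays descriptive, but the core idea here is unsupervised feature learning: train a network to reconstruct its own input, and the features it invents along the way turn out to detect recurring objects, such as cat faces. A toy single-layer autoencoder in Python/NumPy shows the principle; Google's system was a vastly deeper, sparse variant spread across 16,000 processors, so treat this as a sketch of the idea, not their architecture:]

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 256))  # stand-in "thumbnails": 1,000 flattened 16x16 patches

n_hidden = 32                             # feature detectors to be learned
W1 = rng.normal(0, 0.1, (256, n_hidden))  # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 256))  # decoder weights
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    H = sigmoid(X @ W1)   # encode each image into 32 hidden activations
    X_hat = H @ W2        # decode back to a 256-pixel reconstruction
    err = X_hat - X
    # Gradient descent on the squared reconstruction error, via backpropagation.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Each column of W1 is now a learned feature detector; in Google's experiment,
# one unit of a billion-connection network ended up responding to cat faces.
print("reconstruction error:", float((err ** 2).mean()))
```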

by John Markoff, NY Times |  Read more:
Photo: Jim Wilson/The New York Times

Can the Guardian Survive?

The Guardian is easy to mock for its sandal-wearing earnestness, its champagne socialism and congenital weakness for typos, but its readers en masse seemed like the kind any editor would be glad to have: curious, questioning, quick to laugh. Seeing the rapport between them and their paper, feeling its pull for the powerful and the talented, enjoying this brand-new festival that felt as if it had been going for years, you could easily have assumed that everything at the Guardian was rosy.

In many ways, it is. With its journalism, the Guardian has been having an astonishing run. For 20 years or more, ever since a bold reinvention led by Rusbridger’s predecessor Peter Preston in 1988, it has been the most stylish paper in the hyper-competitive British quality pack, the wittiest and best-designed, the strongest for features, the one most likely to reflect modern life. But it ruled only at what journalists call the soft end. In the 1970s, the age of Woodward and Bernstein, the Guardian’s best-remembered story was an April fool from 1977, which dreamt up the Pacific nation of San Serriffe – beautifully done but disclosing nothing more than its own sardonic wit. In the 1990s, the Guardian began to land some scoops, notably the scandals that brought down two Tory MPs, Jonathan Aitken and Neil Hamilton. But it still wasn’t known for big investigations, the kind of stories that demand courage, persistence and resources. This is where its culture has changed. It ran a sustained investigation into illicit payments by the arms giant BAE—first alleged in 2003, finally admitted in 2010, and now the subject of nine-figure compensation settlements. It did well with the Wikileaks diplomatic cables, and the English riots of 2011 and their causes.

Above all, it has led the way in the News International phone-hacking scandal, a farrago of power, corruption and lies, exposed by Nick Davies and other Guardian reporters. For two years, their investigation was lonely and scoffed at. A police chief urged Rusbridger to drop it; the mayor of London, Boris Johnson, who presides over the Metropolitan Police, called it “codswallop”. Then, last July, came the Guardian’s disclosure that the targets included the murdered teenager Milly Dowler. The story erupted across all the media. It has now led to the closure of the News of the World, the humbling of Rupert Murdoch, the fall of his son James, the arrest of his favourite Rebekah Brooks, multiple resignations by senior policemen and media executives, at least 50 more arrests, and six official investigations—three criminal ones, employing 150 police officers; one by a House of Commons select committee, one by the communications regulator Ofcom, and, most theatrically, the Leveson inquiry into the regulation of the media, which has spent months shining a fitful light on the mucky machinations of power. By the end of May, when it emerged that the Conservative-led coalition had allowed a former Murdoch editor to work at 10 Downing Street without the normal security vetting, the trail of dirt led all the way to David Cameron’s desk. (...)

This triumph of old-school reporting has been accompanied by spectacular success in new media. The Guardian has never been a big-selling newspaper: among the 11 national dailies in Britain, it lies 10th, with only the Independent behind it. But on the internet, the Guardian lies second among British newspaper sites (behind the Mail, which cheerfully chases hits by aiming lower than its print sister) and in the top five in the world, rubbing shoulders with the New York Times. Where many newspapers treated the web with suspicion, the Guardian dived in, starting early (1995), experimenting widely, pioneering live-blogging, embracing citizen journalism, mastering slideshows and timelines and interactive graphics. By March 2012 it was putting up 400 pieces of content every 24 hours. Its network of sites had a daily average of 4m browsers, as many as the sites for Britain’s bestselling newspaper (the Sun) and its bestselling broadsheet (the Telegraph) put together. The Guardian’s total traffic, around 67m unique browsers a month, was still rising by 60-70% a year. (...)

A sceptic could point out that the Guardian might as well be owned by a billionaire, given the losses it has been able to stomach. It is owned by the Scott Trust, set up in 1936 “to secure the financial and editorial independence of the Guardian in perpetuity”. The trust became a limited company in 2008, but remains trust-like, with all the shares held by the trustees. It also owns most of Auto Trader magazine, a cash cow which usually covers the Guardian’s losses. The idea that journalists like to believe, that the service they provide is more important than any profit it might make, is enshrined in the Scott Trust’s constitution. And Rusbridger says it makes a big difference to what they publish: “The fact that it was the Guardian that did the phone-hacking [story] directly flowed from being a trust.” But being a trust leads, inevitably, to mistrust: rivals depict the Guardian as a trustafarian, not having to make a living in the real world. (...)

The Guardian is not against all charges for digital reading. It asks a token sum for its iPhone edition (£4.99 a year), and a more realistic one for the iPad (£9.99 a month). But it is fiercely resistant to charging for its website—a position it shares with the Mail, the Telegraph, the Washington Post and many others. Some editors stay out of these choppy waters, saying the decisions are made by their commercial colleagues. Rusbridger goes the other way—not only is he happy to defend the Guardian’s stance, he has built a theory around it. He calls it “open journalism”, and in March, in an online Q&A session with readers, he defined it: “Open journalism is journalism which is fully knitted into the web of information that exists in the world today. It links to it; sifts and filters it; collaborates with it and generally uses the ability of anyone to publish and share material to give a better account of the world.”

He has become quite evangelical about it. Where did that come from? “Set aside how you’re going to pay for all this, and say ‘what’s the big story about, what’s happening to information, what is the big challenge for journalism?’ Any journalist who thinks we’re still living in the 19th-, 20th-century world in which a newsroom here can adequately cover the world around us in competition with what’s available on the open web – well, I think that’s very questionable. You can probably do it if you’re the FT or the Wall Street Journal and you’re selling time-critical financial information. For a general newspaper, forgive me if you’ve heard it before but the simplest way of explaining it is this. You’ve got Michael Billington, distinguished theatre critic, in the front row at the National Theatre. Are you saying you don’t need Michael Billington any more? No, he’s the Guardian voice, he is the expert. But what about the other 900 people in the theatre, don’t they have interesting things to say? Well obviously they do, and if we don’t do something with that social experience, somebody else will. And out of those 900 people, 30 will be very knowledgeable. So let’s say Michael Billington is as good as it gets, he’s 9 out of 10, but the experience of these other knowledgeable people is 6 out of 10, so the margin is 3 out of 10, that’s what you’re charging for. You either say ‘we’ll take that then, we’ll build a big wall round Michael Billington.’ Or you say, ‘actually, let’s get them on to our platform as well,’ and you’ve got 9 + 6. So what do you do? If you don’t do this, that’s bad for professional journalism, because you’re hedging against what other people can do. If you do do it, you have a much better account of what happens in a theatre, and you begin to think that it was quite odd to send one person on one night and think that was enough. It’s just obviously better. Then the question is how do you edit them, and find the people who know their Brecht from their musicals, and that’s probably partly software and partly old-fashioned editing.

“And the next question is, if it works for theatre does it work for other areas of journalism? I think it works for everything—investigative, foreign, science, environment. By building networks, you’re going with the flow of history, and your journalism is going to be more comprehensive and better. If you reduce it instantly to paywalls, you’re not tackling the bigger issue of what’s happening to journalism.”

by Tim de Lisle, More Intelligent Life |  Read more:
Photo illustrations: Meeson

Monday, June 25, 2012

Our Underground Future

A finished basement can be a beautiful thing. With the right accoutrements and enough effort, what might otherwise be a damp, empty space lined with concrete can be turned into a cozy playroom, or a den, or an office and gym. Properly planned, the basement can become an integral part of a household, even a kind of engine that powers it from below.


The same is true for the far larger basement that all of us share: that vast space that exists under our feet wherever we go, out of sight and out of mind. Those of us who are city-dwellers already keep a lot of stuff down there—subway stations, sewer pipes, electrical lines—but as our cities grow more cramped, and real estate on the surface grows more valuable, the possibility that it can be used more inventively is starting to attract attention from planners around the world.

“It used to be, ‘How high can you go up into the sky?’” said Susie Kim, of the Boston-based urban design firm Koetter Kim & Associates. “Now it’s a matter of, ‘How low can you go and still be economically viable?’”

A cadre of engineers who specialize in tunneling and excavation say that we have barely begun to take advantage of the underground’s versatility. The underground is the next great frontier, they say, and figuring out how best to use it should be a priority as we look ahead to the shape our civilization will take.

“We have so much room underground,” said Sam Ariaratnam, a professor at Arizona State University and the chairman of the International Society for Trenchless Technology. “That underground real estate—people need to start looking at it. And they are starting to look at it.”

The federal government has taken an interest, convening a panel of specialists under the banner of the National Academy of Engineering to produce a report, due out later this year, on the potential uses for America’s underground space, and in particular its importance in building sustainable cities. The long-term vision is one in which the surface of the earth is reserved for the things we want to see and be around—houses, schools, yards, parks—while all the other facilities that are needed to make a city run, from water treatment plants to data banks to freight systems, hum away underground.

Though the basic idea has existed for decades, new engineering techniques and an increasing interest in sustainable urban growth have created fresh momentum for what once seemed like a notion out of Jules Verne. And the world has witnessed some striking new achievements. The city of Almere, in the Netherlands, built an underground trash network that uses suction tubes to transport waste out of the city at 70 kilometers per hour, making garbage trucks unnecessary. In Malaysia, a sophisticated new underground highway tunnel doubles as a discharge tunnel for floodwater. In Germany, a former iron mine is being converted into a nuclear waste repository, while scientists around the world explore the possibility of building actual nuclear power plants underground.

Overall, though, the cause of the underground has encountered resistance, in large part because digging large holes and building things inside them tends to be extremely expensive and technically demanding. Boston offers perfect examples of the pluses and minuses of the endeavor: Putting the Post Office Square parking lot underground created a park and a beloved urban amenity, but the much more ambitious Big Dig turned out to be a drawn-out and unspeakably costly piece of urban reengineering. And perhaps an even greater obstacle is the psychological one. As Ariaratnam put it, “Even in a condo tower, the penthouse on the top floor is the most attractive thing—everyone wants to be higher.” The underground, by contrast, calls to mind darkness, dirt, even danger—and when we imagine what it would look like for civilization to truly colonize it, we think of gophers and mole people. Little wonder that our politicians and urban designers don’t afford the underground anywhere near the level of attention and long-term vision they lavish on the surface. In a world where most people are accustomed to thinking of progress as pointing toward the heavens, it can be hard to retrain the imagination to aim downward.

by Leon Neyfakh, Boston Globe |  Read more:
Illustration: Jesse Lefkowitz