Friday, September 14, 2012

Please Stare

[ed. Wow, Sasha Weiss. What a great piece of writing.]

Entering the big tent at Lincoln Center, where most of the marquee shows at New York City’s fashion week took place, you feel transported to the scene in the “Hunger Games” movie, where the Capitol’s élite gather to observe the presentation of the new tributes, dressed in their metallic and feathered finest. The figures at Lincoln Center are humans, but humans who have imagined themselves into some mirrored universe, where women walk on craggy stilts, lips are colored the darkest crimson, and nearly seven-foot-tall men in hot pants show off their legs. They drift around the lobby, eyeing one another (eyes, too, are dramatically painted here, often in gold). Occasionally, homage is paid to a particularly daring outfit by means of a photograph.

I thought of the “Hunger Games” because that scene (the most visually arresting one in the film) is designed like a satanic fashion show, the runway serving as a conveyor belt for young children compelled to enact the desires of the powerful for beauty and bloodshed. “Hunger Games” isn’t the only pop-cultural artifact that primes us to view fashion as an expression of some inner rot, as vanity, a grasp at wealth, the shallow aspirations of a classist society. Even the shows devoted to its practice, like “Project Runway” and “America’s Next Top Model,” make fashion into a ruthless competition, presided over by stern, frosty judges (mostly women).

I didn’t expect to love the shows as much as I did; they turned out to be surprisingly joyous affairs. Watching them, we’re given permission to project ourselves into idealized, adventuresome future lives—ones that involve shimmering, jewel-toned gowns, stiff metallic trench coats, and flowing pantsuits screen-printed with images of highways—but we’re pulled even more forcefully to imagine our pasts. The fashion show—which begins with all the calculation and jostling of regular life—ends up depositing us somewhere back in the realm of childhood: before our personalities had coalesced, when we encountered ourselves in mirrors, wondered about who we might become, and pretended.

* * *

In New York, people stare at one another all the time, but it’s usually surreptitious: a flickering once-over walking down Spring Street, checking out someone’s jeans. At fashion week, looking is the point. The waiting to enter the shows, and then the waiting for them to start, is interminable, and seems designed to stoke the study of others. Massed in a pack that reluctantly forms a line, the fashionistas gather at the entrance to a show, gobbling each other up with their eyes. (I spy, on the way into Nanette Lepore: many Louis Vuitton totes; a hideous crocheted poncho in garish colors layered over a flared leather miniskirt; a man and a woman who look to be in their eighties, both immaculately attired, the woman in black Chanel with leopard shoes.)

When we’re finally allowed to go in, the room itself is like a giant, blinking eye. At the back wall, hundreds of photographers have arranged themselves, nearly on top of one another, on rafters, creating a wall of jutting cameras. Lining the room are rows of benches, and the spectators pile in (buyers, journalists, models, and the pure lovers of fashion, whom one can spot by the inventiveness of their outfits. I watched one latecomer navigate through a thicket of legs to reach her seat in four-inch platformed Oxfords, wearing stripes in all directions, to match her hair, which had one streak of white). We’re seated in descending order of importance—the well-known writers, editors, and models in the front row, closest to the catwalk. I’m in the standing-room section, the better to survey the room.

A strobe-like flashing somewhere down below indicates the presence of a celebrity surrounded by cameras. I can see the fit silhouette of a woman in a haze of light. Someone near me murmurs that it’s Edie Falco. Even a crowd of that size (five hundred, easily, in the big tent’s main space) quickly becomes a hive, its lines of hierarchy drawn in thick black. The lights are low, with a glow of illumination from the stage and a static of voices, and then, as in the theatre, the room turns a shade darker, the talking subsides. There’s a beat of anticipation, and the bright lights snap on.

Hundreds of well-dressed, strategizing people who have spent the last thirty minutes comparing themselves to one another incline their heads and their attention toward the runway. In this moment, they all want the same thing: to watch the beautiful parade.

by Sasha Weiss, New Yorker |  Read more:
Photo: Maria Lokke

Three Reasons to Salute Ben Bernanke

It’s time to give Ben Bernanke some credit. Under attack from the left and right for much of the past year, the mild-mannered former Princeton prof has shown some leadership and pushed through a major policy shift. In committing the Fed to buying tens of billions of dollars’ worth of mortgage bonds every month until the jobless rate, currently 8.1 per cent, falls markedly, Bernanke and his colleagues on the Fed’s policy-making committee have finally demonstrated that they won’t stand aside as tens of millions of Americans suffer the harsh consequences of a recession that was largely made on Wall Street.

I’ve had my ups and downs with Bernanke, whom I profiled at length back in 2008. At the start of the year, I thought critics were giving him a raw deal. With short-term interest rates close to zero (where they’ve been since December, 2008), and with job growth seemingly picking up, the calls for more Fed action seemed overstated. But over the past six months, as the recovery sputtered and Bernanke dithered, I, too, ran out of patience with him. In a column in Fortune last month, I even suggested that Barack Obama should have replaced him when he had the chance, back in 2010.

It turns out that Bernanke was merely biding his time. I still think the Fed should have moved earlier. Once it became clear that slower G.D.P. growth, rather than some statistical aberration, was generating the big falloff in job creation we saw from March onwards, there was no justification for inaction. But Bernanke has now rectified the error—and then some. For at least three reasons, Thursday’s move was a historic one, which merits a loud salute:

1. Bernanke exceeded expectations. For several months now, he has been saying that the Fed would eventually act if the labor market didn’t improve of its own accord. In Jackson Hole last month, at the Fed’s annual policy gathering, he strongly hinted at another round of quantitative easing—the practice of exploiting the Fed’s capacity to create money and making large-scale purchases of bonds, which puts downward pressure on interest rates, which, in turn, spurs spending and job creation—at least in theory.

The Fed has tried this policy twice before, in 2009/10 (QE1) and 2010/11 (QE2). In retrospect, it was a big mistake to abandon QE2 just as the Obama Administration’s fiscal stimulus, which had provided support to the economy from 2009 to 2011, was running down. The experience of Japan demonstrates that in the aftermath of asset-price busts, when households and firms are seeking to pay down their debts, the prolonged maintenance of monetary and fiscal stimulus is necessary to prevent a semi-permanent slump.

Bernanke didn’t publicly concede on Thursday that he had blundered—that would be asking too much. But in announcing the terms of QE3, he went considerably further than most observers had been expecting. The two previous rounds of quantitative easing were term-limited; this one isn’t. Rather, its duration will be directly linked to the jobs picture. “(I)f the outlook for the labor market does not improve substantially, the Committee will continue its purchases of agency mortgage-backed securities, undertake additional asset purchases, and employ its other policy tools as appropriate until such improvement is achieved…” the Fed said in a statement.

by John Cassidy, New Yorker |  Read more:
Photo: Platon

My Life as a TaskRabbit

Standing in the living room of his luxurious two-bedroom apartment, which has sweeping views of the San Francisco Bay, Curtis Jackson informs me that I am a terrible housecleaner. There are soap stains on the walls of his master bathroom and pools of water gathering near the edges of the tub. My Roomba vacuum, we discover after a lengthy and humiliating search, is out of power and stuck under a bed. There’s an entire room that I didn’t know about and thus never cleaned. I also neglected to take out the trash and left the living room coated in the noxious perfume of an organic cedar disinfectant. “I respect what you are trying to do, and you did an OK job in the time allotted,” he says. “But frankly, stick to being a reporter.”

The apartment is one stop in the middle of my short, backbreaking, soul-draining journey into what Silicon Valley venture capitalists often call the distributed workforce. This is the fancy term for the marketplace for odd jobs hosted by the site TaskRabbit, the get-me-a-soy-latte errands offered by the courier service Postmates, and the car washing assignments aggregated by yet another venture, called Cherry. These companies and several others are in the business of organizing and auctioning tedious and time-consuming chores. Rob Coneybeer, managing director of the investment firm Shasta Ventures, which has backed several of these new companies, says the goal is to build a new kind of labor market “where people end up getting paid more per hour than they would have otherwise and find it easier to do jobs they are good at.”

The idea of posting or finding jobs online isn’t new. Craigslist, the pioneering Internet bulletin board, allowed the primitive, gentle folk of the 1990s to find day work, not to mention cheap dates. These new services are different, partly because they’re focused and carefully supervised, and partly because they take advantage of smartphones. Workers can load one of these companies’ apps on their location-aware iPhone or Android device and, if the impulse strikes, take a job near them any time of day. Employers can monitor the whereabouts of their workers, make payments on their phones or over the Web, and evaluate each job after it’s accomplished. The most capable workers then rise to the top of the heap, attracting more work and higher pay. Lollygaggers who don’t know how to recharge their Roombas fall to the bottom of the barrel.

Distributed workforce entrepreneurs and their investors are thinking big. They compare their startups to fast-growing companies such as Airbnb, which allows people to rent out their homes. In this case, the assets for rent are people’s skills and time. Leah Busque, a former IBM (IBM) software engineer who started and runs TaskRabbit, says thousands of people make a living (up to $60,000 a year) on her site, which operates in San Francisco, Los Angeles, New York, Chicago, and five other cities. “We are enabling micro-entrepreneurs to build their own business on top of TaskRabbit, to set their own schedules, specify how much they want to get paid, say what they are good at, and then incorporate the work into their lifestyle,” she says.

Venture capitalists have bet $38 million on TaskRabbit and millions more on similar startups. Other distributed labor companies, with names like IAmExec (be a part-time gofer) and Gigwalk (run errands for companies), are being founded every day. Listening to this entrepreneurial buzz all summer, I got a notion that I couldn’t shake—that the only way to take the temperature of this hot new labor pool was to jump into it.

by Brad Stone, Bloomberg Businessweek |  Read more:

What Was Really Behind the Benghazi Attack?

Were the attacks on the United States Consulate in Benghazi, which killed the American Ambassador and three other diplomats, really motivated by the film that the assailants, and many news networks, claim provoked them? Was it really religious outrage that made a few young men lose their heads and commit murder? Have any of the men who attacked the consulate actually seen the film? I do not know one Libyan who has, despite being in close contact with friends and relatives in Benghazi. And the attack was not preceded by vocal outrage toward the film. Libyan Internet sites and Facebook pages were not suddenly busy with chatter about it.

The film is offensive. It appears that it was made, rather clumsily, with the deliberate intention to offend. And if what happened yesterday was not, as I suspect, motivated by popular outrage, that outrage has now, as it were, caught up with the event. So, some might say, the fact that the attack might have been motivated by different intentions than those stated no longer matters. I don’t think so. It is important to see the incident for what it most likely was.

No specific group claimed responsibility for the attack, which was well orchestrated and involved heavy weapons. It is thought to be the work of the same ultra-religious groups who have perpetrated similar assaults in Benghazi. They are religious, authoritarian groups who justify their actions through very selective, corrupt, and ultimately self-serving interpretations of Islam. Under Qaddafi, they kept quiet. In the early days of the revolution some of them claimed that fighting Qaddafi was un-Islamic and conveniently issued a fatwa demanding full obedience to the ruler. This is Libya’s extreme right. And, while much is still uncertain, Tuesday’s attack appears to have been their attempt to escalate a strategy they have employed ever since the Libyan revolution overthrew Colonel Qaddafi’s dictatorship. They see in these days, in which the new Libya and its young institutions are still fragile, an opportunity to grab power. They want to exploit the impatient resentments of young people in particular in order to disrupt progress and the development of democratic institutions.

Even though they appear to be well funded from abroad and capable of ruthless acts of violence against Libyans and foreigners, these groups have so far failed to gain widespread support. In fact, the opposite: their actions have alienated most Libyans.

Ambassador J. Christopher Stevens was a popular figure in Libya, and nowhere more than in Benghazi. Friends and relatives there tell me that the city is mournful. There have been spontaneous demonstrations denouncing the attack. Popular Libyan Web sites are full of condemnations of those who carried out the assault. And there was a general air of despondency in the city Wednesday night. The streets were not as crowded and bustling as usual. There is a deep and palpable sense that Benghazi, the proud birthplace of the revolution, has failed to protect a highly regarded guest. There is outrage that Tripoli has yet to send government officials to Benghazi to condemn the attacks, instigate the necessary investigations, and visit the Libyan members of the consulate staff who were wounded in the attack. There is anger, too, toward the government’s failure to protect hospitals, courtrooms, and other embassies that have recently suffered similar attacks in Benghazi. The city seems to have been left at the mercy of fanatics. And many fear that it will now become isolated. In fact, several American and European delegates and N.G.O. personnel have cancelled trips they had planned to make to Benghazi.

by Hisham Matar, New Yorker |  Read more:
Photograph by Ibrahim Alaguri/AP Photo

The Machines Are Taking Over

[ed. How computerized tutors are learning to teach humans.]

In a 1984 paper that is regarded as a classic of educational psychology, Benjamin Bloom, a professor at the University of Chicago, showed that being tutored is the most effective way to learn, vastly superior to being taught in a classroom. The experiments headed by Bloom randomly assigned fourth-, fifth- and eighth-grade students to classes of about 30 pupils per teacher, or to one-on-one tutoring. Children tutored individually performed two standard deviations better than children who received conventional classroom instruction — a huge difference. (...)

The morning after I watched Tyler Rogers do his homework, I sat in on his math class at Grafton Middle School. As he and his classmates filed into the classroom, I talked with his teacher, Kim Thienpont, who has taught middle school for 10 years. “As teachers, we get all this training in ‘differentiated instruction’ — adapting our teaching to the needs of each student,” she said. “But in a class of 20 students, with a certain amount of material we have to cover each day, how am I really going to do that?”

ASSISTments, Thienpont told me, made this possible, echoing what I heard from another area math teacher, Barbara Delaney, the day before. Delaney teaches sixth-grade math in nearby Bellingham. Each time her students use the computerized tutor to do their homework, the program collects data on how well they’re doing: which problems they got wrong, how many times they used the hint button. The information is automatically collated into a report, which is available to Delaney on her own computer before the next morning’s class. (Reports on individual students can be accessed by their parents.) “With ASSISTments, I know none of my students are falling through the cracks,” Delaney told me.
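[ed. To make the mechanics concrete: the overnight report Delaney describes is essentially a collation of per-problem attempt records. Here is a minimal sketch in Python, assuming a hypothetical log format; the field names and summaries are illustrative, not ASSISTments' actual data model.]

```python
from collections import defaultdict

# Hypothetical homework log: one record per student per problem.
# (Illustrative fields only; not ASSISTments' actual schema.)
attempts = [
    {"student": "A", "problem": 1, "correct": True,  "hints": 0},
    {"student": "A", "problem": 2, "correct": False, "hints": 2},
    {"student": "B", "problem": 1, "correct": False, "hints": 1},
    {"student": "B", "problem": 2, "correct": True,  "hints": 0},
]

def teacher_report(attempts):
    """Collate per-student and per-problem summaries for the next morning's class."""
    by_student = defaultdict(lambda: {"wrong": 0, "hints": 0})
    by_problem = defaultdict(lambda: {"attempts": 0, "wrong": 0})
    for a in attempts:
        student, problem = by_student[a["student"]], by_problem[a["problem"]]
        student["hints"] += a["hints"]
        problem["attempts"] += 1
        if not a["correct"]:
            student["wrong"] += 1
            problem["wrong"] += 1
    return dict(by_student), dict(by_problem)

students, problems = teacher_report(attempts)
print(students)  # which students struggled, and how often they leaned on hints
print(problems)  # which problems gave the whole class trouble
```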

After completing a few warm-up problems on their school’s iPod Touches, the students turned to the front of the room, where Thienpont projected a spreadsheet of the previous night’s homework. Like stock traders going over the day’s returns, the students scanned the data, comparing their own grades with the class average and picking out the problems that gave their classmates trouble. (“If you got a question wrong, but a lot of other people got it wrong, too, you don’t feel so bad,” Tyler explained.)

Thienpont began by going over “common wrong answers” — incorrect solutions that many students arrived at by following predictable but mistaken lines of reasoning. Or perhaps not so predictable. “Sometimes I’m flabbergasted by the thing all the students get wrong,” Thienpont said. “It’s often a mistake I never would have expected.” Human teachers and tutors are susceptible to what cognitive scientists call the “expert blind spot” — once we’ve mastered a body of knowledge, it’s hard to imagine what novices don’t know — but computers have no such mental block. Highlighting “common wrong answers” allows Thienpont to address shared misconceptions without putting any one student on the spot.
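[ed. The "common wrong answers" feature boils down to a frequency count of incorrect responses per problem. A minimal sketch, using the same hypothetical record format as above:]

```python
from collections import Counter, defaultdict

# Hypothetical responses: (problem, answer_given, was_correct) tuples.
responses = [
    (1, "3/4", False), (1, "3/4", False), (1, "4/3", True),
    (2, "-2", False),  (2, "2", True),    (2, "-2", False),
]

def common_wrong_answers(responses, min_count=2):
    """Incorrect answers shared by at least `min_count` students, grouped by problem."""
    tallies = defaultdict(Counter)
    for problem, answer, correct in responses:
        if not correct:
            tallies[problem][answer] += 1
    return {problem: [(ans, n) for ans, n in counts.most_common() if n >= min_count]
            for problem, counts in tallies.items()}

print(common_wrong_answers(responses))  # {1: [('3/4', 2)], 2: [('-2', 2)]}
```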

I saw another unexpected effect of computerized tutoring in Delaney’s Bellingham classroom. After explaining how to solve a problem that many got wrong on the previous night’s homework, Delaney asked her students to come up with a hint for the next year’s class. Students called out suggested clues, and after a few tries, they arrived at a concise tip. “Congratulations!” she said. “You’ve just helped next year’s sixth graders learn math.” When Delaney’s future pupils press the hint button in ASSISTments, the former students’ advice will appear.

Unlike the proprietary software sold by Carnegie Learning, or by education-technology giants like Pearson, ASSISTments was designed to be modified by teachers and students, in a process that its creator, Neil Heffernan of Worcester Polytechnic Institute (W.P.I.), likens to the crowd-sourcing that created Wikipedia. His latest inspiration is to add a button to each page of ASSISTments that will allow students to access a Web page where they can get more information about, say, a relevant math concept. Heffernan and his W.P.I. colleagues are now developing a system of vetting and ranking the thousands of math-related sites on the Internet.

by Annie Murphy Paul, NY Times |  Read more:
Illustration by Tim Enthoven

Thursday, September 13, 2012

Healthcare's "Massive Transfer of Wealth"

[ed. It really is a massive transfer of wealth to health insurers and health care providers. The end years are expensive - no matter how much you think you've saved to sustain some measure of financial security, you never know if it will be enough. Then, there's the added indignity of having essentially zero control over when, or how, you exit this life. There has to be a better way.]

Here are excerpts from one family's story about the financial aspects of end-of-life-related healthcare:
My aunt, aged 94, died last week. In and of itself, there is nothing remarkable in this statement, except for the fact that she died a pauper and on medical assistance as a ward of the state of Minnesota... 
My aunt and her husband, who died in 1985, were hardworking Americans. The children of Polish immigrants, they tried to live by the rules. Combined, they worked for a total of 80 years in a variety of low-level, white-collar jobs. If they collectively earned $30,000 in any given year, that would have been a lot. 
Yet, somehow, my aunt managed to save more than $250,000. She also received small pensions from the Teamsters Union and the state of California, along with Social Security and a tiny private annuity. In the last decade of her life, her monthly income amounted to about $1,500...
But when she fell ill and had to be placed in assisted living, and finally in a nursing home, her financial fate was sealed. Although she had Medicare and Medicare supplemental insurance, neither of these covered the costs of long-term care. Her savings were now at risk, at a rate of $60,000 a year... 
In the end, she spent everything she had to qualify for Medicaid in Minnesota, which she was on for the last year of her life. This diligent, responsible American woman was pauperized simply because she had the indecency to get terminally ill... 
Though I have not been able to find statistics on the subject, I am certain that there will be a massive transfer of wealth over the next two or three decades, amounting to hundreds of billions of dollars or more, from people just like my aunt to health insurers and health care providers... 
This week, I was about to close out her checking account in the amount of $215, the sum total of her wealth. But I received, in the mail, a bill from a health care provider in the amount of $220. Neither Medicare nor her supplemental insurer will pay it, because it is an unspecified "service not covered."

More details of the story at the StarTribune. Of course, it's just one family's story. Repeated hundreds of thousands of times across the country.
My own mother, age 94, has asked me, "when the time comes" to "put her down."

by Minnesotastan, TYWKIWDBI |  Read more:

© Chris Ware/The New Yorker

Tyranny of Merit


The ideal of meritocracy has deep roots in this country. Jefferson dreamed of a “natural aristocracy.” But the modern meritocracy dates only to the 1930s, when Harvard President James Bryant Conant directed his admissions staff to find a measure of ability to supplement the old boys’ network. They settled on the exam we know as the SAT.

In the decades following World War II, standardized testing replaced the gentleman’s agreements that had governed the Ivy League. First Harvard, then Yale and the rest filled with the sons and eventually daughters of Jews, blue-collar workers, and other groups whose numbers had previously been limited.

After graduation, these newly pedigreed men and women flocked to New York and Washington. There, they took jobs once filled by products of New England boarding schools. One example is Lloyd Blankfein, the Bronx-born son of a Jewish postal clerk, who followed Harvard College and Harvard Law School with a job at a white-shoe law firm, which he left to join Goldman Sachs.

Hayes applauds the replacement of the WASP ascendancy with a more diverse cohort. The core of his book, however, argues that the principle on which they rose inevitably undermines itself.

The argument begins with the observation that meritocracy does not oppose unequal social and economic outcomes. Rather, it tries to justify inequality by offering greater rewards to the talented and hardworking.

The problem is that the effort presumes that everyone has the same chance to compete under the same rules. That may be true at the outset. But equality of opportunity tends to be subverted by the inequality of outcome that meritocracy legitimizes. In short, according to Hayes, “those who are able to climb up the ladder will find ways to pull it up after them, or to selectively lower it down to allow their friends, allies and kin to scramble up. In other words: ‘whoever says meritocracy says oligarchy.’”

With a nod to the early 20th-century German sociologist Robert Michels, Hayes calls this paradox the “Iron Law of Meritocracy.” (...)

Hayes oversells his argument as a unified explanation of the “fail decade.” Although it elucidates some aspects of the Iraq War, Katrina debacle, and financial crisis, these disasters had other causes. Nevertheless, the Iron Law of Meritocracy shows why our elites take the form they do and how they fell so out of touch with reality. In Hayes’s account, the modern elite is caught in a feedback loop that makes it less and less open and more and more isolated from the rest of the country.

What’s to be done? One answer is to rescue meritocracy by providing the poor and middle class with the resources to compete. A popular strategy focuses on education reform. If schools were better, the argument goes, poor kids could compete on an equal footing for entry into the elite. The attempt to rescue meritocracy by fixing education has become a bipartisan consensus, reflected in Bush’s “No Child Left Behind” and Obama’s “Race to the Top.”

Hayes rejects this option. The defect of meritocracy, in his view, is not the inequality of opportunity that it conceals, but the inequality of outcome that it celebrates. In other words, the problem is not that the son of a postal clerk has less chance to become a Wall Street titan than he used to. It’s that the rewards of a career on Wall Street have become so disproportionate to the rewards of the traditional professions, let alone those available to a humble civil servant.

by Samuel Goldman, The American Conservative |  Read more:
Illustration by Michael Hogue

How Do Our Brains Process Music?

I listen to music only at very specific times. When I go out to hear it live, most obviously. When I’m cooking or doing the dishes I put on music, and sometimes other people are present. When I’m jogging or cycling to and from work down New York’s West Side Highway bike path, or if I’m in a rented car on the rare occasions I have to drive somewhere, I listen alone. And when I’m writing and recording music, I listen to what I’m working on. But that’s it.

I find music somewhat intrusive in restaurants or bars. Maybe due to my involvement with it, I feel I have to either listen intently or tune it out. Mostly I tune it out; I often don’t even notice if a Talking Heads song is playing in most public places. Sadly, most music then becomes (for me) an annoying sonic layer that just adds to the background noise.

As music becomes less of a thing—a cylinder, a cassette, a disc—and more ephemeral, perhaps we will start to assign an increasing value to live performances again. After years of hoarding LPs and CDs, I have to admit I’m now getting rid of them. I occasionally pop a CD into a player, but I’ve pretty much completely converted to listening to MP3s either on my computer or, gulp, my phone! For me, music is becoming dematerialized, a state that is more truthful to its nature, I suspect. Technology has brought us full circle.

I go to at least one live performance a week, sometimes with friends, sometimes alone. There are other people there. Often there is beer, too. After more than a hundred years of technological innovation, the digitization of music has inadvertently had the effect of emphasizing its social function. Not only do we still give friends copies of music that excites us, but increasingly we have come to value the social aspect of a live performance more than we used to. Music technology in some ways appears to have been on a trajectory in which the end result is that it will destroy and devalue itself. It will succeed completely when it self-destructs. The technology is useful and convenient, but it has, in the end, reduced its own value and increased the value of the things it has never been able to capture or reproduce.

Technology has altered the way music sounds, how it’s composed and how we experience it. It has also flooded the world with music. The world is awash with (mostly) recorded sounds. We used to have to pay for music or make it ourselves; playing, hearing and experiencing it was exceptional, a rare and special experience. Now hearing it is ubiquitous, and silence is the rarity that we pay for and savor.

Does our enjoyment of music—our ability to find a sequence of sounds emotionally affecting—have some neurological basis? From an evolutionary standpoint, does enjoying music provide any advantage? Is music of any truly practical use, or is it simply baggage that got carried along as we evolved other more obviously useful adaptations? Paleontologist Stephen Jay Gould and biologist Richard Lewontin wrote a paper in 1979 claiming that some of our skills and abilities might be like spandrels—the architectural negative spaces above the curve of the arches of buildings—details that weren’t originally designed as autonomous entities, but that came into being as a result of other, more practical elements around them.

by David Byrne, Smithsonian | Read more:
Photo: Clayton Cubitt

Melody Gardot



Virginia Colback, “Yellow and Grey Abstract”, oil and cement on canvas

Wednesday, September 12, 2012

Tesla Boy


Whoa, Dude, Are We in a Computer Right Now?

Two years ago, Rich Terrile appeared on Through the Wormhole, the Science Channel’s show about the mysteries of life and the universe. He was invited onto the program to discuss the theory that the human experience can be boiled down to something like an incredibly advanced, metaphysical version of The Sims.

It’s an idea that every college student with a gravity bong and The Matrix on DVD has thought of before, but Rich is a well-regarded scientist, the director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, and is currently writing an as-yet-untitled book about the subject, so we’re going to go ahead and take him seriously.

The essence of Rich’s theory is that a “programmer” from the future designed our reality to simulate the course of what the programmer considers to be ancient history—for whatever reason, maybe because he’s bored.

According to Moore’s Law, which states that computing power doubles roughly every two years, all of this will be theoretically possible in the future. Sooner or later, we’ll get to a place where simulating a few billion people—and making them believe they are sentient beings with the ability to control their own destinies—will be as easy as sending a stranger a picture of your genitals on your phone.

This hypothesis—versions of which have been kicked around for centuries—is becoming the trippy notion of the moment for philosophers, with people like Nick Bostrom, the director of Oxford University’s Future of Humanity Institute, seriously considering the premise.

Until recently, the simulation argument hadn’t really attracted traditional researchers. That’s not to say Rich is the first scientist to predict our ability to run realistic simulations (among others, Ray Kurzweil did that in his 1999 book The Age of Spiritual Machines), but he is one of the first to argue that we might already be living inside one. Rich has even gone one step further by attempting to prove his theories through physics, citing things like the observable pixelation of the tiniest matter and the eerie similarities between quantum mechanics, the mathematical rules that govern our universe, and the creation of video game environments.

Just think: Whenever you fuck up there could be the intergalactic version of an overweight 13-year-old Korean boy controlling you and screaming “Shit!” into an Xbox headset. It sort of takes the edge off things.

VICE: When did you first surmise that our reality could be a computer simulation?
Rich Terrile: Unless you believe there’s something magical about consciousness—and I don’t, I believe it’s the product of a very sophisticated architecture within the human brain—then you have to assume that at some point it can be simulated by a computer, or in other words, replicated. There are two ways one might create an artificial human brain in the future. One of them is to reverse-engineer it, but I think it would be far easier to evolve a circuit or architecture that could become conscious. Perhaps in the next ten to 30 years we’ll be able to incorporate artificial consciousness into our machines.

We’ll get there that fast?
Right now the fastest NASA supercomputers are cranking away at about double the speed of the human brain. If you make a simple calculation using Moore’s Law, you’ll find that these supercomputers, inside of a decade, will have the ability to compute an entire human lifetime of 80 years—including every thought ever conceived during that lifetime—in the span of a month.

That’s depressing.
Now brace yourself: In 30 years we expect that a PlayStation—they come out with a new PlayStation every six to eight years, so this would be a PlayStation 7—will be able to compute about 10,000 human lifetimes simultaneously in real time, or about a human lifetime in an hour.

There’s how many PlayStations worldwide? More than 100 million, certainly. So think of 100 million consoles, each one containing 10,000 humans. That means, by that time, conceptually, you could have more humans living in PlayStations than you have humans living on Earth today.
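[ed. For the curious, the shape of this arithmetic is easy to sketch. The snippet below assumes only what the interview states, a two-year doubling period and a present-day supercomputer at roughly two brain-equivalents; the timelines it prints depend entirely on those assumptions and are not Terrile's own calculation.]

```python
import math

# Rough sketch of the exponential scaling behind the interview's claims.
# Assumptions (taken from the text, not measured): computing power doubles every
# DOUBLING years, and today's fastest supercomputer runs at about two
# human-brain-equivalents in real time.
DOUBLING = 2.0        # years per doubling (Moore's Law as stated above)
BRAINS_NOW = 2.0      # real-time brain-equivalents of today's fastest supercomputer
LIFETIME_YEARS = 80.0

def years_until(target_brain_equivalents, brains_now=BRAINS_NOW, doubling=DOUBLING):
    """Years of steady doubling needed to reach a target number of brain-equivalents."""
    return doubling * math.log2(target_brain_equivalents / brains_now)

# Simulating an 80-year lifetime in one month requires 80 * 12 = 960 brain-equivalents.
print(f"Lifetime in a month: ~{years_until(LIFETIME_YEARS * 12):.0f} years of doubling")

# 10,000 lifetimes in real time (the 'PlayStation 7' figure) requires 10,000
# brain-equivalents; here measured from the supercomputer baseline, since where a
# game console starts on that curve is a separate assumption.
print(f"10,000 lifetimes in real time: ~{years_until(10_000):.0f} years of doubling")
```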

So there’s a possibility we’re living in a super advanced game in some bloodshot-eyed goober’s PlayStation right now?
Exactly. The supposition here is how do you know it’s not 30 years in the future now and you’re not one of these simulations? Let me go back a step here. As scientists, we put physical processes into mathematical frameworks, or into an equation. The universe behaves in a very peculiar way because it follows mathematics. Einstein said, “The most incomprehensible thing about the universe is that it’s comprehensible.” The universe does not have to work that way. It does not have to be so easy to abbreviate that I can basically write down a few pages of equations that contain enough information to simulate it.

The other interesting thing is that the natural world behaves exactly the same way as the environment of Grand Theft Auto IV. In the game, you can explore Liberty City seamlessly in phenomenal detail. I made a calculation of how big that city is, and it turns out it’s a million times larger than my PlayStation 3. You see exactly what you need to see of Liberty City when you need to see it, abbreviating the entire game universe into the console. The universe behaves in the exact same way. In quantum mechanics, particles do not have a definite state unless they’re being observed. Many theorists have spent a lot of time trying to figure out how you explain this. One explanation is that we’re living within a simulation, seeing what we need to see when we need to see it.

Which would explain why there have been reports of scientists observing pixels in the tiniest of microscopic images.
Right. The universe is also pixelated—in time, space, volume, and energy. There exists a fundamental unit that you cannot break down into anything smaller, which means the universe is made of a finite number of these units. This also means there are a finite number of things the universe can be; it’s not infinite, so it’s computable. And if it only behaves in a finite way when it’s being observed, then the question is: Is it being computed? Then there’s a mathematical parallel. If two things are mathematically equivalent, they’re the same. So the universe is mathematically equivalent to the simulation of the universe.

by Ben Makuch, Vice |  Read more:
Illustration By Julian Garcia

Bill Clinton Shows How It's Done


Bill Clinton spoke for nearly 50 minutes. His speech was dense, didactic and loaded with statistics and details. The paper version handed out to reporters took up four single-spaced pages in a tiny font, and he departed from it frequently. It may have been the most effective speech of either political convention.

The reason wasn't Clinton's oft-hyped "charisma," some kind of intangible political magnetism. Sure, Clinton has that -- a remarkable looseness and intimacy that draws listeners powerfully into his aura. But the strength of his speech came in its efforts to persuade.

Clinton made arguments. He talked through his reasoning. He went point by point through the case he wanted to make. He kept telling the audience he was talking to them and he wanted them to listen. In an age when so many political speeches are pure acts of rhetoric, full of stirring sentiments but utterly devoid of informational value -- when trying to win people over to your point of view is cynically assumed to be futile, so you settle for riling them up instead -- Clinton's felt like a whole different thing. In an era of detergent commercials, he delivered a real political speech.

by Molly Ball, The Atlantic |  Read more: