Friday, September 14, 2012

Three Reasons to Salute Ben Bernanke

It’s time to give Ben Bernanke some credit. Under attack from the left and right for much of the past year, the mild-mannered former Princeton prof has shown some leadership and pushed through a major policy shift. In committing the Fed to buying tens of billions of dollars’ worth of mortgage bonds every month until the jobless rate, currently 8.1 per cent, falls markedly, Bernanke and his colleagues on the Fed’s policy-making committee have finally demonstrated that they won’t stand aside as tens of millions of Americans suffer the harsh consequences of a recession that was largely made on Wall Street.

I’ve had my ups and downs with Bernanke, whom I profiled at length back in 2008. At the start of the year, I thought critics were giving him a raw deal. With short-term interest rates close to zero (where they’ve been since December, 2008), and with job growth seemingly picking up, the calls for more Fed action seemed overstated. But over the past six months, as the recovery sputtered and Bernanke dithered, I, too, ran out of patience with him. In a column in Fortune last month, I even suggested that Barack Obama should have replaced him when he had the chance, back in 2010.

It turns out that Bernanke was merely biding his time. I still think the Fed should have moved earlier. Once it became clear that slower G.D.P. growth, rather than some statistical aberration, was generating the big falloff in job creation we saw from March onwards, there was no justification for inaction. But Bernanke has now rectified the error—and then some. For at least three reasons, Thursday’s move was a historic one, which merits a loud salute:

1. Bernanke exceeded expectations. For several months now, he has been saying that the Fed would eventually act if the labor market didn’t improve of its own accord. In Jackson Hole last month, at the Fed’s annual policy gathering, he strongly hinted at another round of quantitative easing—the practice of exploiting the Fed’s capacity to create money and making large-scale purchases of bonds, which puts downward pressure on interest rates, which, in turn, spurs spending and job creation—at least in theory.

The Fed has tried this policy twice before, in 2009/10 (QE1) and 2010/11 (QE2). In retrospect, it was a big mistake to abandon QE2 just as the Obama Administration’s fiscal stimulus, which had provided support to the economy from 2009 to 2011, was running down. The experience of Japan demonstrates that in the aftermath of asset-price busts, when households and firms are seeking to pay down their debts, the prolonged maintenance of monetary and fiscal stimulus is necessary to prevent a semi-permanent slump.

Bernanke didn’t publicly concede on Thursday that he had blundered—that would be asking too much. But in announcing the terms of QE3, he went considerably further than most observers had been expecting. The two previous rounds of quantitative easing were term limited: this one isn’t. Rather, its duration will be directly linked to the jobs picture. “(I)f the outlook for the labor market does not improve substantially, the Committee will continue its purchases of agency mortgage-backed securities, undertake additional asset purchases, and employ its other policy tools as appropriate until such improvement is achieved…” the Fed said in a statement.

by John Cassidy, New Yorker |  Read more:
Photo: Platon

My Life as a TaskRabbit

Standing in the living room of his luxurious two-bedroom apartment, which has sweeping views of the San Francisco Bay, Curtis Jackson informs me that I am a terrible housecleaner. There are soap stains on the walls of his master bathroom and pools of water gathering near the edges of the tub. My Roomba vacuum, we discover after a lengthy and humiliating search, is out of power and stuck under a bed. There’s an entire room that I didn’t know about and thus never cleaned. I also neglected to take out the trash and left the living room coated in the noxious perfume of an organic cedar disinfectant. “I respect what you are trying to do, and you did an OK job in the time allotted,” he says. “But frankly, stick to being a reporter.”

The apartment is one stop in the middle of my short, backbreaking, soul-draining journey into what Silicon Valley venture capitalists often call the distributed workforce. This is the fancy term for the marketplace for odd jobs hosted by the site TaskRabbit, the get-me-a-soy-latte errands offered by the courier service Postmates, and the car washing assignments aggregated by yet another venture, called Cherry. These companies and several others are in the business of organizing and auctioning tedious and time-consuming chores. Rob Coneybeer, managing director of the investment firm Shasta Ventures, which has backed several of these new companies, says the goal is to build a new kind of labor market “where people end up getting paid more per hour than they would have otherwise and find it easier to do jobs they are good at.”

The idea of posting or finding jobs online isn’t new. Craigslist, the pioneering Internet bulletin board, allowed the primitive, gentle folk of the 1990s to find day work, not to mention cheap dates. These new services are different, partly because they’re focused and carefully supervised, and partly because they take advantage of smartphones. Workers can load one of these companies’ apps on their location-aware iPhone or Android device and, if the impulse strikes, take a job near them any time of day. Employers can monitor the whereabouts of their workers, make payments on their phones or over the Web, and evaluate each job after it’s accomplished. The most capable workers then rise to the top of the heap, attracting more work and higher pay. Lollygaggers who don’t know how to recharge their Roombas fall to the bottom of the barrel.

Distributed workforce entrepreneurs and their investors are thinking big. They compare their startups to fast-growing companies such as Airbnb, which allows people to rent out their homes. In this case, the assets for rent are people’s skills and time. Leah Busque, a former IBM software engineer who started and runs TaskRabbit, says thousands of people make a living (up to $60,000 a year) on her site, which operates in San Francisco, Los Angeles, New York, Chicago, and five other cities. “We are enabling micro-entrepreneurs to build their own business on top of TaskRabbit, to set their own schedules, specify how much they want to get paid, say what they are good at, and then incorporate the work into their lifestyle,” she says.

Venture capitalists have bet $38 million on TaskRabbit and millions more on similar startups. Other distributed labor companies, with names like IAmExec (be a part-time gopher) and Gigwalk (run errands for companies), are being founded every day. Listening to this entrepreneurial buzz all summer, I got a notion that I couldn’t shake—that the only way to take the temperature of this hot new labor pool was to jump into it.

by Brad Stone, Bloomberg Businessweek |  Read more:

What Was Really Behind the Benghazi Attack?

Were the attacks on the United States Consulate in Benghazi, which killed the American Ambassador and three other diplomats, motivated by the film, as the assailants, and many news networks, claim? Was it really religious outrage that made a few young men lose their heads and commit murder? Have any of the men who attacked the consulate actually seen the film? I do not know one Libyan who has, despite being in close contact with friends and relatives in Benghazi. And the attack was not preceded by vocal outrage toward the film. Libyan Internet sites and Facebook pages were not suddenly busy with chatter about it.

The film is offensive. It appears that it was made, rather clumsily, with the deliberate intention to offend. And if what happened yesterday was not, as I suspect, motivated by popular outrage, that outrage has now, as it were, caught up with the event. So, some might say, the fact that the attack might have been motivated by different intentions than those stated no longer matters. I don’t think so. It is important to see the incident for what it most likely was.

No specific group claimed responsibility for the attack, which was well orchestrated and involved heavy weapons. It is thought to be the work of the same ultra-religious groups who have perpetrated similar assaults in Benghazi. They are religious, authoritarian groups who justify their actions through very selective, corrupt, and ultimately self-serving interpretations of Islam. Under Qaddafi, they kept quiet. In the early days of the revolution some of them claimed that fighting Qaddafi was un-Islamic and conveniently issued a fatwa demanding full obedience to the ruler. This is Libya’s extreme right. And, while much is still uncertain, Tuesday’s attack appears to have been their attempt to escalate a strategy they have employed ever since the Libyan revolution overthrew Colonel Qaddafi’s dictatorship. They see in these days, in which the new Libya and its young institutions are still fragile, an opportunity to grab power. They want to exploit the impatient resentments of young people in particular in order to disrupt progress and the development of democratic institutions.

Even though they appear to be well funded from abroad and capable of ruthless acts of violence against Libyans and foreigners, these groups have so far failed to gain widespread support. In fact, the opposite: their actions have alienated most Libyans.

Ambassador J. Christopher Stevens was a popular figure in Libya, and nowhere more than in Benghazi. Friends and relatives there tell me that the city is mournful. There have been spontaneous demonstrations denouncing the attack. Popular Libyan Web sites are full of condemnations of those who carried out the assault. And there was a general air of despondency in the city Wednesday night. The streets were not as crowded and bustling as usual. There is a deep and palpable sense that Benghazi, the proud birthplace of the revolution, has failed to protect a highly regarded guest. There is outrage that Tripoli is yet to send government officials to Benghazi to condemn the attacks, instigate the necessary investigations and visit the Libyan members of the consulate staff who were wounded in the attack. There is anger, too, toward the government’s failure to protect hospitals, courtrooms, and other embassies that have recently suffered similar attacks in Benghazi. The city seems to have been left at the mercy of fanatics. And many fear that it will now become isolated. In fact, several American and European delegates and N.G.O. personnel have cancelled trips they had planned to make to Benghazi.

by Hisham Matar, New Yorker |  Read more:
Photograph by Ibrahim Alaguri/AP Photo

The Machines Are Taking Over

[ed. How computerized tutors are learning to teach humans.]

In a 1984 paper that is regarded as a classic of educational psychology, Benjamin Bloom, a professor at the University of Chicago, showed that being tutored is the most effective way to learn, vastly superior to being taught in a classroom. The experiments headed by Bloom randomly assigned fourth-, fifth- and eighth-grade students to classes of about 30 pupils per teacher, or to one-on-one tutoring. Children tutored individually performed two standard deviations better than children who received conventional classroom instruction — a huge difference. (...)
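
[ed. To make "two standard deviations" concrete: if test scores are roughly normal, the average tutored student scores higher than about 98 percent of the conventionally taught class. A minimal sketch of that arithmetic (my own illustration, not from the article):]

```python
# Illustrative only: converts Bloom's "2 sigma" advantage into a percentile,
# assuming approximately normal test scores.
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

advantage_sd = 2.0  # tutored students scored ~2 standard deviations higher
percentile = normal_cdf(advantage_sd) * 100
print(f"A +{advantage_sd:.0f} SD student outperforms roughly "
      f"{percentile:.0f}% of the comparison class.")
# prints: A +2 SD student outperforms roughly 98% of the comparison class.
```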

The morning after I watched Tyler Rogers do his homework, I sat in on his math class at Grafton Middle School. As he and his classmates filed into the classroom, I talked with his teacher, Kim Thienpont, who has taught middle school for 10 years. “As teachers, we get all this training in ‘differentiated instruction’ — adapting our teaching to the needs of each student,” she said. “But in a class of 20 students, with a certain amount of material we have to cover each day, how am I really going to do that?”

ASSISTments, Thienpont told me, made this possible, echoing what I heard from another area math teacher, Barbara Delaney, the day before. Delaney teaches sixth-grade math in nearby Bellingham. Each time her students use the computerized tutor to do their homework, the program collects data on how well they’re doing: which problems they got wrong, how many times they used the hint button. The information is automatically collated into a report, which is available to Delaney on her own computer before the next morning’s class. (Reports on individual students can be accessed by their parents.) “With ASSISTments, I know none of my students are falling through the cracks,” Delaney told me.
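
[ed. For the curious, a toy sketch of the kind of collation the article describes, including the "common wrong answers" discussed below. The field names and report format are invented for illustration, not ASSISTments' actual schema.]

```python
# Toy aggregation of homework logs into a next-morning teacher report.
# (Invented record structure; not the real ASSISTments data model.)
from collections import Counter, defaultdict

# One record per student per problem, as a tutoring system might log them.
logs = [
    {"student": "T.R.", "problem": "3-4a", "answer": "12", "correct": False, "hints": 2},
    {"student": "J.S.", "problem": "3-4a", "answer": "12", "correct": False, "hints": 0},
    {"student": "M.K.", "problem": "3-4a", "answer": "7",  "correct": True,  "hints": 1},
    {"student": "T.R.", "problem": "3-4b", "answer": "5",  "correct": True,  "hints": 0},
]

wrong = defaultdict(Counter)   # problem -> tally of wrong answers given
hints = Counter()              # student -> total hint-button presses

for rec in logs:
    if not rec["correct"]:
        wrong[rec["problem"]][rec["answer"]] += 1
    hints[rec["student"]] += rec["hints"]

for problem, answers in wrong.items():
    common, count = answers.most_common(1)[0]
    print(f"{problem}: most common wrong answer {common!r} ({count} students)")
print("hint use per student:", dict(hints))
```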

After completing a few warm-up problems on their school’s iPod Touches, the students turned to the front of the room, where Thienpont projected a spreadsheet of the previous night’s homework. Like stock traders going over the day’s returns, the students scanned the data, comparing their own grades with the class average and picking out the problems that gave their classmates trouble. (“If you got a question wrong, but a lot of other people got it wrong, too, you don’t feel so bad,” Tyler explained.)

Thienpont began by going over “common wrong answers” — incorrect solutions that many students arrived at by following predictable but mistaken lines of reasoning. Or perhaps, not so predictable. “Sometimes I’m flabbergasted by the thing all the students get wrong,” Thienpont said. “It’s often a mistake I never would have expected.” Human teachers and tutors are susceptible to what cognitive scientists call the “expert blind spot” — once we’ve mastered a body of knowledge, it’s hard to imagine what novices don’t know — but computers have no such mental block. Highlighting “common wrong answers” allows Thienpont to address shared misconceptions without putting any one student on the spot.

I saw another unexpected effect of computerized tutoring in Delaney’s Bellingham classroom. After explaining how to solve a problem that many got wrong on the previous night’s homework, Delaney asked her students to come up with a hint for the next year’s class. Students called out suggested clues, and after a few tries, they arrived at a concise tip. “Congratulations!” she said. “You’ve just helped next year’s sixth graders learn math.” When Delaney’s future pupils press the hint button in ASSISTments, the former students’ advice will appear.

Unlike the proprietary software sold by Carnegie Learning, or by education-technology giants like Pearson, ASSISTments was designed to be modified by teachers and students, in a process Heffernan likens to the crowd-sourcing that created Wikipedia. His latest inspiration is to add a button to each page of ASSISTments that will allow students to access a Web page where they can get more information about, say, a relevant math concept. Heffernan and his W.P.I. colleagues are now developing a system of vetting and ranking the thousands of math-related sites on the Internet.

by Annie Murphy Paul, NY Times |  Read more:
Illustration by Tim Enthoven

Thursday, September 13, 2012

Healthcare's "Massive Transfer of Wealth"

[ed. It really is a massive transfer of wealth to health insurers and health care providers. The end years are expensive - no matter how much you think you've saved to sustain some measure of financial security, you never know if it will be enough. Then, there's the added indignity of having essentially zero control over when, or how, you exit this life. There has to be a better way.]

Here are excerpts from one family's story about the financial aspects of end-of-life-related healthcare:
My aunt, aged 94, died last week. In and of itself, there is nothing remarkable in this statement, except for the fact that she died a pauper and on medical assistance as a ward of the state of Minnesota... 
My aunt and her husband, who died in 1985, were hardworking Americans. The children of Polish immigrants, they tried to live by the rules. Combined, they worked for a total of 80 years in a variety of low-level, white-collar jobs. If they collectively earned $30,000 in any given year, that would have been a lot. 
Yet, somehow, my aunt managed to save more than $250,000. She also received small pensions from the Teamsters Union and the state of California, along with Social Security and a tiny private annuity. In the last decade of her life, her monthly income amounted to about $1,500... 
But when she fell ill and had to be placed in assisted living, and finally in a nursing home, her financial fate was sealed. Although she had Medicare and Medicare supplemental insurance, neither of these covered the costs of long-term care. Her savings were now at risk, at a rate of $60,000 a year... 
In the end, she spent everything she had to qualify for Medicaid in Minnesota, which she was on for the last year of her life. This diligent, responsible American woman was pauperized simply because she had the indecency to get terminally ill... 
Though I have not been able to find statistics on the subject, I am certain that there will be a massive transfer of wealth over the next two or three decades, amounting to hundreds of billions of dollars or more, from people just like my aunt to health insurers and health care providers... 
This week, I was about to close out her checking account in the amount of $215, the sum total of her wealth. But I received, in the mail, a bill from a health care provider in the amount of $220. Neither Medicare nor her supplemental insurer will pay it, because it is an unspecified "service not covered."

More details of the story at the StarTribune. Of course, it's just one family's story. Repeated hundreds of thousands of times across the country.
My own mother, age 94, has asked me, "when the time comes" to "put her down."

by Minnesotastan, TYWKIWDBI |  Read more:

© Chris Ware/The New Yorker

Tyranny of Merit


The ideal of meritocracy has deep roots in this country. Jefferson dreamed of a “natural aristocracy.” But the modern meritocracy dates only to the 1930s, when Harvard President James Bryant Conant directed his admissions staff to find a measure of ability to supplement the old boys’ network. They settled on the exam we know as the SAT.

In the decades following World War II, standardized testing replaced the gentleman’s agreements that had governed the Ivy League. First Harvard, then Yale and the rest filled with the sons and eventually daughters of Jews, blue-collar workers, and other groups whose numbers had previously been limited.

After graduation, these newly pedigreed men and women flocked to New York and Washington. There, they took jobs once filled by products of New England boarding schools. One example is Lloyd Blankfein, the Bronx-born son of a Jewish postal clerk, who followed Harvard College and Harvard Law School with a job at a white-shoe law firm, which he left to join Goldman Sachs.

Hayes applauds the replacement of the WASP ascendancy with a more diverse cohort. The core of his book, however, argues that the principle on which they rose inevitably undermines itself.

The argument begins with the observation that meritocracy does not oppose unequal social and economic outcomes. Rather, it tries to justify inequality by offering greater rewards to the talented and hardworking.

The problem is that the effort presumes that everyone has the same chance to compete under the same rules. That may be true at the outset. But equality of opportunity tends to be subverted by the inequality of outcome that meritocracy legitimizes. In short, according to Hayes, “those who are able to climb up the ladder will find ways to pull it up after them, or to selectively lower it down to allow their friends, allies and kin to scramble up. In other words: ‘whoever says meritocracy says oligarchy.’”

With a nod to the early 20th-century German sociologist Robert Michels, Hayes calls this paradox the “Iron Law of Meritocracy.” (...)

Hayes oversells his argument as a unified explanation of the “fail decade.” Although it elucidates some aspects of the Iraq War, Katrina debacle, and financial crisis, these disasters had other causes. Nevertheless, the Iron Law of Meritocracy shows why our elites take the form they do and how they fell so out of touch with reality. In Hayes’s account, the modern elite is caught in a feedback loop that makes it less and less open and more and more isolated from the rest of the country.

What’s to be done? One answer is to rescue meritocracy by providing the poor and middle class with the resources to compete. A popular strategy focuses on education reform. If schools were better, the argument goes, poor kids could compete on an equal footing for entry into the elite. The attempt to rescue meritocracy by fixing education has become a bipartisan consensus, reflected in Bush’s “No Child Left Behind” and Obama’s “Race to the Top.”

Hayes rejects this option. The defect of meritocracy, in his view, is not the inequality of opportunity that it conceals, but the inequality of outcome that it celebrates. In other words, the problem is not that the son of a postal clerk has less chance to become a Wall Street titan than he used to. It’s that the rewards of a career on Wall Street have become so disproportionate to the rewards of the traditional professions, let alone those available to a humble civil servant.

by Samuel Goldman, The American Conservative |  Read more:
Illustration by Michael Hogue

How Do Our Brains Process Music?

I listen to music only at very specific times. When I go out to hear it live, most obviously. When I’m cooking or doing the dishes I put on music, and sometimes other people are present. When I’m jogging or cycling to and from work down New York’s West Side Highway bike path, or if I’m in a rented car on the rare occasions I have to drive somewhere, I listen alone. And when I’m writing and recording music, I listen to what I’m working on. But that’s it.

I find music somewhat intrusive in restaurants or bars. Maybe due to my involvement with it, I feel I have to either listen intently or tune it out. Mostly I tune it out; I often don’t even notice if a Talking Heads song is playing in most public places. Sadly, most music then becomes (for me) an annoying sonic layer that just adds to the background noise.

As music becomes less of a thing—a cylinder, a cassette, a disc—and more ephemeral, perhaps we will start to assign an increasing value to live performances again. After years of hoarding LPs and CDs, I have to admit I’m now getting rid of them. I occasionally pop a CD into a player, but I’ve pretty much completely converted to listening to MP3s either on my computer or, gulp, my phone! For me, music is becoming dematerialized, a state that is more truthful to its nature, I suspect. Technology has brought us full circle.

I go to at least one live performance a week, sometimes with friends, sometimes alone. There are other people there. Often there is beer, too. After more than a hundred years of technological innovation, the digitization of music has inadvertently had the effect of emphasizing its social function. Not only do we still give friends copies of music that excites us, but increasingly we have come to value the social aspect of a live performance more than we used to. Music technology in some ways appears to have been on a trajectory in which the end result is that it will destroy and devalue itself. It will succeed completely when it self-destructs. The technology is useful and convenient, but it has, in the end, reduced its own value and increased the value of the things it has never been able to capture or reproduce.

Technology has altered the way music sounds, how it’s composed and how we experience it. It has also flooded the world with music. The world is awash with (mostly) recorded sounds. We used to have to pay for music or make it ourselves; playing, hearing and experiencing it was exceptional, a rare and special experience. Now hearing it is ubiquitous, and silence is the rarity that we pay for and savor.

Does our enjoyment of music—our ability to find a sequence of sounds emotionally affecting—have some neurological basis? From an evolutionary standpoint, does enjoying music provide any advantage? Is music of any truly practical use, or is it simply baggage that got carried along as we evolved other more obviously useful adaptations? Paleontologist Stephen Jay Gould and biologist Richard Lewontin wrote a paper in 1979 claiming that some of our skills and abilities might be like spandrels—the architectural negative spaces above the curve of the arches of buildings—details that weren’t originally designed as autonomous entities, but that came into being as a result of other, more practical elements around them.

by David Byrne, Smithsonian | Read more:
Photo: Clayton Cubitt

Melody Gardot



Virginia Colback, “Yellow and Grey Abstract”, oil and cement on canvas

Wednesday, September 12, 2012

Tesla Boy


Whoa, Dude, Are We in a Computer Right Now?

Two years ago, Rich Terrile appeared on Through the Wormhole, the Science Channel’s show about the mysteries of life and the universe. He was invited onto the program to discuss the theory that the human experience can be boiled down to something like an incredibly advanced, metaphysical version of The Sims.

It’s an idea that every college student with a gravity bong and The Matrix on DVD has thought of before, but Rich is a well-regarded scientist, the director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory, and is currently writing an as-yet-untitled book about the subject, so we’re going to go ahead and take him seriously.

The essence of Rich’s theory is that a “programmer” from the future designed our reality to simulate the course of what the programmer considers to be ancient history—for whatever reason, maybe because he’s bored.

According to Moore’s Law, which states that computing power doubles roughly every two years, all of this will be theoretically possible in the future. Sooner or later, we’ll get to a place where simulating a few billion people—and making them believe they are sentient beings with the ability to control their own destinies—will be as easy as sending a stranger a picture of your genitals on your phone.

This hypothesis—versions of which have been kicked around for centuries—is becoming the trippy notion of the moment for philosophers, with people like Nick Bostrom, the director of Oxford University’s Future of Humanity Institute, seriously considering the premise.

Until recently, the simulation argument hadn’t really attracted traditional researchers. That’s not to say Rich is the first scientist to predict our ability to run realistic simulations (among others, Ray Kurzweil did that in his 1999 book The Age of Spiritual Machines), but he is one of the first to argue we might already be living inside one. Rich has even gone one step further by attempting to prove his theories through physics, citing things like the observable pixelation of the tiniest matter and the eerie similarities between quantum mechanics, the mathematical rules that govern our universe, and the creation of video game environments.

Just think: Whenever you fuck up there could be the intergalactic version of an overweight 13-year-old Korean boy controlling you and screaming “Shit!” into an Xbox headset. It sort of takes the edge off things.

VICE: When did you first surmise that our reality could be a computer simulation?
Rich Terrile: Unless you believe there’s something magical about consciousness—and I don’t, I believe it’s the product of a very sophisticated architecture within the human brain—then you have to assume that at some point it can be simulated by a computer, or in other words, replicated. There are two ways one might accomplish an artificial human brain in the future. One of them is to reverse-engineer it, but I think it would be far easier to evolve a circuit or architecture that could become conscious. Perhaps in the next ten to 30 years we’ll be able to incorporate artificial consciousness into our machines.

We’ll get there that fast?
Right now the fastest NASA supercomputers are cranking away at about double the speed of the human brain. If you make a simple calculation using Moore’s Law, you’ll find that these supercomputers, inside of a decade, will have the ability to compute an entire human lifetime of 80 years—including every thought ever conceived during that lifetime—in the span of a month.

That’s depressing.
Now brace yourself: In 30 years we expect that a PlayStation—they come out with a new PlayStation every six to eight years, so this would be a PlayStation 7—will be able to compute about 10,000 human lifetimes simultaneously in real time, or about a human lifetime in an hour.

There’s how many PlayStations worldwide? More than 100 million, certainly. So think of 100 million consoles, each one containing 10,000 humans. That means, by that time, conceptually, you could have more humans living in PlayStations than you have humans living on Earth today.
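
[ed. A rough sketch of the back-of-the-envelope arithmetic in this exchange. The two-times starting ratio and the console figures come from Terrile's statements above; the doubling periods are illustrative assumptions, and as the sketch shows, the "lifetime in a month" figure needs performance to double closer to every year than every two.]

```python
# Toy projection, not Terrile's actual model; all inputs are taken from the
# interview or are illustrative assumptions.

LIFETIME_YEARS = 80.0   # the "entire human lifetime" being simulated
START_RATIO = 2.0       # machines today ~2x the speed of the human brain (claimed above)

def ratio_after(years, doubling_period):
    """Speed advantage over the brain after `years` of Moore's-Law growth."""
    return START_RATIO * 2 ** (years / doubling_period)

def months_to_simulate_lifetime(years_from_now, doubling_period):
    """Wall-clock months needed to simulate an 80-year lifetime."""
    return LIFETIME_YEARS / ratio_after(years_from_now, doubling_period) * 12

for doubling in (2.0, 1.0):   # "every two years" vs. a faster, yearly doubling
    months = months_to_simulate_lifetime(10, doubling)
    print(f"doubling every {doubling:g} yr: an 80-year lifetime in "
          f"~{months:.1f} months a decade from now")

# The console comparison from the answer above:
consoles = 100e6                # ">100 million" PlayStations worldwide
lifetimes_per_console = 10_000  # claimed capacity of a future "PlayStation 7"
print(f"simulated humans: {consoles * lifetimes_per_console:.0e} "
      f"vs. ~7e9 people on Earth (2012)")
```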

So there’s a possibility we’re living in a super advanced game in some bloodshot-eyed goober’s PlayStation right now?
Exactly. The supposition here is how do you know it’s not 30 years in the future now and you’re not one of these simulations? Let me go back a step here. As scientists, we put physical processes into mathematical frameworks, or into an equation. The universe behaves in a very peculiar way because it follows mathematics. Einstein said, “The most incomprehensible thing about the universe is that it’s comprehensible.” The universe does not have to work that way. It does not have to be so easy to abbreviate that I can basically write down a few pages of equations that contain enough information to simulate it.

The other interesting thing is that the natural world behaves exactly the same way as the environment of Grand Theft Auto IV. In the game, you can explore Liberty City seamlessly in phenomenal detail. I made a calculation of how big that city is, and it turns out it’s a million times larger than my PlayStation 3. You see exactly what you need to see of Liberty City when you need to see it, abbreviating the entire game universe into the console. The universe behaves in the exact same way. In quantum mechanics, particles do not have a definite state unless they’re being observed. Many theorists have spent a lot of time trying to figure out how you explain this. One explanation is that we’re living within a simulation, seeing what we need to see when we need to see it.

Which would explain why there have been reports of scientists observing pixels in the tiniest of microscopic images.
Right. The universe is also pixelated—in time, space, volume, and energy. There exists a fundamental unit that you cannot break down into anything smaller, which means the universe is made of a finite number of these units. This also means there are a finite number of things the universe can be; it’s not infinite, so it’s computable. And if it only behaves in a finite way when it’s being observed, then the question is: Is it being computed? Then there’s a mathematical parallel. If two things are mathematically equivalent, they’re the same. So the universe is mathematically equivalent to the simulation of the universe.

by Ben Makuch, Vice |  Read more:
Illustration By Julian Garcia

Bill Clinton Shows How It's Done


Bill Clinton spoke for nearly 50 minutes. His speech was dense, didactic and loaded with statistics and details. The paper version handed out to reporters took up four single-spaced pages in a tiny font, and he departed from it frequently. It may have been the most effective speech of either political convention.

The reason wasn't Clinton's oft-hyped "charisma," some kind of intangible political magnetism. Sure, Clinton has that -- a remarkable looseness and intimacy that draws listeners powerfully into his aura. But the strength of his speech came in its efforts to persuade.

Clinton made arguments. He talked through his reasoning. He went point by point through the case he wanted to make. He kept telling the audience he was talking to them and he wanted them to listen. In an age when so many political speeches are pure acts of rhetoric, full of stirring sentiments but utterly devoid of informational value -- when trying to win people over to your point of view is cynically assumed to be futile, so you settle for riling them up instead -- Clinton's felt like a whole different thing. In an era of detergent commercials, he delivered a real political speech.

by Molly Ball, The Atlantic |  Read more:

Coming Apart

Of the three attacks that have provoked the United States into a major war—in 1861, 1941, and 2001—only one came as a complete surprise. Fort Sumter had been under siege for months when, just before daybreak on April 12, 1861, Confederate batteries around Charleston Harbor, after giving an hour’s notice, opened fire on the Federal position. The Japanese attack at Pearl Harbor, on December 7, 1941, was a violent shock, but only in the nature and extent of the destruction: by then, most Americans had come to believe that the country would be dragged into the global war with Fascism one way or another, though their eyes were fixed on Europe, not the Pacific.

The attacks of 9/11 were the biggest surprise in American history, and for the past ten years we haven’t stopped being surprised. The war on terror has had no discernible trajectory, and, unlike other military conflicts, it’s almost impossible to define victory. You can’t document the war’s progress on a world map or chart it on a historical timetable in a way that makes any sense. A country used to a feeling of command and control has been whipsawed into a state of perpetual reaction, swinging wildly between passive fear and fevered, often thoughtless, activity, at a high cost to its self-confidence. Each new episode has been hard, if not impossible, to predict: from the first instant of the attacks to the collapse of the towers; from the decision to invade Iraq to the failure to find a single weapon of mass destruction; from the insurgency to the surge; from the return of the Taliban to the Arab Spring to the point-blank killing of bin Laden; from the financial crisis to the landslide election of Barack Obama and his nearly immediate repudiation.

Adam Goodheart’s new book, “1861: The Civil War Awakening,” shows that the start of the conflict was accompanied, in what was left of the Union, by a revolutionary surge of energy among young people, who saw the dramatic events of that year in terms of the ideals of 1776. Almost two years before the Emancipation Proclamation, millions of Americans already understood that this was to be a war for or against slavery. Goodheart writes, “The war represented the overdue effort to sort out the double legacy of America’s founders: the uneasy marriage of the Declaration’s inspired ideals with the Constitution’s ingenious expedients.”

Pearl Harbor was similarly clarifying. It put an instant end to the isolationism that had kept American foreign policy in a chokehold for two decades. In the White House on the night of December 7th, Franklin Roosevelt’s Navy Secretary, Frank Knox, whispered to Secretary of Labor Frances Perkins, “I think the boss must have a great load off his mind. . . . At least we know what to do now.” The Second World War brought a truce in the American class war that had raged throughout the thirties, and it unified a bitterly divided country. By the time of the Japanese surrender, the Great Depression was over and America had been transformed.

This isn’t to deny that there were fierce arguments, at the time and ever since, about the causes and goals of both the Civil War and the Second World War. But 1861 and 1941 each created a common national narrative (which happened to be the victors’ narrative): both wars were about the country’s survival and the expansion of the freedoms on which it was founded. Nothing like this consensus has formed around September 11th. On the interstate south of Mount Airy, there’s a recruiting billboard with the famous image of marines raising the flag at Iwo Jima, and the slogan “For Our Nation. For Us All.” In recent years, “For Us All” has been a fantasy. Indeed, the decade since the attacks has destroyed the very possibility of a common national narrative in this country.

The attacks, so unforeseen, presented a tremendous challenge, one that a country in better shape would have found a way to address. This challenge began on the level of definition and understanding. The essential problem was one of asymmetry: the enemy was nineteen Arab men in suits, holding commercial-airline tickets. They were under the command not of a government but, rather, of a shadowy organization whose name no one could pronounce, consisting of an obscure Saudi-in-exile and his several thousand followers hiding out in the Afghan desert. The damage caused by the attacks spread outward from Ground Zero through the whole global economy—but, even so, these acts of terrorism were different only in degree from earlier truck, car, and boat bombings. When other terrorists had tried, in 1993, what the hijackers achieved in 2001, their failure to bring down one of the Twin Towers had been categorized as a crime, to be handled by a federal court. September 11th, too, was a crime—one that, by imagination, skill, and luck, produced the effects of a war.

But it was also a crime linked to one of the largest and most destructive political tendencies in the modern world: radical Islamism. Al Qaeda was its self-appointed vanguard, but across the Muslim countries there were other, more local organizations that, for nearly three decades, had been killing thousands of people in the name of this ideology. Several regimes—Iran, Sudan, Saudi Arabia, Pakistan—officially subscribed to some variant of radical Islamism, tolerating or even supporting terrorists. Millions of Muslims, while not adherents of Al Qaeda’s most nihilistic fantasies, identified with its resentments and welcomed the attacks as overdue justice against American tyranny.

A crime that felt like a war, waged by a group of stateless men occupying the fringe of a widespread ideology, who called themselves holy warriors and wanted to provoke the superpower into responding with more war: this was something entirely new. It raised vexing questions about the nature of the conflict, the enemy, and the best response, questions made all the more difficult by America’s habitual isolation, and its profound indifference to world events that had set in after the Cold War.

No one appeared more surprised on September 11th, more caught off guard, than President Bush. The look of startled fear on his face neither reflected nor inspired the quiet strength and resolve that he kept asserting as the country’s response. In reaction to his own unreadiness, Bush immediately overreached for an answer. In his memoir, “Decision Points,” Bush describes his thinking as he absorbed the news in the Presidential limousine, on Route 41 in Florida: “The first plane could have been an accident. The second was definitely an attack. The third was a declaration of war.” In the President’s mind, 9/11 was elevated to an act of war by the number of planes. Later that day, at Offutt Air Force Base, in Nebraska, he further refined his interpretation, telling his National Security Council by videoconference, “We are at war against terror.”

Those were fateful words. Defining the enemy by its tactic was a strange conceptual diversion that immediately made the focus too narrow (what about the ideology behind the terror?) and too broad (were we at war with all terrorists and their supporters everywhere?). The President could have said, “We are at war against Al Qaeda,” but he didn’t. Instead, he escalated his rhetoric, in an attempt to overpower any ambiguities. Freedom was at war with fear, he told the country, and he would not rest until the final victory. In short, the new world of 2001 looked very much like the bygone worlds of 1861 and 1941. The President took inspiration from a painting, in the White House Treaty Room, depicting Lincoln on board a steamship with Generals Grant and Sherman: it reminded Bush of Lincoln’s “clarity of purpose.” The size of the undertaking seemed to give Bush a new comfort. His entire sense of the job came to depend on being a war President.

What were the American people to do in this vast new war? In his address to Congress on September 20, 2001—the speech that gave his most eloquent account of the meaning of September 11th—the President told Americans to live their lives, hug their children, uphold their values, participate in the economy, and pray for the victims. These quiet continuities were supposed to be reassuring, but instead they revealed the unreality that lay beneath his call to arms. Wasn’t there anything else? Should Americans enlist in the armed forces, join the foreign service, pay more taxes, do volunteer work, study foreign languages, travel to Muslim countries? No—just go on using their credit cards. Bush’s Presidency would emulate Woodrow Wilson’s and Warren G. Harding’s simultaneously. Never was the mismatch between the idea of the war and the war itself more apparent. Everything had changed, Bush announced, but not to worry—nothing would change.

When Bush met with congressional leaders after the attacks, Senator Tom Daschle, the South Dakota Democrat, cautioned against the implications of the word “war.” “I disagreed,” Bush later wrote. “If four coordinated attacks by a terrorist network that had pledged to kill as many Americans as possible was not an act of war, then what was it? A breach of diplomatic protocol?” Rather than answering with an argument, Bush took a shot at Daschle’s judgment and, by implication, his manhood. Soon after the attacks, William Bennett, the conservative former Education Secretary, published a short book called “Why We Fight: Moral Clarity and the War on Terrorism.” The title suggested that anyone experiencing anything short of total clarity was suspect.

From the start, important avenues of inquiry were marked with warning signs by the Administration. Those who ventured down them would pay a price. The conversation that a mature democracy should have held never happened, because this was no longer a mature democracy.

by George Packer, New Yorker |  Read more:
Illustration: Guy Billout