Wednesday, September 18, 2013

Sea Change


[ed. If you read anything this week, read this.]

Imagine every person on Earth tossing a hunk of CO2 as heavy as a bowling ball into the sea. That’s what we do to the oceans every day.

Burning fossil fuels, such as coal, oil and natural gas, belches carbon dioxide into the air. But a quarter of that CO2 then gets absorbed by the seas — eight pounds per person per day, about 20 trillion pounds a year.

Scientists once considered that entirely good news, since it removed CO2 from the sky. Some even proposed piping more emissions to the sea.

But all that CO2 is changing the chemistry of the ocean faster than at any time in human history. Now the phenomenon known as ocean acidification — the lesser-known twin of climate change — is helping push the seas toward a great unraveling that threatens to scramble marine life on a scale almost too big to fathom, and far faster than first expected.

Here’s why: When CO2 mixes with water it takes on a corrosive power that erodes some animals’ shells or skeletons. It lowers the pH, making oceans more acidic and sour, and robs the water of ingredients animals use to grow shells in the first place.

Acidification wasn’t supposed to start doing its damage until much later this century.

Instead, changing sea chemistry already has killed billions of oysters along the Washington coast and at a hatchery that draws water from Hood Canal. It’s helping destroy mussels on some Northwest shores. It is a suspect in the softening of clam shells and in the death of baby scallops. It is dissolving a tiny plankton species eaten by many ocean creatures, from auklets and puffins to fish and whales — and that had not been expected for another 25 years.

And this is just the beginning.

by Craig Welch, Seattle Times |  Read more:
Image: Steve Ringman

Anglo American Withdraws from Pebble Mine

[ed. Sounds like the end might be near, but big projects like this never really seem to die (in Alaska, anyway); they just cycle through a couple of generations (or less) and reappear in new packaging. For additional background see: Gold Fish]

Anglo American, one of the key backers of the controversial Pebble mine in Alaska's Bristol Bay region, announced Monday that it is withdrawing from the Pebble Partnership -- and will take a $300 million hit for doing so. The London-based Anglo American has a 50 percent share of the Pebble venture, with Northern Dynasty Minerals out of Vancouver, Canada controlling the other half. The company said that Northern Dynasty will assume sole responsibility for the project.

In a statement, Anglo American CEO Mark Cutifani said that the company was seeking other investment opportunities.

"Despite our belief that Pebble is a deposit of rare magnitude and quality, we have taken the decision to withdraw following a thorough assessment of Anglo American’s extensive pipeline of long-dated project options," Cutifani said. "Our focus has been to prioritize capital to projects with the highest value and lowest risks within our portfolio, and reduce the capital required to sustain such projects during the pre-approval phases of development as part of a more effective, value-driven capital allocation model."

John Shively, CEO of the Pebble Partnership, insisted that reports of Pebble's death are premature. “Obviously we’re disappointed, but we still have a great project,” he said. “Anglo American was reviewing all of their assets. When they got to us, we didn’t make the cut,” he said.

Shively, who learned of the pullout this weekend in phone calls from the owner companies, said he expects that Northern Dynasty will decide in the next two or three weeks what its next steps should be. He said the “partnership has to be unraveled,” and Northern Dynasty has to consider its options.

Pebble has received intense scrutiny during the exploratory phase of the project. Critics say the mine's proposed location could present a risk to the Bristol Bay watershed and salmon fishery, one of the most lucrative fisheries in the world. Supporters have accused the Environmental Protection Agency of playing politics with the project after the EPA released an assessment of the potential impacts of a large open-pit mine on Bristol Bay fisheries last year. That report said that even barring a major mishap, damage to salmon runs was a likely side effect of mine development.

Meanwhile, the Pebble Mine prospect is also a high-value proposition: Northern Dynasty estimates that the proposed mining area could contain as much as 81 billion pounds of copper, 5.6 billion pounds of molybdenum and 107 million ounces of gold. Estimates have put the value of the resources at up to $300 billion.

by Ben Anderson, Alaska Dispatch |  Read more:
Image: EPA

Bascove, Pershing Square Bridge, 1993, oil on canvas, 26" x 42". Collection of the Museum of the City of New York.

Tuesday, September 17, 2013

The Plot to Kill Obamacare

The Republican party has voted unanimously against establishing the Affordable Care Act in the Senate and then in the House of Representatives, then voted some 40 times to repeal or cripple it; it has mounted a nearly successful campaign to nullify it through the courts and a failed presidential campaign that promised to repeal it; and it has used its control of state governments to block the law’s implementation across vast swaths of the country, at enormous economic cost to those states. Yet somehow, in the wake of all this, the party is consumed with the question Have we done enough to stop Obamacare?

This peculiar subject of introspection, as if Joe Francis were lying awake at night cursing himself for his prudery, reflects the deepening mix of terror and rage with which conservatives await the enrollment of millions of uninsured Americans beginning in October. On the substantive merits of the law, only the subtlest variations can be detected in the GOP’s evaluation. Mitch McConnell calls it the “single worst piece of legislation passed in the last 50 years in the country.” Representative John Fleming of Louisiana calls it “the most dangerous piece of legislation ever passed by a Congress” and “the most existential threat to our economy … since the Great Depression.” Virginia gubernatorial candidate Ken Cuccinelli harks back to the Fugitive Slave Acts for a comparative affront to liberty.

Having achieved near consensus on the policy, the party has fallen into intramural squabbling over which extraordinary threats to deploy. Shut down the government? Default on the national debt? (House leaders have wriggled out of demands to do the former by promising to do the latter.) Conservative activists have turned on their leaders as traitors for hesitating to employ the most obviously suicidal methods, affixing John Boehner’s name to the hated program (“Boehnercare”) or accusing McConnell of “empty rhetoric … about ending Obamacare.” These recriminations reprise the hallucinatory attacks by Cold War conservatives like Joe McCarthy and the John Birch Society, which over time migrated from their original targets onto such figures as President Eisenhower and the Army.

The historical echo is fitting in the sense that Obamacare has come to fill the place in the conservative psyche once occupied by communism and later by taxes: the main point of doctrinal agreement. (In constituent meetings, “this is the overriding issue that is being discussed,” one Republican member of Congress explained late last month. “Way more than immigration, way more than the debt.”) The transformation of Obamacare from a close relative of Republicans’ own health-care ideas to the locus of evil in modern life is owing to several things, including the almost tautological political fact that its success would be Obama’s: Permanent health-care reform would define Obama as a Reaganesque transformative figure, rather than the failure conservatives still hope him to be remembered as. The law’s slow rollout has made it a live issue, unlike the already-expired stimulus, and thus the main receptacle for simmering concerns over unemployment and the tepid economic recovery.

Most important, the law has, in its direct impact, opened a fissure over the role of government deeper than any since the New Deal. Obamacare threatens America’s unique status among advanced economies as a country where access to regular medical care is a privilege that generally must be earned. In a few weeks, the United States government, like those of France, or Australia, or Israel, will begin to regard health insurance as something to be handed out to one and all, however poor, lazy, or otherwise undeserving each recipient may be. “We can’t afford everything we do now, let alone provide free medical care to able-bodied adults,” as Missouri Republican Rob Schaaf, author of the state’s harsh anti-Obamacare initiative, put it. “I have a philosophical problem with doing that.”

The Obamacare wars have progressed from the legislative to the judicial to the electoral fronts, gaining intensity at every step. Now they move to a new battleground to secure the law and all it represents, or provoke its collapse. That an implementation battle is taking place at all is a highly unusual circumstance. Major new laws often stagger into operation with glitches, confusion, and hasty revisions, but not sabotage. Obamacare will come online in the midst of an unprecedented quasi-campaign atmosphere, with Republicans waging a desperate political and cultural war to destroy it.

by Jonathan Chait, New York Magazine |  Read more:
Image: Kristian Hammerstad

A Good Angle Is Hard to Find

About 10 years ago, I was driving along the Pacific Coast Highway, one of the most glorious stretches of asphalt in the country, when I decided to take a picture of myself. I had a Polaroid camera then, which I carried in the front seat of my Honda Accord along with a high-zoom Nikon bought at a specialty store for more money than I’d ever dropped in one place. I must have looked bizarre, pulling my sedan to the shoulder of the road and perching in the wildflowers with that big, boxy plastic eye held in front of me, as picture after picture spit out like an angry tongue. Polaroids are expensive to screw up, by the way. About a buck a misfire.

But I am a short girl, and a vain one, and I could never get my arms far enough away to find a flattering angle. Pictures I took of myself were often blasted with flash, or marred by the funny mistakes of a photo aimed blind: Here is your left eyeball. Behold, your forehead. Still, it was worth all the effort to have some souvenir of the moment—an instant image!—which I could tuck into an envelope and slide into the trusty rabbit tunnel that was the U.S. Postal Service, where it would wind 1500 miles back to my parents’ place in Dallas and find a new home underneath a magnet on the kitchen fridge.

The word “selfies” didn’t exist then. It would take at least another three years—and the advent of digital cameras—for the word to become necessary. In 2005, Jim Krause used the term in his manual “Photo Idea Index” to describe the kind of on-the-fly snapshots he and his friends were taking, unburdened by the cost and labor of traditional film processing. The “selfies” tag grew on Flickr, and later flourished on social media sites, where #selfies and #me became an ever-trending topic. In 2010, the iPhone introduced its flip-camera feature, allowing users to see and frame a shot of themselves. In the selfie origin story, this was the eureka moment.

These days, the sight of someone pulling over to the side of the road—or standing at a bar, or flashing a peace sign in front of a building, or waiting at the drive-thru in the front seat of the car—and taking a picture of themselves is not bizarre at all. We live in the endlessly documented moment, and the arm outstretched with that small, omnipotent rectangle held aloft is one of the defining postures of our time. We’ve had selfie scandals, from Weiner’s weiner to Amanda Bynes’ meltdown. We’ve had a million billion cautionary tales about sending erotic selfies, though it doesn’t seem to stop anyone. Criminals take selfies and so do cops. The presidential selfie surely could not be far behind. (On this, Hillary was first.)

But people are also worried about the selfie. Well, worried and irritated. Several trend stories have pondered the psychological damage to a generation that would rather take a picture of their life than actually live it. A recent study found that posting too many selfies annoys people (for this, they needed science?). Last month, the word made its way into the Oxford Dictionaries Online, but it has also become something of a smear, another tacky emblem of a culture that has directed all possible spotlights toward its own sucked-in cheeks. "Are you going to take a selfie?" a friend asked with mock derision when I pulled out my phone at dinner to check the time. And it was clearly a joke, but I wasn't sure if he was making fun of people who do such things, or the fact that I was one of them.

by Sarah Hepola, TMN |  Read more:
Image: Danielle Julian Norton, Everything is Fine, 2012. Image credit Shannon Benine.

Thinking Out Loud

Every day, we collectively produce millions of books’ worth of writing. Globally we send 154.6 billion emails, more than 400 million tweets, and over 1 million blog posts and around 2 million blog comments on WordPress. On Facebook, we post about 16 billion words. Altogether, we compose some 52 trillion words every day on email and social media — the equivalent of 520 million books. (The entire US Library of Congress, by comparison, holds around 23 million books.)

And what makes this explosion truly remarkable is what came before: comparatively little. Before the Internet, most people rarely wrote for pleasure or intellectual satisfaction after graduating from high school or college.

Is any of this writing any good? Certainly, measured against the prose of an Austen, Orwell, or Tolstoy, the majority of online publishing pales. This isn’t surprising. The science fiction writer Theodore Sturgeon famously said something like, “Ninety percent of everything is crap,” a formulation that geeks now refer to as Sturgeon’s Law. Anyone who has spent time slogging through the swamp of books, journalism, TV, and movies knows that this holds pretty well even for edited and curated culture. So a global eruption of unedited, everyday self-expression is even more likely to produce this 90-10 split — an ocean of dreck, dotted sporadically by islands of genius.

But focusing on the individual writers and thinkers misses the point. The fact that so many of us are writing — sharing our ideas, good and bad, for the world to see — has changed the way we think. Just as we now live in public, so do we think in public. And that is accelerating the creation of new ideas and the advancement of global knowledge.

Literacy in North America has historically been focused mainly on reading, not writing; consumption, not production. While many parents worked hard to ensure their children were regular readers, they rarely pushed them to become regular writers. But according to Deborah Brandt, a scholar who has researched American literacy in the 20th and 21st centuries, the advent of digital communications has helped change that notion.

We are now a global culture of avid writers, one almost always writing for an audience. When you write something online—whether it’s a one-sentence status update, a comment on someone’s photo, or a thousand-word post—you’re doing it with the expectation that someone might read it, even if you’re doing it anonymously.

Having an audience can clarify thinking. It’s easy to win an argument inside your head. But when you face a real audience, you have to be truly convincing. (...)

Interestingly, the audience effect doesn’t necessarily require a big audience. This seems particularly true online.

Many people have told me that they feel the dynamic kick in with even a tiny handful of viewers. I’d argue that the cognitive shift in going from an audience of zero (talking to yourself) to an audience of 10 (a few friends or random strangers checking out your online post) is so big that it’s actually huger than going from 10 people to a million. [ed. I would agree with this.]

This is something that traditional thinkers of the pre-Internet age—particularly print and broadcast journalists — have trouble grasping. For them, an audience doesn’t mean anything unless it’s massive. If you’re writing specifically to make money, you need to draw a large crowd. This is part of the thinking that causes traditional media executives to scoff at the spectacle of the “guy sitting in his living room in his pajamas writing what he thinks.” But for the rest of the people in the world, who probably never did much nonwork writing in the first place—and who almost never did it for an audience—even a handful of readers can have a vertiginous, catalytic impact.

by Clive Thompson, Wired |  Read more:
Image: Simon C. Page

Gordon Parks, Frustrated, Chicago, IL, 1957
via:

Julian Opie, I dreamt I was driving my car (motorway corner), 2002.
via:

Go Ask Alice

One pill makes you larger
And one pill makes you small
And the ones that Mother gives you
Don’t do anything at all
Go ask Alice, when she’s ten feet tall

— Jefferson Airplane, “White Rabbit”

“Life’s a box of chocolates, Forrest. You never know what you’re gonna get.”

— Forrest Gump



Well, Children, it’s silly season again. Yes, that’s right: Twitter just filed an initial registration statement (or S-1) for its long-awaited initial public offering. Confidentially. And commemorated it with a tweet on its own social media platform, of course:

Tools.

* * *

This of course means every numbnuts and his dog are currently crawling out of the woodwork and regaling us with their carefully considered twaffle about what Twitter is doing, what it should do, and how much money we’re all going to make buying and selling Twitter’s IPO shares when and if they ever come to market. A particularly amusing sub-genre of said twaffle consists of various pundits of varying credibility and credulousness pontificating on what Twitter is actually worth, as if that is a concrete piece of information embedded in the wave function of quantum mechanics or the cosmic background radiation, rather than a market consensus which does not exist yet because, well, there is no public market for Twitter’s shares.

But there seems to be something about IPOs that renders even the most gimlet-eyed, levelheaded market observers (like Joe Nocera, John Hempton, and... well, just those two) a little goofy and soft in the head. Perhaps they just can’t understand why such an obvious and persistent arbitrage anomaly as the standard 10 to 15% IPO discount on newly public shares—which everybody seems to know about even though they can’t explain it—persists as it does. Or why, given how many simoleons the evil Svengalis of Wall Street get paid to underwrite IPOs, there are so many offerings that end up trading substantially higher (e.g., LinkedIn) or substantially lower (e.g., Facebook) than the offer price they set once shares are released for trading.

So, out of the bottomless goodness of my heart—and a heartfelt wish to nip some of the more ludicrous twitterpating I expect from the assembled financial media and punditry in the bud—I will share here in clear and simple terms some of the explanations I have offered in the past.

by The Epicurean Dealmaker |  Read more:
Image: uncredited

Monday, September 16, 2013


Andrey Malykh, My Room. Digital (Procreate, iPad). 2012-2013.

Rudolf Dischinger (German, 1904-1988), Gramophone, 1930. Oil on plywood, 79 x 64 cm. Museum für Neue Kunst, Freiburg.
via:

Storytelling Ads May Be Journalism’s New Peril

When the guy who ruined the Internet with banner ads tells you that a new kind of advertising might destroy journalism, it tends to get your attention.

That’s not entirely fair. Joe McCambley, founder of The Wonderfactory, a digital design firm, helped build the first banner ad back in 1994. It was a much-maligned innovation that grew like kudzu until it had all but overwhelmed the consumer Web, defining its look and economics for years to come.

Now the new rage is “native advertising,” which is to say advertising wearing the uniform of journalism, mimicking the storytelling aesthetic of the host site. Buzzfeed, Forbes, The Atlantic and, more recently, The New Yorker, have all developed a version of native advertising, also known as sponsored content; if you are on Buzzfeed, World of Warcraft might have a sponsored post on, say, 10 reasons your virtual friends are better than your real ones.

It is usually labeled advertising (sometimes clearly, sometimes not), but if the content is appealing, marketers can gain attention and engagement beyond what they might get for, say, oh, a banner ad.

Mr. McCambley is wary. He says he thinks native advertising can provide value to both reader and advertiser when properly executed, but he worries that much of the current crop of these ads is doing damage to the contract between consumer and media organizations.

“I completely understand the value of native advertising,” Mr. McCambley said, “but there are a number of publishers who are allowing P.R. firms and advertising agencies direct access to their content management systems and allowing them to publish directly to the site. I think that is a huge mistake.

“It is a very slippery slope and could kill journalism if publishers aren’t careful,” he said.

He’s right. Publishers might build a revenue ledge through innovation of the advertising format, but the confusion that makes it work often diminishes the host publication’s credibility.

Of course, some publishers have already gone flying off the edge, most notoriously The Atlantic, which in January allowed Scientology to create a post that was of a piece with the rest of the editorial content on its site, even if it was differently labeled. They got clobbered, in part because handing the keys to the car to a controversial religion with a reputation for going after journalists was dumb.

“You are gambling with the contract you have with your readers,” Mr. McCambley said. “How do I know who made the content I am looking at and what the value of the information is?”

by David Carr, NY Times |  Read more:
Image: Yana Paskova for The New York Times

What is Better - A Happy Life or a Meaningful One?


Parents often say: ‘I just want my children to be happy.’ It is unusual to hear: ‘I just want my children’s lives to be meaningful,’ yet that’s what most of us seem to want for ourselves. We fear meaninglessness. We fret about the ‘nihilism’ of this or that aspect of our culture. When we lose a sense of meaning, we get depressed. What is this thing we call meaning, and why might we need it so badly?

Let’s start with the last question. To be sure, happiness and meaningfulness frequently overlap. Perhaps some degree of meaning is a prerequisite for happiness, a necessary but insufficient condition. If that were the case, people might pursue meaning for purely instrumental reasons, as a step on the road towards happiness. But then, is there any reason to want meaning for its own sake? And if there isn’t, why would people ever choose lives that are more meaningful than happy, as they sometimes do?

The difference between meaningfulness and happiness was the focus of an investigation I worked on with my fellow social psychologists Kathleen Vohs, Jennifer Aaker and Emily Garbinsky, published in the Journal of Positive Psychology this August. We carried out a survey of nearly 400 US citizens, ranging in age from 18 to 78. The survey posed questions about the extent to which people thought their lives were happy and the extent to which they thought they were meaningful. We did not supply a definition of happiness or meaning, so our subjects responded using their own understanding of those words. By asking a large number of other questions, we were able to see which factors went with happiness and which went with meaningfulness.

As you might expect, the two states turned out to overlap substantially. Almost half of the variation in meaningfulness was explained by happiness, and vice versa. Nevertheless, using statistical controls we were able to tease the two apart, isolating the ‘pure’ effects of each one that were not based on the other. We narrowed our search to look for factors that had opposite effects on happiness and meaning, or at least, factors that had a positive correlation with one and not even a hint of a positive correlation with the other (negative or zero correlations were fine). Using this method, we found five sets of major differences between happiness and meaningfulness, five areas where different versions of the good life parted company.

The first had to do with getting what you want and need. Not surprisingly, satisfaction of desires was a reliable source of happiness. But it had nothing — maybe even less than nothing — to add to a sense of meaning. People are happier to the extent that they find their lives easy rather than difficult. Happy people say they have enough money to buy the things they want and the things they need. Good health is a factor that contributes to happiness but not to meaningfulness. Healthy people are happier than sick people, but the lives of sick people do not lack meaning. The more often people feel good — a feeling that can arise from getting what one wants or needs — the happier they are. The less often they feel bad, the happier they are. But the frequency of good and bad feelings turns out to be irrelevant to meaning, which can flourish even in very forbidding conditions.

The second set of differences involved time frame. Meaning and happiness are apparently experienced quite differently in time. Happiness is about the present; meaning is about the future, or, more precisely, about linking past, present and future. The more time people spent thinking about the future or the past, the more meaningful, and less happy, their lives were. Time spent imagining the future was linked especially strongly to higher meaningfulness and lower happiness (as was worry, which I’ll come to later). Conversely, the more time people spent thinking about the here and now, the happier they were. Misery is often focused on the present, too, but people are happy more often than they are miserable. If you want to maximise your happiness, it looks like good advice to focus on the present, especially if your needs are being satisfied. Meaning, on the other hand, seems to come from assembling past, present and future into some kind of coherent story.

by Roy F. Baumeister, Aeon |  Read more:
Image: Leonard Freed/Magnum

Zen in the Art of Citizen Science

When it comes to online participation in collective endeavors, 99% of us typically take a free ride.

From Wikipedia and YouTube to simple forum discussions, there is a persistent pattern known as the 90-9-1 principle. This means, for example, that of Wikipedia users, 90% only view content, 9% edit existing content, and 1% actually create new content. Inequity in effort, of vastly more people accessing collective information than contributing to it, is a persistent feature of online engagement.

Large-scale citizen-science projects, in which ordinary people assist in genuine scientific research, may not be exempt from the 1% rule of thumb even when facilitated by the Internet. Despite the promise of app and web development to assist citizen scientists in data submission, “build it and they will come” approaches fail because not enough people contribute to make such projects useful. Are there examples of online citizen-science projects that succeed on a big scale despite unequal participation? If so, how?

For answers, let’s take a look at eBird, a free, online citizen-science project run by the Cornell Lab of Ornithology. eBird began in 2002 and quickly became a global network within which bird watchers contribute their bird observations to a central database. Over 2.5 million people have engaged with eBird. Of those, 150,000 have submitted data (6%) and 25,000 (1%) have submitted 99% of data. The 1% includes the world’s best birders as well as less skilled but highly dedicated backyard bird watchers. For everyone else, eBird is free information, and there is lots of it.

Is eBird successful?

eBird is successful scientifically. Since 2006, eBird has grown 40% every year, which makes it one of the fastest-growing biodiversity datasets in existence. It has amassed over 140 million bird observations, with observations from every country on the planet. Researchers have written over 90 peer-reviewed publications using eBird.

eBird is successful for conservation. The last two State of the Birds reports, which relied on eBird data to examine species occurrence, habitat types, and land ownership at a level of detail never achieved before, inform decisions of the US Fish & Wildlife Service and the US Forest Service. The Nature Conservancy uses eBird data to identify which rice farmers in the Central Valley of California it should ask to flood their fields at the right time for migrating waterfowl.

eBird is successfully engaging bird watchers. eBird doesn’t ignore the 99% who don’t submit data. The most frequent use of the eBird database is by handheld apps that people use to figure out where to go birdwatching.

Recently I was scheduled to give an opening provocation for a workshop on technology for citizen science at the British Ecological Society meeting in London. A “provocation” is intended to provoke thoughts, emotions, and epiphanies in order to instigate deep discussion, in this case about how to use technology to make citizen science successful. Are more apps for submitting data really the answer? Should we try to break the 1% rule or engage the 99% in other ways? To prepare, I went to Steve Kelling, the head honcho of eBird. In Jack Nicholson style, he can deliver a one-liner that blankets a room in thought, which is then invariably followed by a succession of light-bulb moments of understanding. When I asked Kelling to explain the success of eBird, he sagely said, “When eBird stopped doing citizen science, it got successful.” (score!).

How could eBird succeed at citizen science by not doing citizen science?

Kelling’s counter-intuitive riddle reveals the Zen in the art of citizen science.

by Caren Cooper, Scientific American |  Read more:
Image: Tim Lenz

Two-State Illusion


The last three decades are littered with the carcasses of failed negotiating projects billed as the last chance for peace in Israel. All sides have been wedded to the notion that there must be two states, one Palestinian and one Israeli. For more than 30 years, experts and politicians have warned of a “point of no return.” Secretary of State John Kerry is merely the latest in a long line of well-meaning American diplomats wedded to an idea whose time is now past.

True believers in the two-state solution see absolutely no hope elsewhere. With no alternative in mind, and unwilling or unable to rethink their basic assumptions, they are forced to defend a notion whose success they can no longer sincerely portray as plausible or even possible.

It’s like 1975 all over again, when the Spanish dictator Francisco Franco fell into a coma. The news media began a long death watch, announcing each night that Generalissimo Franco was still not dead. This desperate allegiance to the departed echoes in every speech, policy brief and op-ed about the two-state solution today.

True, some comas miraculously end. Great surprises sometimes happen. The problem is that the changes required to achieve the vision of robust Israeli and Palestinian states living side by side are now considerably less likely than other less familiar but more plausible outcomes that demand high-level attention but aren’t receiving it.

Strong Islamist trends make a fundamentalist Palestine more likely than a small state under a secular government. The disappearance of Israel as a Zionist project, through war, cultural exhaustion or demographic momentum, is at least as plausible as the evacuation of enough of the half-million Israelis living across the 1967 border, or Green Line, to allow a real Palestinian state to exist. While the vision of thriving Israeli and Palestinian states has slipped from the plausible to the barely possible, one mixed state emerging from prolonged and violent struggles over democratic rights is no longer inconceivable. Yet the fantasy that there is a two-state solution keeps everyone from taking action toward something that might work.

All sides have reasons to cling to this illusion. The Palestinian Authority needs its people to believe that progress is being made toward a two-state solution so it can continue to get the economic aid and diplomatic support that subsidize the lifestyles of its leaders, the jobs of tens of thousands of soldiers, spies, police officers and civil servants, and the authority’s prominence in a Palestinian society that views it as corrupt and incompetent.

Israeli governments cling to the two-state notion because it seems to reflect the sentiments of the Jewish Israeli majority and it shields the country from international opprobrium, even as it camouflages relentless efforts to expand Israel’s territory into the West Bank.

American politicians need the two-state slogan to show they are working toward a diplomatic solution, to keep the pro-Israel lobby from turning against them and to disguise their humiliating inability to allow any daylight between Washington and the Israeli government.

Finally, the “peace process” industry — with its legions of consultants, pundits, academics and journalists — needs a steady supply of readers, listeners and funders who are either desperately worried that this latest round of talks will lead to the establishment of a Palestinian state, or that it will not.

Conceived as early as the 1930s, the idea of two states between the Jordan River and the Mediterranean Sea all but disappeared from public consciousness between 1948 and 1967. Between 1967 and 1973 it re-emerged, advanced by a minority of “moderates” in each community. By the 1990s it was embraced by majorities on both sides as not only possible but, during the height of the Oslo peace process, probable. But failures of leadership in the face of tremendous pressures brought Oslo crashing down. These days no one suggests that a negotiated two-state “solution” is probable. The most optimistic insist that, for some brief period, it may still be conceivable.

But many Israelis see the demise of the country as not just possible, but probable. The State of Israel has been established, not its permanence. The most common phrase in Israeli political discourse is some variation of “If X happens (or doesn’t), the state will not survive!” Those who assume that Israel will always exist as a Zionist project should consider how quickly the Soviet, Pahlavi Iranian, apartheid South African, Baathist Iraqi and Yugoslavian states unraveled, and how little warning even sharp-eyed observers had that such transformations were imminent.

In all these cases, presumptions about what was “impossible” helped protect brittle institutions by limiting political imagination. And when objective realities began to diverge dramatically from official common sense, immense pressures accumulated.

by Ian S. Lustick, NY Times |  Read more:
Images: Oded Balilty/Associated Press and Josh Cochran