Tuesday, August 9, 2011
The Endless Summer
by Andrew Cohen
I have never surfed—never even dreamed of surfing or had any inclination to pick up a board—and yet in this summer of our discontent I have become mesmerized by Bruce Brown's timeless documentary The Endless Summer. It is a beautifully shot film from pristine locales chronicling the worldly travels of two dashing surfer dudes in the mid 1960s. Brown's masterpiece has been airing over and over again this summer on ESPN Classic, and it seems to me like a perfect antidote to all the bad news coming out of Washington these days.
Wouldn't we all like to leave everything behind and go out in search of the perfect wave right about now? Wouldn't we like to worry about nothing more than finding the right beach with the right surf and the right water temperature? I'd bet the ranch that President Barack Obama, he of the Hawaiian birthplace, would sign on to that deal if he could. If the movie were food it would be your favorite dish at the local diner. If it were a song it would be the sort people pay to listen to in order to fall asleep. If I were a doctor, I would prescribe it to my patients.
Here's how Brown's people subsequently described what he accomplished nearly 50 years ago:
In 1964, filmmaker Bruce Brown decided to follow two surfers around the world in search of a perfect wave. On a budget of only US$50,000, with a 16mm camera, he captured the essence, the adventure, and the art of surfing. The result was the renowned The Endless Summer. From the waters of West Africa, through the seas of Australia, to Tahiti, two surfers from California achieved their great dream: to try the wildest waves in the world.
The documentary was released in 1966 to surprisingly good reviews from mainstream movie critics. The timing was serendipitous. The technology of filmmaking would not have allowed the film to be made five years earlier. And five years later, in 1971, the sun and fun would have seemed far too frivolous following the race riots, Kent State, and the body bags coming home from Southeast Asia. For these reasons, The Endless Summer seems as much of a period piece as Citizen Kane or Gone With the Wind. Yes, son, there really was a time when the beaches were clear and no one bugged you to put on sunscreen.
The film indeed revels in the absence of anything weighty. There is a single remark by Brown about South Africa's apartheid—he lamely notes that the area's sharks and porpoises segregate themselves in the water. There is a sexist remark about the bathing suits of Australia's female surfers. A few locals here and there are made fun of. And that's about the extent of the film's political message. We don't know what the boys think about anything beyond what they think of the water and the waves and the size of the surf. They aren't characters so much as props.
The film's philosophical message, on the other hand, is front and center: There is art and science in most human endeavors, including the ones that ultimately matter the least to the story of our existence on Earth. The "perfect wave" doesn't exist only in the perfect world these men inhabited during their journey. And yet the surfers were as beautiful and as graceful as the beaches and waves upon which they played. They were as carefree as the fish they saw in the water or the animals they saw on land. No wonder the Beach Boys used the title for their memorable 1974 compilation album (Side 1: "Surfin' Safari," "Surfer Girl," "Catch a Wave," "The Warmth of the Sun," and "Surfin' U.S.A.").
Can the Middle Class be Saved?
by Don Peck
In October 2005, three Citigroup analysts released a report describing the pattern of growth in the U.S. economy. To really understand the future of the economy and the stock market, they wrote, you first needed to recognize that there was “no such animal as the U.S. consumer,” and that concepts such as “average” consumer debt and “average” consumer spending were highly misleading.
In fact, they said, America was composed of two distinct groups: the rich and the rest. And for the purposes of investment decisions, the second group didn’t matter; tracking its spending habits or worrying over its savings rate was a waste of time. All the action in the American economy was at the top: the richest 1 percent of households earned as much each year as the bottom 60 percent put together; they possessed as much wealth as the bottom 90 percent; and with each passing year, a greater share of the nation’s treasure was flowing through their hands and into their pockets. It was this segment of the population, almost exclusively, that held the key to future growth and future returns. The analysts, Ajay Kapur, Niall Macleod, and Narendra Singh, had coined a term for this state of affairs: plutonomy.
In a plutonomy, Kapur and his co-authors wrote, “economic growth is powered by and largely consumed by the wealthy few.” America had been in this state twice before, they noted—during the Gilded Age and the Roaring Twenties. In each case, the concentration of wealth was the result of rapid technological change, global integration, laissez-faire government policy, and “creative financial innovation.” In 2005, the rich were nearing the heights they’d reached in those previous eras, and Citigroup saw no good reason to think that, this time around, they wouldn’t keep on climbing. “The earth is being held up by the muscular arms of its entrepreneur-plutocrats,” the report said. The “great complexity” of a global economy in rapid transformation would be “exploited best by the rich and educated” of our time.
Kapur and his co-authors were wrong in some of their specific predictions about the plutonomy’s ramifications—they argued, for instance, that since spending was dominated by the rich, and since the rich had very healthy balance sheets, the odds of a stock-market downturn were slight, despite the rising indebtedness of the “average” U.S. consumer. And their division of America into only two classes is ultimately too simple. Nonetheless, their overall characterization of the economy remains resonant. According to Gallup, from May 2009 to May 2011, daily consumer spending rose by 16 percent among Americans earning more than $90,000 a year; among all other Americans, spending was completely flat. The consumer recovery, such as it is, appears to be driven by the affluent, not by the masses. Three years after the crash of 2008, the rich and well educated are putting the recession behind them. The rest of America is stuck in neutral or reverse.
The ease with which the rich and well educated have shrugged off the recession shouldn’t be surprising; strong winds have been at their backs for many years. The recession, meanwhile, has restrained wage growth and enabled faster restructuring and offshoring, leaving many corporations with lower production costs and higher profits—and their executives with higher pay.
“The rich seem to be on the road to recovery,” says Emmanuel Saez, an economist at Berkeley, while those in the middle, especially those who’ve lost their jobs, “might be permanently hit.” Coming out of the deep recession of the early 1980s, Saez notes, “you saw an increase in inequality … as the rich bounced back, and unionized labor never again found jobs that paid as well as the ones they’d had. And now I fear we’re going to see the same phenomenon, but more dramatic.” Middle-paying jobs in the U.S., in which some workers have been overpaid relative to the cost of labor overseas or technological substitution, “are being wiped out. And what will be left is a hard and a pure market,” with the many paid less than before, and the few paid even better—a plutonomy strengthened in the crucible of the post-crash years.
Nickel and Dimed (2011 Version)
It was at lunch with the editor of Harper’s Magazine that the subject came up: How does anyone actually live “on the wages available to the unskilled”? And then Barbara Ehrenreich said something that altered her life and resulted, improbably enough, in a bestselling book with almost two million copies in print. “Someone,” she commented, “ought to do the old-fashioned kind of journalism -- you know, go out there and try it for themselves.” She meant, she hastened to point out on that book’s first page, “someone much younger than myself, some hungry neophyte journalist with time on her hands.”
That was 1998 and, somewhat to her surprise, Ehrenreich soon found herself beginning the first of a whirl of unskilled “careers” as a waitress at a “family restaurant” attached to a big discount chain hotel in Key West, Florida, at $2.43 an hour plus tips. And the rest, of course, is history. The now famous book that resulted, Nickel and Dimed: On (Not) Getting By in America, is just out in its tenth anniversary edition with a new afterword by Ehrenreich -- perfectly timed for an American era in which the book’s subtitle might have to be changed to “On (Not) Getting a Job in America.” TomDispatch takes special pride in offering Ehrenreich’s new afterword, adapted and shortened, for a book that, in its latest edition, deserves to sell another million copies.
On Turning Poverty into an American Crime
By Barbara Ehrenreich
I completed the manuscript for Nickel and Dimed in a time of seemingly boundless prosperity. Technology innovators and venture capitalists were acquiring sudden fortunes, buying up McMansions like the ones I had cleaned in Maine and much larger. Even secretaries in some hi-tech firms were striking it rich with their stock options. There was loose talk about a permanent conquest of the business cycle, and a sassy new spirit infecting American capitalism. In San Francisco, a billboard for an e-trading firm proclaimed, “Make love not war,” and then -- down at the bottom -- “Screw it, just make money.”
When Nickel and Dimed was published in May 2001, cracks were appearing in the dot-com bubble and the stock market had begun to falter, but the book still evidently came as a surprise, even a revelation, to many. Again and again, in that first year or two after publication, people came up to me and opened with the words, “I never thought...” or “I hadn’t realized...”
To my own amazement, Nickel and Dimed quickly ascended to the bestseller list and began winning awards. Criticisms, too, have accumulated over the years. But for the most part, the book has been far better received than I could have imagined it would be, with an impact extending well into the more comfortable classes. A Florida woman wrote to tell me that, before reading it, she’d always been annoyed at the poor for what she saw as their self-inflicted obesity. Now she understood that a healthy diet wasn’t always an option. And if I had a quarter for every person who’s told me he or she now tipped more generously, I would be able to start my own foundation.
Even more gratifying to me, the book has been widely read among low-wage workers. In the last few years, hundreds of people have written to tell me their stories: the mother of a newborn infant whose electricity had just been turned off, the woman who had just been given a diagnosis of cancer and has no health insurance, the newly homeless man who writes from a library computer.
At the time I wrote Nickel and Dimed, I wasn’t sure how many people it directly applied to -- only that the official definition of poverty was way off the mark, since it defined an individual earning $7 an hour, as I did on average, as well out of poverty. But three months after the book was published, the Economic Policy Institute in Washington, D.C., issued a report entitled “Hardships in America: The Real Story of Working Families,” which found an astounding 29% of American families living in what could be more reasonably defined as poverty, meaning that they earned less than a barebones budget covering housing, child care, health care, food, transportation, and taxes -- though not, it should be noted, any entertainment, meals out, cable TV, Internet service, vacations, or holiday gifts. Twenty-nine percent is a minority, but not a reassuringly small one, and other studies in the early 2000s came up with similar figures.
The big question, 10 years later, is whether things have improved or worsened for those in the bottom third of the income distribution, the people who clean hotel rooms, work in warehouses, wash dishes in restaurants, care for the very young and very old, and keep the shelves stocked in our stores. The short answer is that things have gotten much worse, especially since the economic downturn that began in 2008.
Post-Meltdown Poverty
When you read about the hardships I found people enduring while I was researching my book -- the skipped meals, the lack of medical care, the occasional need to sleep in cars or vans -- you should bear in mind that those occurred in the best of times. The economy was growing, and jobs, if poorly paid, were at least plentiful.
In 2000, I had been able to walk into a number of jobs pretty much off the street. Less than a decade later, many of these jobs had disappeared and there was stiff competition for those that remained. It would have been impossible to repeat my Nickel and Dimed “experiment,” had I been so inclined, because I would probably never have found a job.
For the last couple of years, I have attempted to find out what was happening to the working poor in a declining economy -- this time using conventional reporting techniques like interviewing. I started with my own extended family, which includes plenty of people without jobs or health insurance, and moved on to trying to track down a couple of the people I had met while working on Nickel and Dimed.
This wasn’t easy, because most of the addresses and phone numbers I had taken away with me had proved to be inoperative within a few months, probably due to moves and suspensions of telephone service. Over the years I had kept in touch with “Melissa,” who was still working at Wal-Mart, where her wages had risen from $7 to $10 an hour, but in the meantime her husband had lost his job. “Caroline,” now in her 50s and partly disabled by diabetes and heart disease, had left her deadbeat husband and was subsisting on occasional cleaning and catering jobs. Neither seemed unduly afflicted by the recession, but only because they had already been living in what amounts to a permanent economic depression.
Monday, August 8, 2011
Life
"When my husband died, because he was so famous and known for not being a believer, many people would come up to me-it still sometimes happens-and ask me if Carl changed at the end and converted to a belief in an afterlife. They also frequently ask me if I think I will see him again.
Carl faced his death with unflagging courage and never sought refuge in illusions. The tragedy was that we knew we would never see each other again. I don't ever expect to be reunited with Carl. But, the great thing is that when we were together, for nearly twenty years, we lived with a vivid appreciation of how brief and precious life is. We never trivialized the meaning of death by pretending it was anything other than a final parting. Every single moment that we were alive and we were together was miraculous - not miraculous in the sense of inexplicable or supernatural. We knew we were beneficiaries of chance. . . . That pure chance could be so generous and so kind. . . . That we could find each other, as Carl wrote so beautifully in Cosmos, you know, in the vastness of space and the immensity of time. . . . That we could be together for twenty years. That is something which sustains me and it’s much more meaningful. . . .
The way he treated me and the way I treated him, the way we took care of each other and our family, while he lived. That is so much more important than the idea I will see him someday. I don't think I'll ever see Carl again. But I saw him. We saw each other. We found each other in the cosmos, and that was wonderful."
(Ann Druyan talking about her husband, Carl Sagan)
On Bristol Bay
by John Davidson
About half the world’s supply of wild salmon comes from a system of rivers, lakes, and streams in western Alaska that empties into Bristol Bay, a relatively shallow body of water roughly 250 miles long and 180 miles wide. Every summer, 40 million sockeye salmon enter the bay in schools of hundreds of thousands and mill in the estuaries of half a dozen large rivers. In the span of about four weeks in June and July, the salmon move into the mouths of these rivers, slowly at first and then, as if responding to an invisible cue, all at once.
From the deck of a drift gillnetter on the flood tide of a clear afternoon in late June, you can look out across the water and see this happening. Four or five salmon will jump or roll simultaneously, and when you turn and scan the water you see that it’s not just a pocket here and there—salmon are jumping and splashing all around the boat, and you realize you’re sitting on perhaps half a million fish that have begun to make a push for the river.

The first time I realized this, it was terrifying. Our net spooled off the stern into the water and came alive with salmon, a quarter-mile of corkline and mesh writhing and splashing. As I watched the net sink (which nets are not supposed to do) it occurred to me that there were more fish moving under us than the entire fleet could possibly catch and that if we didn’t start bringing our gear in right away, we would be in danger of sinking. Over the next four hours we hauled 16,000 pounds of salmon on board a thirty-two-foot boat, plugging the holds and bringing the waterline up to the scuppers. Later that evening, after off-loading the day’s catch, we caught another 8,000 pounds, and by the time I collapsed in my bunk early the next morning I had realized something else about Bristol Bay: it is abundant by nature, but overabundant by design.
The single largest source of wild salmon on the planet is also the world’s best-managed fishery. Bristol Bay has been carefully built up over the last forty years to become a $2 billion commercial and sport fishing industry that employs tens of thousands of people and makes up the bulk of the economy in southwest Alaska. Despite the negative feelings people often associate with commercial fishing, Bristol Bay is the epitome of a sustainable, renewable natural resource; absent a cataclysmic event or an environmental disaster, its salmon runs will keep coming in 40 million strong every summer in perpetuity.
But an environmental disaster is already in the works. About ninety miles inland, underneath river drainages and salmon streams that form a substantial part of the bay’s watershed, sits the single largest deposit of gold on the planet, the second largest deposit of copper, and a decent haul of silver and molybdenum ore. Pebble Mine, as it’s called, is thought to be worth more than $400 billion—enough to change global minerals markets and, for the mining companies that own the rights, to justify spending about $5 billion to build and operate a massive open pit mine in the middle of the Alaskan wilderness.
The mineable body of ore at Pebble is thirty times larger than the largest mine in Alaska. If built, the mine itself would be two miles long, a mile and a half wide, and about 1,700 feet deep. It would require the construction of more than 100 miles of roads and bridges, long-distance power transmission lines, pipelines for process water, pipelines for fuel, and a tailings dam 450 feet tall to contain the billions of tons of toxic mining residue the mine would produce. Any accident or earthquake (Pebble Mine sits on a fault line) would pollute Bristol Bay’s freshwater tributaries and wetlands with acid mine runoff, heavy metals, and process chemicals. Salmon spawning beds that drain into the Nushagak and Kvichak river systems, two of the bay’s largest, would be decimated by concentrated pollution of this kind. Even under what the mining industry considers to be “normal circumstances,” pollution of the streams and rivers near the mine is all but certain, as evidenced by the water pollution at open pit mines like Bingham Canyon in Utah, which is comparable to what’s proposed for Pebble Mine.
Read more:
It's a Bird! It's a Plane! It's...Some Dude?
They are ordinary men in extraordinary costumes, and they have risen from the ashes of our troubled republic to ensure the safety of their fellow citizens. Jon Ronson goes on patrol with Urban Avenger, Mr. Xtreme, Pitch Black, Knight Owl, Ghost, and the baddest-ass "real-life superhero" of them all, Phoenix Jones
by Jon Ronson
"Hospital?" I said. "Is he okay?"
"I don't know," said Peter. He sounded worried. "The thing you have to remember about Phoenix is that he's not impervious to pain." He paused. "You should get a taxi straight from the airport to there."
Phoenix didn't know this when he first donned the suit about a year ago, but he's one of around 200 real-life superheroes currently patrolling America's streets, looking for wrongs to right. There's DC's Guardian, in Washington, who wears a full-body stars-and-stripes outfit and wanders the troubled areas behind the Capitol building. There's RazorHawk, from Minneapolis, who was a pro wrestler for fifteen years before joining the RLSH movement. There's New York City's Dark Guardian, who specializes in chasing pot dealers out of Washington Square Park by creeping up to them, shining a light in their eyes, and yelling, "This is a drug-free park!" And there are dozens and dozens more. Few, if any, are as daring as Phoenix. Most undertake basically safe community work: helping the homeless, telling kids to stay off drugs, etc. They're regular men with jobs and families and responsibilities who somehow have enough energy at the end of the day to journey into America's neediest neighborhoods to do what they can.
Every superhero has his origin story, and as we drive from the hospital to his apartment, Phoenix tells me his. His life, he says, hasn't been a breeze. He lived for a time in a Texas orphanage, was adopted by a Seattle family around age 9, and now spends his days working with autistic kids. One night last summer, someone broke into his car. There was shattered glass on the floor, and his stepson gashed his knee on it.
"I got tired of people doing things that are morally questionable," he says. "Everyone's afraid. It just takes one person to say, 'I'm not afraid.' And I guess I'm that guy."
Read more:
15 Percent of Americans Are Now on Food Stamps
Today we discovered the unemployment rate fell a paltry tenth of a percent in July, putting us at 9.1 percent, and experts weighed in on the cratering stock market. It just might be "recession 2.0." At least 45.8 million people are acutely aware of how bad things have been, because they're on food stamps.
According to a new report from the United States Department of Agriculture, almost 46 million Americans received Supplemental Nutrition Assistance Program (SNAP) benefits in May. That's a record high, not to mention a 12 percent jump from last year at this time and a 34 percent jump from 2009. For context, this means that if you add up the residents of all 10 of America's most populous cities, you'd still need about 22 million people to get the number we've now got on SNAP.
To qualify for food stamps, a person's income can't exceed $14,088 a year, which is 130 percent of the national poverty level. That's a lot of very poor people. Unfortunately, it's easy to forget they exist when our politicians choose to take the focus off of the poor and put it instead on a game of high-stakes chicken.
via:
Why Did Japan Surrender?
by Gareth Cook
For nearly seven decades, the American public has accepted one version of the events that led to Japan’s surrender. By the middle of 1945, the war in Europe was over, and it was clear that the Japanese could hold no reasonable hope of victory. After years of grueling battle, fighting island to island across the Pacific, Japan’s Navy and Air Force were all but destroyed. The production of materiel was faltering, completely overmatched by American industry, and the Japanese people were starving. A full-scale invasion of Japan itself would mean hundreds of thousands of dead GIs, and, still, the Japanese leadership refused to surrender.
But in early August 66 years ago, America unveiled a terrifying new weapon, dropping atomic bombs on Hiroshima and Nagasaki. In a matter of days, the Japanese submitted, bringing the fighting, finally, to a close.
On Aug. 6, the United States marks the anniversary of the Hiroshima bombing and its mixed legacy. The leader of our democracy purposefully executed civilians on a mass scale. Yet the bombing also ended the deadliest conflict in human history.
In recent years, however, a new interpretation of events has emerged. Tsuyoshi Hasegawa - a highly respected historian at the University of California, Santa Barbara - has marshaled compelling evidence that it was the Soviet entry into the Pacific conflict, not Hiroshima and Nagasaki, that forced Japan’s surrender. His interpretation could force a new accounting of the moral meaning of the atomic attack. It also raises provocative questions about nuclear deterrence, a foundation stone of military strategy in the postwar period. And it suggests that we could be headed towards an utterly different understanding of how, and why, the Second World War came to its conclusion.
“Hasegawa has changed my mind,” says Richard Rhodes, the Pulitzer Prize-winning author of “The Making of the Atomic Bomb.” “The Japanese decision to surrender was not driven by the two bombings.”
President Truman’s decision to go nuclear has long been a source of controversy. Many, of course, have argued that attacking civilians can never be justified. Then, in the 1960s, a “revisionist school” of historians suggested that Japan was in fact close to surrendering before Hiroshima - that the bombing was not necessary, and that Truman gave the go-ahead primarily to intimidate the Soviet Union with our new power.
Hasegawa - who was born in Japan and has taught in the United States since 1990, and who reads English, Japanese, and Russian - rejects both the traditional and revisionist positions. According to his close examination of the evidence, Japan was not poised to surrender before Hiroshima, as the revisionists argued, nor was it ready to give in immediately after the atomic bomb, as traditionalists have always seen it. Instead, it took the Soviet declaration of war on Japan, several days after Hiroshima, to bring the capitulation.
Read more:
image credit:
Group Wants New Bank to Finance Infrastructure
[ed. Good planning anticipates where money will go when it drains out of particularly vulnerable/volatile sectors. This idea might still come around.]
by Michael Cooper, NY Times
March 15, 2011
Amid growing concerns that the nation’s infrastructure is deteriorating, a group of Democrats, Republicans, and labor and business leaders called Tuesday for the creation of a national infrastructure bank to help finance the construction of things like roads, bridges, water systems and power grids.
The proposal — sponsored by Senator John Kerry, Democrat of Massachusetts, and Senator Kay Bailey Hutchison, Republican of Texas — would establish an independent bank to provide loans and loan guarantees for projects of regional or national significance. The idea is to attract more infrastructure investment from the private sector: by creating an infrastructure bank with $10 billion now, they say, they could spur up to $640 billion worth of infrastructure spending over the next decade.
“We have a choice,” Mr. Kerry said at a news conference in Washington. “We can either build, and compete, and create jobs for our people, or we can fold up, and let everybody else win. I don’t think that’s America. I don’t believe anybody wants to do that.”
To underscore the need for better infrastructure, two frequent rivals were on hand at the news conference: Richard Trumka, the president of the A.F.L.-C.I.O., and Thomas J. Donohue, the president of the U.S. Chamber of Commerce, the main business lobby. With a nod to the strange-bedfellows experience of having a labor leader as an ally, Mr. Donohue said, “He and I are going to take our show on the road as the new ‘Odd Couple.’ ”
President Obama has called for establishing an infrastructure bank since his 2008 campaign. His budget calls for establishing one — and gives it the catchier name I-Bank — that would work somewhat differently: it would create a $30 billion bank that would invest in transportation projects alone, and that would provide grants as well as loans.
Read more:
Sunday, August 7, 2011
Tax Holiday
[ed. And the hits keep coming.]
From Bloomberg:
Cisco Systems Inc. has cut its income taxes by $7 billion since 2005 by booking roughly half its worldwide profits at a subsidiary at the foot of the Swiss Alps that employs about 100 people.
Now Cisco, the largest maker of networking equipment, wants to save even more -- by asking Congress to waive most federal taxes due when multinationals bring such offshore earnings home. Chief Executive Officer John T. Chambers has led the charge for the tax holiday, which would be the second since 2004. He says it would encourage companies to “repatriate” as much as $1 trillion held abroad, spur domestic investment and create jobs.
Cisco’s techniques cut the effective tax rate on its reported international income to about 5 percent since 2008 by moving profits from roughly $20 billion in annual global sales through the Netherlands, Switzerland and Bermuda, according to its records in four countries. The maneuvers, permitted by tax law, show how companies that use such strategies most aggressively would get the biggest benefit from the holiday, said Edward D. Kleinbard, a law professor at the University of Southern California in Los Angeles.
“Why should we reward firms for successfully gaming the tax system when we in turn are called on to make up the missing tax revenues?” said Kleinbard, a former corporate tax attorney at Cleary Gottlieb Steen & Hamilton LLP. “Much of these earnings overseas are reaped from an enormous shell game: Firms move their taxable income from the U.S. and other major economies -- where their customers and key employees are in reality located -- to tax havens.”
Companies including Google Inc., Apple Inc. and Pfizer Inc. are also pushing the proposed tax holiday, which would allow profits to return to the U.S. at a discounted 5.25 percent rate. Under current law, American companies can defer federal income taxes on most overseas earnings indefinitely. When they do return to the U.S., they’re taxed at the corporate rate of 35 percent -- with credits for foreign income taxes paid. Thus, companies paying little overseas face higher U.S. tax bills upon repatriation, and would get more benefit from the discount.
From Matt Taibbi:
The action revolves around a bill sponsored in May by Texas Republican Kevin Brady (and co-sponsored by Utah Democrat Jim Matheson) called the Freedom To Invest Act, which would “temporarily” lower the effective corporate tax rate to 5.25 percent for all profits being repatriated.
Essentially, this is a one-time tax holiday rewarding companies for systematically offshoring their profits since 2004 – the last time they did this “one-time” deal.
This is a company whose CEO, John Chambers, wrote an editorial last October in the Wall Street Journal predicting that the tax holiday would generate a trillion dollars in repatriated earnings, money that Chambers insisted would outdo even Barack Obama’s stimulus as a job-creation engine:
The amount of corporate cash that would come flooding into the country could be larger than the entire federal stimulus package, and it could be used for creating jobs, investing in research, building plants, purchasing equipment, and other uses. And yet: Chambers’s company, Cisco, would not commit to creating so much as a single job if the tax holiday is passed. As it is, the company has already committed to a wave of layoffs. When asked a question about Cisco's plans with regard to a potential tax holiday, the company’s spokesman, John Earnhardt, declined to answer. From the Bloomberg piece:
It’s unclear whether any jobs would come from Cisco, which announced plans in May to shed an unspecified number of workers. Earnhardt, the spokesman, declined to comment on hiring plans for the company, whose customers include Verizon Communications Inc. and AT&T Inc.