Tuesday, December 30, 2014
Are Some Diets “Mass Murder”?
[T]he Inuit, the Masai, and the Samburu people of Uganda all originally ate diets that were 60-80% fat and yet were not obese and did not have hypertension or heart disease.
The hypothesis that saturated fat is the main dietary cause of cardiovascular disease is strongly associated with one man, Ancel Benjamin Keys, a biologist at the University of Minnesota. […] Keys launched his “diet-heart hypothesis” at a meeting in New York in 1952, when the United States was at the peak of its epidemic of heart disease, with his study showing a close correlation between deaths from heart disease and proportion of fat in the diet in men in six countries (Japan, Italy, England and Wales, Australia, Canada, and the United States). Keys studied few men and did not have a reliable way of measuring diets, and in the case of the Japanese and Italians he studied them soon after the second world war, when there were food shortages. Keys could have gathered data from many more countries and people (women as well as men) and used more careful methods, but, suggests Teicholz, he found what he wanted to find. […]
At a World Health Organization meeting in 1955 Keys’s hypothesis was met with great criticism, but in response he designed the highly influential Seven Countries Study, which was published in 1970 and showed a strong correlation between saturated fat (Keys had moved on from fat to saturated fat) and deaths from heart disease. Keys did not select countries (such as France, Germany, or Switzerland) where the correlation did not seem so neat, and in Crete and Corfu he studied only nine men. […]
[T]he fat hypothesis led to a massive change in the US and subsequently international diet. One congressional staffer, Nick Mottern, wrote a report recommending that fat be reduced from 40% to 30% of energy intake, saturated fat capped at 10%, and carbohydrate increased to 55-60%. These recommendations went through to Dietary Guidelines for Americans, which were published for the first time in 1980. (Interestingly, a recommendation from Mottern that sugar be reduced disappeared along the way.)
It might be expected that the powerful US meat and dairy lobbies would oppose these guidelines, and they did, but they couldn’t counter the big food manufacturers such as General Foods, Quaker Oats, Heinz, the National Biscuit Company, and the Corn Products Refining Corporation, which were both more powerful and more subtle. In 1941 they set up the Nutrition Foundation, which formed links with scientists and funded conferences and research before there was public funding for nutrition research. […]
Saturated fats such as lard, butter, and suet, which are solid at room temperature, had for centuries been used for making biscuits, pastries, and much else, but when saturated fat became unacceptable a substitute had to be found. The substitute was trans fats, and since the 1980s these fats, which are not found naturally except in some ruminants, have been widely used and are now found throughout our bodies. There were doubts about trans fats from the very beginning, but Teicholz shows how the food companies were highly effective in countering any research that raised the risks of trans fats. […]
Another consequence of the fat hypothesis is that around the world diets have come to include much more carbohydrate, including sugar and high fructose corn syrup, which is cheap, extremely sweet, and “a calorie source but not a nutrient.” More and more scientists believe that it is the surfeit of refined carbohydrates that is driving the global pandemic of obesity, diabetes, and non-communicable diseases.
by Richard Smith, BMJ | Read more:
Beyond Meat
I dumped meat a few weeks ago, and it was not an easy breakup. Some of my most treasured moments have involved a deck, a beer, and a cheeseburger. But the more I learned, the more I understood that the relationship wasn’t good for either of us. A few things you should never do if you want to eat factory meat in unconflicted bliss: write a story on water scarcity in the American Southwest; Google “How much shit is in my hamburger?”; watch an undercover video of a slaughterhouse in action; and read the 2009 Worldwatch Institute report “Livestock and Climate Change.”
I did them all. And that was that. By then I knew that with every burger I consumed, I was helping to suck America’s rivers dry, munching on a fecal casserole seasoned liberally with E. coli, passively condoning an orgy of torture that would make Hannibal Lecter blanch, and accelerating global warming as surely as if I’d plowed my Hummer into a solar installation. We all needed to kick the meat habit, starting with me.
Yet previous attempts had collapsed in the face of time-sucking whole-food preparation and cardboard-scented tofu products. All the veggie burgers I knew of seemed to come in two flavors of unappealing: the brown-rice, high-carb, nap-inducing mush bomb, and the colon-wrecking gluten chew puck. Soylent? In your pasty dreams. If I couldn’t have meat, I needed something damn close. A high-performance, low-commitment protein recharge, good with Budweiser.
I took long, moody walks on the dirt roads near my Vermont house. I passed my neighbor’s farm. One of his beef cattle stepped up to the fence and gazed at me. My eyes traced his well-marbled flanks and meaty chest. I stared into those bottomless brown eyes. “I can’t quit you,” I whispered to him.
But I did. Not because my willpower suddenly rose beyond its default Lebowski setting, but because a box arrived at my door and made it easy.
Inside were four quarter-pound brown patties. I tossed one on the grill. It hit with a satisfying sizzle. Gobbets of lovely fat began to bubble out. A beefy smell filled the air. I browned a bun. Popped a pilsner. Mustard, ketchup, pickle, onions. I threw it all together with some chips on the side and took a bite. I chewed. I thought. I chewed some more. And then I began to get excited about the future.
It was called the Beast Burger, and it came from a Southern California company called Beyond Meat, located a few blocks from the ocean. At that point, the Beast was still a secret, known only by its code name: the Manhattan Beach Project. I’d had to beg Ethan Brown, the company’s 43-year-old CEO, to send me a sample.
And it was vegan. “More protein than beef,” Brown told me when I rang him up after tasting it. “More omegas than salmon. More calcium than milk. More antioxidants than blueberries. Plus muscle-recovery aids. It’s the ultimate performance burger.”
“How do you make it so meat-like?” I asked.
“It is meat,” he replied enigmatically. “Come on out. We’ll show you our steer.”
Beyond Meat HQ was a brick warehouse located a stone’s throw from Chevron’s massive El Segundo refinery, which hiccuped gray fumes into the clear California sky. “Old economy, new economy,” Brown said as we stepped inside. Two dozen wholesome millennials tapped away at laptops on temporary tables in the open space, which looked remarkably like a set that had been thrown together that morning for a movie about startups. Bikes and surfboards leaned in the corners. In the test kitchen, the Beyond Meat chef, Dave Anderson—former celebrity chef to the stars and cofounder of vegan-mayo company Hampton Creek—was frying experimental burgers made of beans, quinoa, and cryptic green things.
The “steer” was the only one with its own space. It glinted, steely and unfeeling, in the corner of the lab. It was a twin-screw extruder, the food-industry workhorse that churns out all the pastas and PowerBars of the world. Beyond Meat’s main extruders, as well as its 60 other employees, labor quietly in Missouri, producing the company’s current generation of meat substitutes, but this was the R&D steer. To make a Beast Burger, powdered pea protein, water, sunflower oil, and various nutrients and natural flavors go into a mixer at one end, are cooked and pressurized, get extruded out the back, and are then shaped into patties ready to be reheated on consumers’ grills.
“It’s about the dimensions of a large steer, right?” Brown said to me as we admired it. “And it does the same thing.” By which he meant that plant stuff goes in one end, gets pulled apart, and is then reassembled into fibrous bundles of protein. A steer does this to build muscle. The extruder in the Beyond Meat lab does it to make meat. Not meat-like substances, Brown will tell you. Meat. Meat from plants. Because what is meat but a tasty, toothy hunk of protein? Do we really need animals to assemble it for us, or have we reached a stage of enlightenment where we can build machines to do the dirty work for us?
Livestock, in fact, are horribly inefficient at making meat. Only about 3 percent of the plant matter that goes into a steer winds up as muscle. The rest gets burned for energy, ejected as methane, blown off as excess heat, shot out the back of the beast, or repurposed into non-meat-like things such as blood, bone, and brains. The process buries river systems in manure and requires an absurd amount of land. Roughly three-fifths of all farmland is used to grow beef, although it accounts for just 5 percent of our protein. But we love meat, and with the developing world lining up at the table and sharpening their steak knives, global protein consumption is expected to double by 2050.
by Rowan Jacobsen, Outside | Read more:
Image: Misha Gravenor
[ed. Style. I could really go for a coat like this.]
A deliciously soft and airy jacket. Mr Cesare Prandelli (Manager of Italy’s national side).
via:
The Tragedy of the American Military
In mid-September, while President Obama was fending off complaints that he should have done more, done less, or done something different about the overlapping crises in Iraq and Syria, he traveled to Central Command headquarters, at MacDill Air Force Base in Florida. There he addressed some of the men and women who would implement whatever the U.S. military strategy turned out to be.
The part of the speech intended to get coverage was Obama’s rationale for reengaging the United States in Iraq, more than a decade after it first invaded and following the long and painful effort to extricate itself. This was big enough news that many cable channels covered the speech live. I watched it on an overhead TV while I sat waiting for a flight at Chicago’s O’Hare airport. When Obama got to the section of his speech announcing whether he planned to commit U.S. troops in Iraq (at the time, he didn’t), I noticed that many people in the terminal shifted their attention briefly to the TV. As soon as that was over, they went back to their smartphones and their laptops and their Cinnabons as the president droned on.
Usually I would have stopped watching too, since so many aspects of public figures’ appearances before the troops have become so formulaic and routine. But I decided to see the whole show. Obama gave his still-not-quite-natural-sounding callouts to the different military services represented in the crowd. (“I know we’ve got some Air Force in the house!” and so on, receiving cheers rendered as “Hooyah!” and “Oorah!” in the official White House transcript.) He told members of the military that the nation was grateful for their nonstop deployments and for the unique losses and burdens placed on them through the past dozen years of open-ended war. He noted that they were often the face of American influence in the world, being dispatched to Liberia in 2014 to cope with the then-dawning Ebola epidemic as they had been sent to Indonesia 10 years earlier to rescue victims of the catastrophic tsunami there. He said that the “9/11 generation of heroes” represented the very best in its country, and that its members constituted a military that was not only superior to all current adversaries but no less than “the finest fighting force in the history of the world.”
If any of my fellow travelers at O’Hare were still listening to the speech, none of them showed any reaction to it. And why would they? This has become the way we assume the American military will be discussed by politicians and in the press: Overblown, limitless praise, absent the caveats or public skepticism we would apply to other American institutions, especially ones that run on taxpayer money. A somber moment to reflect on sacrifice. Then everyone except the few people in uniform getting on with their workaday concerns. (...)
This reverent but disengaged attitude toward the military—we love the troops, but we’d rather not think about them—has become so familiar that we assume it is the American norm. But it is not. When Dwight D. Eisenhower, as a five-star general and the supreme commander, led what may have in fact been the finest fighting force in the history of the world, he did not describe it in that puffed-up way. On the eve of the D-Day invasion, he warned his troops, “Your task will not be an easy one,” because “your enemy is well-trained, well-equipped, and battle-hardened.” As president, Eisenhower’s most famous statement about the military was his warning in his farewell address of what could happen if its political influence grew unchecked.
At the end of World War II, nearly 10 percent of the entire U.S. population was on active military duty—which meant most able-bodied men of a certain age (plus the small number of women allowed to serve). Through the decade after World War II, when so many American families had at least one member in uniform, political and journalistic references were admiring but not awestruck. Most Americans were familiar enough with the military to respect it while being sharply aware of its shortcomings, as they were with the school system, their religion, and other important and fallible institutions.
Now the American military is exotic territory to most of the American public. As a comparison: A handful of Americans live on farms, but there are many more of them than serve in all branches of the military. (Well over 4 million people live on the country’s 2.1 million farms. The U.S. military has about 1.4 million people on active duty and another 850,000 in the reserves.) The other 310 million–plus Americans “honor” their stalwart farmers, but generally don’t know them. So too with the military. Many more young Americans will study abroad this year than will enlist in the military—nearly 300,000 students overseas, versus well under 200,000 new recruits. As a country, America has been at war nonstop for the past 13 years. As a public, it has not. A total of about 2.5 million Americans, roughly three-quarters of 1 percent, served in Iraq or Afghanistan at any point in the post-9/11 years, many of them more than once.
The difference between the earlier America that knew its military and the modern America that gazes admiringly at its heroes shows up sharply in changes in popular and media culture. While World War II was under way, its best-known chroniclers were the Scripps Howard reporter Ernie Pyle, who described the daily braveries and travails of the troops (until he was killed near the war’s end by Japanese machine-gun fire on the island of Iejima), and the Stars and Stripes cartoonist Bill Mauldin, who mocked the obtuseness of generals and their distance from the foxhole realities faced by his wisecracking GI characters, Willie and Joe.
From Mister Roberts to South Pacific to Catch-22, from The Caine Mutiny to The Naked and the Dead to From Here to Eternity, American popular and high culture treated our last mass-mobilization war as an effort deserving deep respect and pride, but not above criticism and lampooning. The collective achievement of the military was heroic, but its members and leaders were still real people, with all the foibles of real life. A decade after that war ended, the most popular military-themed TV program was The Phil Silvers Show, about a con man in uniform named Sgt. Bilko. As Bilko, Phil Silvers was that stock American sitcom figure, the lovable blowhard—a role familiar from the time of Jackie Gleason in The Honeymooners to Homer Simpson in The Simpsons today. Gomer Pyle, USMC; Hogan’s Heroes; McHale’s Navy; and even the anachronistic frontier show F Troop were sitcoms whose settings were U.S. military units and whose villains—and schemers, and stooges, and occasional idealists—were people in uniform. American culture was sufficiently at ease with the military to make fun of it, a stance now hard to imagine outside the military itself. (...)
The most biting satirical novel to come from the Iraq-Afghanistan era, Billy Lynn’s Long Halftime Walk, by Ben Fountain, is a takedown of our empty modern “thank you for your service” rituals. It is the story of an Army squad that is badly shot up in Iraq; is brought back to be honored at halftime during a nationally televised Dallas Cowboys Thanksgiving Day game; while there, is slapped on the back and toasted by owner’s-box moguls and flirted with by cheerleaders, “passed around like everyone’s favorite bong,” as platoon member Billy Lynn thinks of it; and is then shipped right back to the front.
The people at the stadium feel good about what they’ve done to show their support for the troops. From the troops’ point of view, the spectacle looks different. “There’s something harsh in his fellow Americans, avid, ecstatic, a burning that comes of the deepest need,” the narrator says of Billy Lynn’s thoughts. “That’s his sense of it, they all need something from him, this pack of half-rich lawyers, dentists, soccer moms, and corporate VPs, they’re all gnashing for a piece of a barely grown grunt making $14,800 a year.” Fountain’s novel won the National Book Critics Circle Award for fiction in 2012, but it did not dent mainstream awareness enough to make anyone self-conscious about continuing the “salute to the heroes” gestures that do more for the civilian public’s self-esteem than for the troops’. As I listened to Obama that day in the airport, and remembered Ben Fountain’s book, and observed the hum of preoccupied America around me, I thought that the parts of the presidential speech few Americans were listening to were the ones historians might someday seize upon to explain the temper of our times.
by James Fallows, The Atlantic | Read more:
Image: David Goldman/AP
Monday, December 29, 2014
Pot Pie, Redefined?
Recreational marijuana is both illegal and controversial in most of the country, and its relationship to food does not rise much above a joke about brownies or a stoner chef’s late-night pork belly poutine.
But cooking with cannabis is emerging as a legitimate and very lucrative culinary pursuit.
In Colorado, which has issued more than 160 edible marijuana licenses, skilled line cooks are leaving respected restaurants to take more lucrative jobs infusing cannabis into food and drinks. In Washington, one of four states that allow recreational marijuana sales, a large cannabis bakery dedicated to affluent customers with good palates will soon open in Seattle.
Major New York publishing houses and noted cookbook authors are pondering marijuana projects, and chefs on both coasts and in food-forward countries like Denmark have been staging underground meals with modern twists like compressed watermelon, smoked cheese and marijuana-oil vinaigrette.
“It really won’t be long until it becomes part of haute cuisine and part of respectable culinary culture, instead of just an illegal doobie in the backyard,” said Ken Albala, director of the food studies program at the University of the Pacific in San Francisco.
Two problems, however, stand in the way: First, it’s hard to control how high people get when they eat marijuana. And second, it really doesn’t taste that good.
Still, what if chefs could develop a culinary canon around marijuana that tamed both its taste and mood-altering effects, and diners came to appreciate dishes with marijuana the way one appreciates good bourbon? Paired with delicious recipes and the pleasures of good company, cannabis cookery might open a new dimension in dining that echoes the evolutions in the wine and cocktail cultures.
“I am sure someone is going to grow some that is actually delicious and we’ll all learn about it,” said Ruth Reichl, the former editor of Gourmet magazine and a former New York Times restaurant critic. Who could have predicted that kale would be the trendiest green on the plate, or that people would line up for pear and blue cheese ice cream, she asked. (...)
Cooking with marijuana requires a scientist’s touch to draw out and control the cannabinoids like tetrahydrocannabinol, or THC, which alter one’s mood and physical sensations. To get a consistent, controllable effect, marijuana is best heated and combined with fats like butter, olive oil or cream. (...)
Twenty-three states and the District of Columbia have legalized medical marijuana sales. Only four states — Washington, Oregon, Alaska and Colorado — allow recreational sales. The people who sell edible marijuana often advise people who have not tried it before to start with 10 milligrams or less. Dosing is easier to control in batter-based dishes or chocolate, where the drug can be distributed more evenly. In savory applications, dosing is trickier. A cook might be able to make sure a tablespoon of lime-cilantro butter has 10 milligrams of THC, but will the guest eat exactly that amount?
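[ed. For a sense of the arithmetic behind that uncertainty, here is a minimal back-of-the-envelope sketch with hypothetical numbers; it assumes the THC is spread evenly through the infused butter, which is exactly the hard part in practice.]

```python
# Hypothetical dosing arithmetic, for illustration only.
# Assumes the THC is evenly distributed through the infused butter.

def thc_per_portion(batch_thc_mg, batch_tbsp, tbsp_eaten):
    """Estimate the milligrams of THC in the butter a guest actually eats."""
    return batch_thc_mg / batch_tbsp * tbsp_eaten

# 100 mg of THC infused into 10 tablespoons of lime-cilantro butter
# works out to 10 mg per tablespoon on paper, but the dose a guest gets
# depends entirely on how much they spread on.
for eaten in (0.5, 1.0, 2.0):
    print(f"{eaten} tbsp eaten -> about {thc_per_portion(100, 10, eaten):.0f} mg THC")
```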
by Kim Severson, NY Times | Read more:
Image: Matthew Staver for The New York Times
The Fall of the Creative Class
[ed. For more background on Richard Florida's work see: Questioning the Cult of the Creative Class. Also, if you're interested: the response and counter-response to this essay.]
In the late 1990s, my wife and I got in a U-Haul, hit I-90 and headed west for a few days until we came to Portland, Oregon. We had no jobs, no apartment, and no notion other than getting out of Minnesota.
We chose Portland mainly because it was cheaper than the other places we’d liked on a month-long road trip through the West (San Francisco, Seattle, Missoula), because it had a great book store we both fell in love with, and because I had a cousin who lived there in the northeast part of the city, which was somewhat less trendy back then. (Our first night, police found a body in the park across the street.) The plan was to stay a year, then try the other coast, then who knows? We were young! But we loved it and stayed for nearly five years. Then, when we started thinking of breeding, like salmon, we decided to swim back to the pool in which we were bred.
For a variety of not-very-well-thought-out reasons, this brought us to Madison, Wisconsin. It wasn’t too far from our families. It had a stellar reputation. And for the Midwest, it possessed what might pass for cachet. It was liberal and open minded. It was a college town. It had coffee shops and bike shops. Besides, it had been deemed a “Creative Class” stronghold by Richard Florida, the prophet of prosperous cool. We had no way of knowing how wrong he was about Madison…and about everything.
Florida’s idea was a nice one: Young, innovative people move to places that are open and hip and tolerant. They, in turn, generate economic innovation. I loved this idea because, as a freelance writer, it made me important. I was poor, but somehow I made everyone else rich! It seemed to make perfect sense. Madison, by that reasoning, should have been clamoring to have me, since I was one of the mystical bearers of prosperity. (...)
For some reason, these and most other relationships never quite blossomed the way we’d hoped, the way they had in all the other places we’d lived. For a time, my wife had a soulless job with a boss who sat behind her, staring at the back of her head. I found work in a dusty tomb of a bookstore, doing data entry with coworkers who complained about their neurological disorders, or who told me about the magical creatures they saw on their way home, and who kept websites depicting themselves as minotaurs.
I’m not sure what exactly I expected, but within a year or two it was clear that something wasn’t right. If Madison was such a Creative Class hotbed overflowing with independent, post-industrial workers like myself, we should have fit in. Yet our presence didn’t seem to matter to anyone, creatively or otherwise. And anyway, Madison’s economy was humming along with unemployment around four percent, while back in fun, creative Portland, it was more than twice that, at eight and a half percent. This was not how the world according to Florida was supposed to work. I started to wonder if I’d misread him. Around town I encountered a few other transplants who also found themselves scratching their heads over what the fuss had been about. Within a couple years, most of them would be gone. (...)
Jamie Peck is a geography professor who has been one of the foremost critics of Richard Florida’s Creative Class theory. He now teaches at the University of British Columbia in Vancouver, but at the time Florida’s book was published in 2002, he was also living in Madison. “The reason I wrote about this,” Peck told me on the phone, “is because Madison’s mayor started to embrace it. I lived on the east side of town, probably as near to this lifestyle as possible, and it was bullshit that this was actually what was driving Madison’s economy. What was driving Madison was public sector spending through the university, not the dynamic Florida was describing.”
In his initial critique, Peck said The Rise of the Creative Class was filled with “self-indulgent forms of amateur microsociology and crass celebrations of hipster embourgeoisement.” That’s another way of saying that Florida was just describing the “hipsterization” of wealthy cities and concluding that this was what was causing those cities to be wealthy. As some critics have pointed out, that’s a little like saying that the high number of hot dog vendors in New York City is what’s causing the presence of so many investment bankers. So if you want banking, just sell hot dogs. “You can manipulate your arguments about correlation when things happen in the same place,” says Peck.
What was missing, however, was any actual proof that the presence of artists, gays and lesbians or immigrants was causing economic growth, rather than economic growth causing the presence of artists, gays and lesbians or immigrants. Some more recent work has tried to get to the bottom of these questions, and the findings don’t bode well for Florida’s theory. In a four-year, $6 million study of thirteen cities across Europe called “Accommodating Creative Knowledge,” published in 2011, researchers found one of Florida’s central ideas—the migration of creative workers to places that are tolerant, open and diverse—was simply not happening.
“They move to places where they can find jobs,” wrote author Sako Musterd, “and if they cannot find a job there, the only reason to move is for study or for personal social network reasons, such as the presence of friends, family, partners, or because they return to the place where they have been born or have grown up.” But even if they had been pouring into places because of “soft” factors like coffee shops and art galleries, according to Stefan Krätke, author of a 2010 German study, it probably wouldn’t have made any difference, economically. Krätke broke Florida’s Creative Class (which includes accountants, realtors, bankers and politicians) into five separate groups and found that only the “scientifically and technologically creative” workers had an impact on regional GDP. Krätke wrote “that Florida’s conception does not match the state of findings of regional innovation research and that his way of relating talent and technology might be regarded as a remarkable exercise in simplification.”
Perhaps one of the most damning studies was in some ways the simplest. In 2009 Michele Hoyman and Chris Faricy published a study using Florida’s own data from 1990 to 2004, in which they tried to find a link between the presence of the creative class workers and any kind of economic growth. “The results were pretty striking,” said Faricy, who now teaches political science at Washington State University. “The measurement of the creative class that Florida uses in his book does not correlate with any known measure of economic growth and development. Basically, we were able to show that the emperor has no clothes.” Their study also questioned whether the migration of the creative class was happening. “Florida said that creative class presence—bohemians, gays, artists—will draw what we used to call yuppies in,” says Hoyman. “We did not find that.”
by Frank Bures, thirty two Magazine | Read more:
Image: Will Dinski
Kazuo Ishiguro, The Art of Fiction No. 196
[ed. One of my favorite authors has a new book coming out in March: The Buried Giant.]
The man who wrote The Remains of the Day in the pitch-perfect voice of an English butler is himself very polite. After greeting me at the door of his home in London’s Golders Green, he immediately offered to make me tea, though to judge from his lack of assurance over the choice in his cupboard he is not a regular four P.M. Assam drinker. When I arrived for our second visit, the tea things were already laid out in the informal den. He patiently began recounting the details of his life, always with an amused tolerance for his younger self, especially the guitar-playing hippie who wrote his college essays using disembodied phrases separated by full stops. “This was encouraged by professors,” he recalled. “Apart from one very conservative lecturer from Africa. But he was very polite. He would say, Mr. Ishiguro, there is a problem about your style. If you reproduced this on the examination, I would have to give you a less-than-satisfactory grade.”
Kazuo Ishiguro was born in Nagasaki in 1954 and moved with his family to the small town of Guildford, in southern England, when he was five. He didn’t return to Japan for twenty-nine years. (His Japanese, he says, is “awful.”) At twenty-seven he published his first novel, A Pale View of Hills (1982), set largely in Nagasaki, to near unanimous praise. His second novel, An Artist of the Floating World (1986), won Britain’s prestigious Whitbread award. And his third, The Remains of the Day (1989), sealed his international fame. It sold more than a million copies in English, won the Booker Prize, and was made into a Merchant Ivory movie starring Anthony Hopkins, with a screenplay by Ruth Prawer Jhabvala. (An earlier script by Harold Pinter, Ishiguro recalls, featured “a lot of game being chopped up on kitchen boards.”) Ishiguro was named an Officer of the Order of the British Empire and, for a while, his portrait hung at 10 Downing Street. Defying consecration, he surprised readers with his next novel, The Unconsoled (1995), more than five hundred pages of what appeared to be stream-of-consciousness. Some baffled critics savaged it; James Wood wrote that “it invents its own category of badness.” But others came passionately to its defense, including Anita Brookner, who overcame her initial doubts to call it “almost certainly a masterpiece.” The author of two more acclaimed novels—When We Were Orphans (2000) and Never Let Me Go (2005)—Ishiguro has also written screenplays and teleplays, and he composes lyrics, most recently for the jazz chanteuse Stacey Kent. Their collaborative CD, Breakfast on the Morning Tram, was a best-selling jazz album in France.
In the pleasant white stucco house where Ishiguro lives with his sixteen-year-old daughter, Naomi, and his wife, Lorna, a former social worker, there are three gleaming electric guitars and a state-of-the-art stereo system. The small office upstairs where Ishiguro writes is custom designed in floor-to-ceiling blond wood with rows of color-coded binders neatly stacked in cubbyholes. Copies of his novels in Polish, Italian, Malaysian, and other languages line one wall. (...)
INTERVIEWER
You had success with your fiction right from the start—but was there any writing from your youth that never got published?
KAZUO ISHIGURO
After university, when I was working with homeless people in west London, I wrote a half-hour radio play and sent it to the BBC. It was rejected but I got an encouraging response. It was kind of in bad taste, but it’s the first piece of juvenilia I wouldn’t mind other people seeing. It was called “Potatoes and Lovers.” When I submitted the manuscript, I spelled potatoes incorrectly, so it said potatos. It was about two young people who work in a fish-and-chips café. They are both severely cross-eyed, and they fall in love with each other, but they never acknowledge the fact that they’re cross-eyed. It’s the unspoken thing between them. At the end of the story they decide not to marry, after the narrator has a strange dream where he sees a family coming toward him on the seaside pier. The parents are cross-eyed, the children are cross-eyed, the dog is cross-eyed, and he says, All right, we’re not going to marry.
INTERVIEWER
What possessed you to write that story?
by Susannah Hunnewell, Paris Review | Read more:
Image: Matt Carr/Getty Images
The Foreign Spell
It’s fashionable in some circles to talk of Otherness as a burden to be borne, and there will always be some who feel threatened by—and correspondingly hostile to—anyone who looks and sounds different from themselves. But in my experience, foreignness can as often be an asset. The outsider enjoys a kind of diplomatic immunity in many places, and if he seems witless or alien to some, he will seem glamorous and exotic to as many others. In open societies like California, someone with Indian features such as mine is a target of positive discrimination, as strangers ascribe to me yogic powers or Vedic wisdom that couldn’t be further from my background (or my interest).
Besides, the very notion of the foreign has been shifting in our age of constant movement, with more than fifty million refugees; every other Torontonian you meet today is what used to be called a foreigner, and the number of people living in lands they were not born to will surpass 300 million in the next generation. Soon there’ll be more foreigners on earth than there are Americans. Foreignness is a planetary condition, and even when you walk through your hometown—whether that’s New York or London or Sydney—half the people around you are speaking in languages and dealing in traditions different from your own. (...)
Growing up, I soon saw that I was ill-equipped for many things by my multi-continental upbringing—I would never enjoy settling down in any one place, and I wouldn’t vote anywhere for my first half-century on earth—but I saw, too, that I had been granted a kind of magic broomstick that few humans before me had ever enjoyed. By the age of nine, flying alone over the North Pole six times a year—between my parents’ home in California and my schools in England—I realized that only one generation before, when my parents had gone to college in Britain, they had had to travel for weeks by boat, sometimes around the stormy Cape of Good Hope. When they bid goodbye to their loved ones—think of V. S. Naipaul hearing of his father’s death while in England, but unable to return to Trinidad—they could not be sure they’d ever see them again.
At seventeen, I was lucky enough to spend the summer in India, the autumn in England, the winter in California, and the spring bumping by bus from Tijuana down to Bolivia—and then up the west coast of South America. I wasn’t rich, but the door to the world was swinging open for those of us ready to live rough and call ourselves foreigners for life. If my native India, the England of my childhood, and the America of my official residence were foreign, why not spend time in Yemen and on Easter Island?
In retrospect, it seems inevitable that I would move, in early adulthood, to what still, after twenty-seven years of residence, remains the most foreign country I know, Japan. However long I live here, even if I speak the language fluently, I will always be a gaikokujin, an “outsider person,” whom customs officials strip-search and children stare at as they might a yeti. I’m reminded of this on a daily basis. Even the dogs I pass on my morning walks around the neighborhood bark and growl every time they catch wind of this butter-reeking alien.
Japan remains itself by maintaining an unbreachable divide between those who belong to the group and those who don’t. This has, of course, left the country behind in an ever more porous world of multiple homes, and is a source of understandable frustration among, say, those Koreans who have lived in the country for generations but were—until relatively recently—obliged to be fingerprinted every year and denied Japanese passports. Yet for a lifelong visitor, the clarity of its divisions is welcome; in free-and-easy California, I always feel as accepted as everyone else, but that doesn’t make me feel any more Californian. Besides, I know that Japan can work as smoothly as it does only by having everyone sing their specific parts from the same score, creating a single choral body. The system that keeps me out produces the efficiency and harmony that draws me in.
I cherish foreignness, personally and internationally, and feel short-shrifted when United Airlines, like so many multinationals today, assures me in a slogan, “The word foreign is losing its meaning”; CNN, for decades, didn’t even use the word, in deference to what it hoped would be a global audience. Big companies have an investment in telling themselves—and us—that all the world’s a single market. Yet all the taco shacks and Ayurvedic doctors and tai chi teachers in the world don’t make the depths of other cultures any more accessible to us. “Read The Sheltering Sky,” I want to tell my neighbors in California as they talk about that adorable urchin they met in the souk in Marrakesh. Next time you’re in Jamaica—or Sri Lanka or Cambodia—think of Forster’s Marabar Caves as much as of the postcard sights that leave you pleasantly consoled. Part of the power of travel is that you stand a good chance of being hollowed out by it. The lucky come back home complaining about crooked rug merchants and dishonest taxi drivers; the unlucky never come home at all.
by Pico Iyer, Lapham's Quarterly | Read more:
Image: Islands, by Brad Kunkle, 2012
Going Aboard
When Herman Melville was twenty-one, he embarked on the whaleship Acushnet, out of New Bedford. We all know what that led to. This past summer, Mystic Seaport finished their five-year, 7.5-million-dollar restoration of the 1841 whaleship Charles W. Morgan, the sister ship to the Acushnet. The Morgan is in many ways identical to Melville’s fictional Pequod, save that sperm whale jawbone tiller and a few other sinister touches. Mystic Seaport celebrated the completion by sailing the Morgan around New England for a couple months. I went aboard for a night and a day, intent on following in Ishmael’s footsteps, hoping to breathe a little life into my idea of the distant, literary ship.
by Ben Shattuck, Paris Review | Read more:
Image: Ben Shattuck
2014: The Year When Activist Documentaries Hit the Breaking Point
If I were making a documentary about the uniformity that has infested modern documentaries, it would go something like this: Open with a sequence detailing the extent of the problem, flashing on examples of its reach, cutting in quick, declarative sound bites, scored with music of steadily mounting tension that climaxes just as the title is revealed. Over the next 90-120 minutes, I would lay out the problem in greater detail, primarily via copious interviews with experts on the subject, their data points illustrated via scores of snazzily animated infographics. Along the way, I would introduce the viewer to a handful of Regular Folk affected by the issue at hand, and show how their daily lives have become a struggle (or an inspiration). But lest I send the viewer staggering from the theater bereft of hope, I’d conclude by explaining, in the simplest terms possible, exactly how to solve the problem. And then, over the end credits, I would tell you, the viewer, what you can do to help — beginning with a visit to my documentary’s official website.
What you would learn from this film is that too many of today’s documentaries have become feature-length versions of TV newsmagazine segments, each a 60 Minutes piece stretched out to two hours, two pounds of sugar in a five-pound bag. And perhaps this viewer became more aware of it in 2014 because, early in the year, I saw a film that was like a case study in what’s wrong with this approach: Fed Up, a position-paper doc on the obesity epidemic. It’s got the thesis-paragraph pre-title opening, the animated graphics (complete with cutesy, nonstop sound effects), the closing-credit instructions. And then, as if its TV-news style isn’t obvious enough, it’s even got the comically commonplace “headless fat people walking down the streets” B-roll and narration by, no kidding, Katie Couric.
Fed Up plays like something made to burn off time on MSNBC some Saturday afternoon between reruns of Caught On Camera and Lock-Up, but nope: I saw it at the beginning of 2014 because it was playing at the Sundance Film Festival. It received a simultaneous theatrical and VOD release in May; last month, Indiewire reported that its robust earnings in both have made it one of the year’s most successful documentaries.
Look, this could just be a matter of pet peeves and personal preferences, and of trends that have merely made themselves apparent to someone whose vocation requires consumption of more documentaries than the average moviegoer. But this formula, and the style that goes hand in hand with it, is infecting more and more nonfiction films, lending an air of troubling sameness to activist docs like Ivory Tower (on the financial crisis of higher education) and Citizen Koch (on the massive casualties of the Citizens United decision). But it’s been in the air for some time, with earlier films like Food Inc., Bully, The Invisible War, Waiting for “Superman,” and the granddaddy of the movement, Davis Guggenheim’s Oscar-winning An Inconvenient Truth — a film, lest we forget, about a PowerPoint presentation. And it doesn’t stop there; even a profile movie like Nas: Time Is Illmatic has a big, state-the-premise pre-title sequence, which plays, in most of these films, like the teaser before the first commercial break.
The formulaic construction of these documentaries — as set in stone as the meet-cute/hate/love progression of rom-coms or the rise/addiction/fall/comeback framework of the music biopic — is particularly galling because it’s shackling a form where even fewer rules should apply. The ubiquity (over the past decade and a half) of low-cost, low-profile, high-quality video cameras and user-friendly, dirt-cheap non-linear editing technology has revolutionized independent film in general, allowing young filmmakers opportunities to create professional-looking product even directors of the previous generation could only dream of. (...)
It’s easy to arrive at that point with these diverse subjects, the logic goes, but a more straightforward, news-doc approach is required for aggressive, activist documentaries with points to make and moviegoers to educate — and the commonness of that thinking is perhaps why so many critics have gone nuts for CITIZENFOUR, Laura Poitras’ account of Edward Snowden’s leak of NSA documents detailing surveillance programs around the world. That’s a giant topic, but the surprise of the picture is how intimate and personal it is, primarily due to the filmmaker’s place within the story: she was the contact point for Snowden, hooked in to his actions via encrypted messages, in the room with the whistleblower as he walked through the documents with Glenn Greenwald.
As a result, much of the film is spent in Snowden’s Hong Kong hotel, Poitras’ camera capturing those explanations and strategy sessions, a procedural detailing logistics, conferences, and conversations. There are no expert talking heads to provide (unnecessary, I would argue) context; there are no jazzy charts and graphs to explain it all to the (presumably) slower folks in the audience. The only such images come in a quick-cut montage of illustrations within the leaked documents, and they’re solely that — illustrations. The most powerful and informative graphics in the film are the mesmerizing images of encrypted messages from Snowden to Poitras, which fill the screen with impenetrable numbers, letters, and symbols, before clearing away to reveal the truth underneath, a powerful metaphor for Snowden’s actions (and the film itself).
by Jason Bailey, Flavorwire | Read more:
Image: Fed Up
Sunday, December 28, 2014
The Capitalist Nightmare at the Heart of Breaking Bad
Back in October, you could have gone to Toys ’R’ Us and picked up the perfect present for the Breaking Bad fan in your family. Fifty bucks (all right, let’s assume you’re in Albuquerque) would buy you “Heisenberg (Walter White)” complete with a dinky little handgun clutched in his mitt; his sidekick Jesse, in an orange hazmat suit, was yours for $40. But then a Florida mom (it’s always a mom; it’s often in Florida) objected, and got a petition going, needless to say. “While the show may be compelling viewing for adults, its violent content and celebration of the drug trade make this collection unsuitable to be sold alongside Barbie dolls and Disney characters,” she wrote.
It’s worth noting, perhaps, that if Barbie’s proportions had their equivalent in an adult female, that woman would have room for only half a liver and a few inches of intestine; her tiny feet and top-heavy frame would oblige her to walk on all fours. A great role model? I’m not so sure. (And Disney is not always entirely benign. My mother was five when Snow White came out; I’m not sure she ever really recovered from her encounter with its Evil Queen.)
“I’m so mad, I am burning my Florida Mom action figure in protest,” Bryan Cranston tweeted when the storm broke. Cranston went from advertising haemorrhoid cream (“Remember – oxygen action is special with Preparation H”) to playing Hal, the goofy dad on Malcolm in the Middle, to full-on superstardom as Breaking Bad became a talisman of modern popular culture. The show began broadcasting in the US in January 2008 and ran for five seasons. Stephen King called it the best television show in 15 years; it was showered with dozens of awards; Cranston took the Emmy for Outstanding Lead Actor in a Drama Series for four out of the show’s five seasons.
So get over it, Florida Mom. Breaking Bad was, and remains (at least for the time being), the apogee of water-cooler culture: serious but seriously cool, and the nerd’s revenge, to boot. Walter White – for those of you who are yet to have your lives devoured by the show – is a high-school chemistry teacher: you might think that’s a respected, reasonably well-compensated profession, but in 21st-century America he’s got to have a second job at a carwash just to make ends meet. When he is diagnosed, as the series begins, with terminal lung cancer, his terror (his existential, male, white-collar terror) focuses not on the prospect of his own death, but on how he will provide for his family. A chance encounter with a former student, Jesse Pinkman – a classic dropout nogoodnik with a sideline in drug sales – sets his unlikely career as a drug baron in motion. As his alter ego “Heisenberg” (the name a knowing echo of that icon of uncertainty), Walter has chemical skills that enable him to cook some of the purest methamphetamine the world has ever known . . . and the rest, as they say, is history. (...)
But here’s the thing: Florida Mom is on to something, even if she’s wrong about exactly what it is she was objecting to. “A celebration of the drug trade”? I don’t think so. But why did Breaking Bad get under my skin? Why does it still bother me, all these months later? And why do I think, in an era of exceptional television, that it’s the best thing I have ever seen? (...)
Not everyone wants to use words such as “metadiegetic” when talking about telly, and the close analysis of everything from the show’s vision of landscape to its use of music, or “the epistemological implications of the use of a criminal pseudonym”, may be exhausting for some. Yet Pierson’s essay, which opens the volume, draws attention to one of the chief reasons the show has such a terrible and enduring resonance.
Breaking Bad is, he argues, a demonstration of the true consequences of neoliberal ideology: the idea that “the market should be the organising agent for nearly all social, political, economic and personal decisions”. Under neoliberal criminology, the criminal is not a product of psychological disorder, but “a rational-economic actor who contemplates and calculates the risks and the rewards of his actions”. And there is Walter White in a nutshell.
by Erica Wagner, New Statesman | Read more:
Image: Ralph Steadman
Saturday, December 27, 2014
Bob Dylan
[ed. NSA, CIA, VA, Health Insurance, Hospitals, Facebook, Citicorp, College Tuition, Transportation, Public Utilities, Climate Change, Publishing, Big Pharma, Net Neutrality, Minimum Wage, Guantanamo, Afghanistan, Iraq, Torture, K-Street, Wall Street, Congress (and much, much more). Happy New Year.]